In order to write "f(n) is O(g(n))", you already had to smuggle in the idea that you were talking about the worst case before O(*) ever entered the picture. What is f? Typically it is the max of "steps taken" over all inputs of size n, i.e. the worst case.
The O(g(n)) part says f is asymptotically bounded above by g(n), up to some constant factor.
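Spelled out, that's the standard textbook definition (nothing beyond what the sentence above already says):

    f(n) \in O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 : \forall n \ge n_0,\; f(n) \le c \cdot g(n)

The c is the "constant factor", and n_0 is where the "asymptotically" kicks in: the bound only has to hold from n_0 onward.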
If you're just trying to explain what f = O(g) means, then sure, it makes sense to talk in terms of functions and asymptotic behavior, the way you would in an intro calculus course. Then, separately, computer scientists are interested in f(n) = max_{|S|=n} steps_required(program, S), where S is some input/problem type with some notion of size.
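To make that max concrete, here is a minimal sketch in Python. The toy linear search stands in for `program`, and the names (`linear_search_steps`, `worst_case_steps`) are mine, purely for illustration: it enumerates every binary input of size n, counts each one's steps, and keeps the maximum.

    from itertools import product

    def linear_search_steps(haystack, needle):
        # steps_required(program, S): count comparisons on this one input
        steps = 0
        for item in haystack:
            steps += 1
            if item == needle:
                break
        return steps

    def worst_case_steps(n, alphabet=(0, 1), needle=1):
        # f(n) = max over all inputs S of size n of steps_required(program, S)
        return max(linear_search_steps(s, needle)
                   for s in product(alphabet, repeat=n))

    for n in range(1, 8):
        print(n, worst_case_steps(n))

The worst case here is the all-zeros input, where the needle is never found, so the printed f(n) is exactly n.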
The fact that f looks innocuous but is secretly this complicated thing is perhaps the stumbling block for people. The actual O(*) part is just "one line stays at or below the other once you go far enough out" (with a slight tweak that lets you pre-stretch the bounding line first).
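Here's a quick numeric check of that picture, with a hypothetical step count f(n) = 3n + 5 (the concrete numbers are mine, chosen just to illustrate): pre-stretch the bounding line g(n) = n by c = 4, and f stays at or below it from n_0 = 5 onward.

    def f(n):
        # hypothetical measured step count: pretend our worst case came out as 3n + 5
        return 3 * n + 5

    def g(n):
        # the candidate bounding line
        return n

    c, n0 = 4, 5  # "pre-stretch" g by c; "far enough out" means n >= n0

    # a finite check, not a proof, but it shows what c and n0 buy you
    assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
    print("3n + 5 <= 4n for every n >= 5, so f is O(n)")

A loop over finitely many n obviously isn't a proof, but it makes the roles of c (the pre-stretch) and n_0 (how far out you have to go) tangible.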
I'm aiming this at junior software engineers without math or computer science backgrounds. My own computer science background is very old at this point, and I never did have the math, so I'm not surprised I've made mistakes that are being pointed out by folks who live and breathe this stuff. I want to impart enough useful (if not strictly correct) information to make a difference in their day jobs.