Yes, one more precise way to phrase this is that the expected magnitude of the dot product between two random unit vectors tends towards 0 as the dimension tends to infinity (the expected value itself is exactly 0 by symmetry; the typical magnitude scales as 1/sqrt(dimension)). But the probability of drawing two truly orthogonal vectors at random (over the reals) is zero - the dot product will be very small but nonzero.
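A quick numpy simulation makes the scaling visible (illustrative sketch, nothing more - the constants and sample sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    n_pairs = 1000
    for d in (10, 100, 1000, 10000):
        # Normalized Gaussian samples are uniform on the unit sphere.
        u = rng.standard_normal((n_pairs, d))
        v = rng.standard_normal((n_pairs, d))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        dots = np.einsum('ij,ij->i', u, v)
        # Mean |dot| tracks 1/sqrt(d); none of the dots are exactly 0.
        print(d, np.mean(np.abs(dots)), 1 / np.sqrt(d), np.any(dots == 0.0))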
That said, for sparse high-dimensional datasets - where the vectors aren't drawn from a continuous distribution over the whole space - the probability of being truly orthogonal can be quite high: e.g. if half your vectors have support totally disjoint from the other half's, then a randomly chosen pair is exactly orthogonal with probability at least 50-50.
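To see exact zeros appear, here's a tiny sketch (again illustrative): two vectors with disjoint support have a dot product of exactly 0.0, not merely a small one.

    import numpy as np

    rng = np.random.default_rng(1)
    d = 1000
    # One vector lives on the first half of the coordinates,
    # the other on the second half, so their supports are disjoint.
    a = np.concatenate([rng.random(d // 2), np.zeros(d // 2)])
    b = np.concatenate([np.zeros(d // 2), rng.random(d // 2)])
    print(a @ b)  # exactly 0.0 - true orthogonality, with probability 1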
Note that ML/LLM practitioners rely on "approximate orthogonality" anyway.
That link doesn't contradict the person you're replying to. Actual orthogonality still has a probability of zero, just as the equator of a sphere has zero surface area, because it's a one-dimensional line (even if it is in some sense "bigger" than the Arctic Circle).
If you're picking a random point on the (idealized) Earth, the probability of it being exactly on the equator is zero, unless you're willing to add some tolerance for "close enough" in order to give the line some width. Whether that tolerance is +/- one degree of arc, or one mile, or one inch, or one angstrom, you're technically including vectors that aren't perfectly orthogonal to the pole as "successes". That idea does generalize into higher dimensions; the only part that doesn't is the shape of the rest of the sphere (the spinning-top image is actually quite handy).
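That tolerance picture is easy to check numerically (illustrative sketch): for uniform points on the 2-sphere, landing exactly on the equator essentially never happens, while the probability of landing within a band of half-width tol shrinks in proportion to tol.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 1_000_000
    # Uniform points on the 2-sphere via normalized Gaussians.
    p = rng.standard_normal((n, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    z = p[:, 2]  # dot product with the pole (0, 0, 1)

    print(np.mean(z == 0.0))  # exactly on the equator: essentially never
    for tol in (1e-1, 1e-2, 1e-3):
        # Fraction inside the band scales linearly with the tolerance.
        print(tol, np.mean(np.abs(z) < tol))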