I spent years optimizing physics simulations in different domains (weather, fluid dynamics, all sorts of stuff). Even there you have to carefully pick the parts you want to optimize to get the best outcome for the resources you invest in optimization. It's completely infeasible to make everything crazy fast with a blanket approach.
You're speaking directly past your parent's point. I've spent years optimizing physics simulations, more abstract discrete math stuff, etc. myself. There, especially there, it's usually pretty easy to identify what the hot loop should be, by eye. It's even easier with a profiler once the project gets big. In scientific computing, it's very common to find that 99% of the time is spent on a scant few lines of code. Unless somebody bungs up the I/O and you end up spending 99% of the time there instead, but I digress.
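To make the "scant few lines" point concrete, here's a toy sketch (hypothetical code, not from any real simulation) of how a profiler surfaces this: a driver loop calls one numeric kernel, and the profile output shows essentially all the runtime concentrated there.

```python
import cProfile
import io
import pstats

def inner_kernel(state):
    # The "hot loop": a dense numeric reduction where nearly
    # all of the simulated program's time is spent.
    return sum(x * x for x in state)

def simulate(steps=200, n=5000):
    # Driver: cheap bookkeeping around repeated kernel calls,
    # mimicking the structure of a typical simulation time loop.
    state = list(range(n))
    total = 0.0
    for _ in range(steps):
        total += inner_kernel(state)
    return total

profiler = cProfile.Profile()
profiler.enable()
simulate()
profiler.disable()

# Dump the top entries by cumulative time; the kernel dominates.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print(report)
```

Even in this trivial case, the report makes the hot spot obvious without any guesswork, which is the whole argument: in code with this shape, you don't need to optimize everything, just the one kernel the profiler points at.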
But in a whole OS, with a browser, a network stack, dozens of applications, and so on, it's pretty common to find hundreds or thousands of things which are all more or less equally sucking performance. So on one hand, it's a much harder problem. On the other hand, front-end people have this attitude that performance doesn't matter because hardware is fast enough and it's better to write in the most abstract language possible because changing diapers makes your hands smell gross.