Wouldn’t this mean that we should focus on writing more efficient code than before?
Especially in the startup space I saw companies building software with the hypothesis: “users need the latest device for our product, and devices get faster anyway, so we don’t need to optimize our code. Instead we ship features at max speed, skip optimizations, and wait until our users have upgraded to newer devices over the coming 2-4 years”.
It’s offloading the cost to the customer. It is way cheaper to develop in some famous interpreted language than to create a set of robust compiled binaries. As long as customers can pay up for newer hardware, we will keep seeing clunky UIs that can barely handle 20 list items of variable size without noticeable lag on a modern computer.
I don’t even blame the companies for doing this. The benefit-to-cost ratio of using, idk, C++ for everything is just too bad.
But the inefficiencies you sometimes see today can’t even be explained by any bad choice of language. You can create more than fast enough programs with interpreted, garbage-collected languages. But then of course you need to know at least a little bit about data structures, and to not fire off dozens of REST calls every time someone taps the screen.
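To illustrate the "know a little bit about data structures" point: here's a minimal sketch (with made-up order/customer data) showing that the same interpreted language is plenty fast once you index with a dict instead of scanning a list on every lookup.

```python
import time

# Hypothetical data: 10,000 orders referencing 1,000 customers.
orders = [{"customer_id": i % 1000, "total": i} for i in range(10_000)]
customers = [{"id": i, "name": f"c{i}"} for i in range(1000)]

# Naive O(N*M): linearly scan the whole customer list for every order.
start = time.perf_counter()
slow = [next(c for c in customers if c["id"] == o["customer_id"]) for o in orders]
slow_t = time.perf_counter() - start

# O(N + M): build a dict index once, then do constant-time lookups.
start = time.perf_counter()
by_id = {c["id"]: c for c in customers}
fast = [by_id[o["customer_id"]] for o in orders]
fast_t = time.perf_counter() - start

print(slow == fast, fast_t < slow_t)
```

Same language, same runtime, same result; only the data structure changed, and the dict version is orders of magnitude faster.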
Most code is so bad that "optimization" isn't the issue, at least in no sense that a language/runtime would help with.
It's just designed with O(N^3) architectures, "servers on servers", "callbacks on callbacks", etc., because, seemingly, new features are built by very junior developers who glue these idiot-proof frameworks together, trading needed know-how for polynomial behaviour.
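The "O(N^3)" point can be made concrete with a toy sketch (hypothetical data): the same answer computed with three nested scans versus a single set intersection.

```python
# Hypothetical example: values present in all three collections.
a = list(range(100))
b = list(range(50, 150))
c = list(range(75, 175))

# Accidental O(N^3): a nested scan over every combination of elements.
cubic = [x for x in a for y in b for z in c if x == y == z]

# Same result in roughly O(N) using set intersection.
linear = sorted(set(a) & set(b) & set(c))

print(cubic == linear)
```

At N=100 the nested version already does a million comparisons for what sets do in a few hundred operations; no compiled language required to fix it, just awareness of the cost.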