If some engineer optimizes something in the Google search stack that makes it, on average, just 0.01% faster (not 1%, but one-one-hundredth of a percent), then they have paid their salary for the entire year. Almost in perpetuity. No matter what level they are.
Very small gains multiplied out over extremely large amounts of compute over large amounts of time add up big.
And that's why Google can spend so much money on fairly small scoped teams.
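The claim is easy to sanity-check with back-of-the-envelope numbers. These figures are hypothetical, purely for illustration (neither Google's actual compute spend nor its actual compensation):

```python
# All numbers below are made up for illustration, not Google's real figures.
annual_compute_spend = 10_000_000_000  # assume $10B/year running the search stack
speedup = 0.0001                       # a 0.01% efficiency gain
engineer_cost = 1_000_000              # assume $1M fully-loaded annual cost

annual_savings = annual_compute_spend * speedup
print(f"saved per year: ${annual_savings:,.0f}")
print("pays for the engineer:", annual_savings >= engineer_cost)
```

Even with the spend estimated an order of magnitude too high, the point survives: the savings recur every year the optimization stays in production, while the salary was paid once.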
Both Google and Microsoft are bigger than Meta, with more products.
But both Google and Microsoft also massively overhired around the same timeframe as Meta, and are still digging themselves out of the mess of their own making. And making their teams pay for such stupidity.
Quick browsing at adafruit.com (or any other similar vendor) reveals plenty of displays that are 128, 240, and 320 pixels wide. At 6 pixels of width per character, that's 21, 40, and 53 characters per line. Seems quite useful to me.
There are also several 32x32 LED panels, which one could imagine needing some text.
Also, this kind of thing is just interesting, regardless of the usefulness.
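The character math above is just integer division, assuming the 6-pixel-per-character width from the comment (a common 5x7 font plus 1 pixel of spacing):

```python
CHAR_WIDTH = 6  # assumed pixels per character, including spacing

for display_width in (128, 240, 320):
    chars = display_width // CHAR_WIDTH
    print(f"{display_width}px wide -> {chars} characters per line")
```

A taller or proportional font would change the constant, but the orders of magnitude hold.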
It could be set up so that the AI can "fire" them: they no longer work at the store, and their wages no longer count against the experimental establishment's costs, but they still get paid to do something else, or to do nothing at all.
I doubt the experiment is set up that way, but that would be an ethical way to do it.
You have to fix them at some point, but not in the middle of doing other things. Right now. With no possible way to make progress elsewhere while deferring this decision.
One of the things that makes jj worth trying out is simply the fact that it is different than git, and having exposure to more than one way of doing things is a good thing.
Even if you don't adopt it (and I didn't), it's easy to fall into thinking "this way is the only way", and seeing how systems other than your preferred one handle workflows and issues is very useful for perspective.
That doesn't mean you should try everything regardless (we all only have so much time), but part of being a good engineer is understanding the options and tradeoffs, even of well-loved and totally functional workflows.
I think the sibling's point needs to be made more sharply: this could've gone somewhere good, "I evaluated it and found the gain wasn't worth the cost of changing", but instead it went to "the gain from a change is insignificant 99% of the time, so it's not worth understanding it".
Students in the 2010s were building twitter clones as part of third-year college courses.
And somehow twitter survived and thrived and didn't really get viable competitors until forces external to the code and product itself motivated other investment. And even then it still rolls on, challenged these days, but not by the ease with which a "clone" can be made.
A modern pharmaceutical manufacturing plant costs two billion dollars just to build, and that doesn't include developing a drug to actually manufacture there, or a distribution network to sell what you make inside it.