Not to mention, it doesn't actually deliver the promised productivity at the promised lower cost. The most enthusiastic proponents are middle management, not the actual doers.
It's an expensive route to mediocrity, which doesn't offer an edge in a market where everyone is using the same snake oil.
They got way over their skis on this one. There's a difference between "impressive" tech and "operational" tech, and that difference usually boils down to prioritizing engineering rigor over marketing.
Unreliable mediocrity, because you simply can never be sure when the damned thing lies/hallucinates unless you double-check everything.
So now you're wrangling an "AI" system while still doing most of the work you would have had to do anyway. ...And when you don't, it can get really embarrassing.
Not the first time, surely not the last. The problem is that so much money is tied up in this thing, and the moment the music stops the bag holders are going to be utterly doomed.
This study considers caffeine consumption outside of coffee, so an alternative caffeine source might be worth looking into. That was my takeaway, at least. I also drink espresso, for the caffeine and the noticeable ease on my gut compared to drip or pressed coffee.
Remember when Steve said 'The computer for the rest of us'?
I suppose it isn't a surprise. Are researchers/generally geeky people meant to be able to relate to the average person's day-to-day beyond their sphere? Lmao.
You can't produce stuff for people you don't understand. 'Understand' being the key term.
Ha! To think that we're finally back to asking ourselves why we're using generative models for categorization and extraction. I wonder how much money companies have collectively wasted whittling away at square pegs.
They amortized the creation of corpora with trainable features, not the myriad methods that can categorize text at the success rates required by high-stakes industries.
Yeah, LLMs are a solution to the cold-start problem, and they're easy to integrate. If you know what you're doing in terms of evals, post-processing, and so on, you can get excellent performance out of them. Plus, they can do semantic classification and reasoning that you won't get out of some bespoke traditional DS/ML model.
Not to mention that the objectively bad practice of piping a curl call to bash is nowhere close to "playing doom via curl". It's almost as if they simply prompted "play doom with curl". In my experience, almost any overly-ambitious prompt ends similarly.
One form is to `curl` the bash code, which just runs a loop over keypresses, each of which makes an additional curl request. That is playing DOOM via curl: each curl is a movement.
The next form, however, creates a single curl request that stays open and needs no bash or additional calls per key; input goes over the open request. (You do need to set the terminal to raw mode for this.)
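A rough sketch of the first form's shape. Everything here is made up for illustration: a directory of canned "frames" stands in for the game server so the loop runs as-is; against the real thing each curl would hit an HTTP endpoint instead of a `file://` URL, and keys would come from the terminal in raw mode:

```shell
# Stand-in "server": pre-rendered frames, one file per keypress result.
mkdir -p /tmp/doom-sim
printf 'frame after w\n' > /tmp/doom-sim/w
printf 'frame after a\n' > /tmp/doom-sim/a

# The per-keypress loop: one curl request per movement.
for key in w a; do                     # stand-in for reading keys in raw mode
  curl -s "file:///tmp/doom-sim/$key"  # each curl advances the game one step
done
```

The second form collapses this into a single long-lived request, with keys streamed up and frames streamed back on the same connection.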
HOWEVER, to push back against myself: it's relatively easy to do when other people did the heavy lifting: Doomgeneric, terminals, curl, etc. I also couldn't use websockets, because support doesn't come with curl by default. And you can only move forward when pressing a button, since that's what refreshes the frame.
So yeah, not the most impressive thing by far. But it is curl used to play the game and that's... something?
You are correct, on all counts. (See my response to the other commenter.) I was more harsh than I should have been. You're correct that it's not the most advanced of DOOM implementations, but I'd hate to hinder DOOM-ifying things with pedantry. (So long as one isn't simply piping a DOOM binary to bash, of course.)
But that's not what it does; the bash option just saves you from doing the stty setup and reset, I think? You can type it all out by hand too, as the readme explains.
I'll throw "proof-of-useful-work" into the ring. Redirecting at least a portion of BTC's verification work onto computations people already pay for could go a long way.
Not suggesting it would be easy or that the entire network would be able to agree on what tasks to use, just that it's a theoretical option.
It's also not simply a matter of agreeing on what tasks to use. The task has to be computationally difficult to perform, but computationally trivial to verify. It must also be verifiable with only the context of the blockchain (no "oracle" that can make claims about real-world events).
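That asymmetry is the heart of hash-based proof-of-work. A toy sketch in plain shell with `sha256sum` (the data and difficulty prefix are made up for illustration): finding the nonce takes many hashes, while checking a claimed nonce takes exactly one.

```shell
# Toy proof-of-work: hard to produce, trivial to verify.
data="block-data"   # stand-in for block contents
target="00"         # difficulty: hash must start with this prefix

# Work: brute-force nonces until the hash meets the target (many hashes).
nonce=0
while :; do
  h=$(printf '%s:%s' "$data" "$nonce" | sha256sum | cut -d' ' -f1)
  case "$h" in "$target"*) break ;; esac
  nonce=$((nonce + 1))
done
echo "nonce=$nonce"

# Verify: anyone can recompute a single hash and compare the prefix.
check=$(printf '%s:%s' "$data" "$nonce" | sha256sum | cut -d' ' -f1)
case "$check" in "$target"*) echo "valid" ;; *) echo "invalid" ;; esac
```

The catch for "useful" work is finding a task with this same shape: the verifier can't be asked to redo the computation, and nothing outside the chain can be trusted to vouch for the result.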
Primecoin exist(ed?) and used the search for chains of prime numbers (Cunningham chains) as its proof-of-work. That was 13 years ago, it's still the only example of "proof-of-useful-work" I know of, and it would not be difficult to find sour voices challenging its usefulness.
While they don't have that many in the wild, the number of implementations it lists is still more than I expected. There's also the Monero 51% takeover, which was purportedly done using a PoUW technique to garner more hashing power.
I'm reminded of Samsung's "AI moon" debacle and how divided people were over it. At the end of the day, any photos with so many unknown variables wouldn't suffice for scientific purposes.
Digital information may be our first post-scarcity resource. It's interesting, and sad, to see so many attempt to fit it within scarcity-based economic models.