For me the best benefit of nushell is not the easier syntax, but the static type checks. It catches most typos before running the script, which is a godsend when the script is slow and/or has destructive operations.
The graph is scary, but I think it's conflating two things:
1. Newbies asking badly written basic questions, barely allowed to stay, and answered by hungry users trying to farm points, never to be re-read again. This used to be the vast majority of SO questions by number.
2. Experienced users facing a novel problem, asking questions that will be the primary search result for years to come.
It's #1 that's being cannibalized by LLMs, and I think that's good for users. But #2 really has nowhere else to go; ChatGPT won't help you when all you have is a confusing error message caused by the confluence of three different bugs between your code, the platform, and an outdated dependency. And LLMs will need training data for the new tools and bugs that are coming out.
The newbies vastly outnumber the experienced people (in every discipline), and have more to ask per-capita, and are worse at asking it. Category 2 is much smaller. The volume of Stack Overflow was never going to be sustainable and was not reasonably reflective of its goals.
We are talking about a site that has accumulated more than three times as many questions as there are articles on Wikipedia. Even though the scope is "programming languages" as compared to "literally anything that is notable".
I’m going to argue the opposite. LLMs are fantastic at answering well posed questions. They are like chess machines evaluating a tonne of scenarios. But they aren’t that good at guessing what you actually have on your mind. So if you are a novice, you have to be very careful about framing your questions. Sometimes, it’s just easier to ask a human to point you in the right direction. But SO, despite being human, has always been awful to novices.
On the other hand, if you are experienced, it’s really not that difficult to get what you need from an LLM, and unlike on SO, you don’t need to worry about offending an overly sensitive user or a moderator. LLMs never get angry at you, they never complain about incorrect formatting or being too lax in your wording. They have infinite patience for you. This is why SO is destined to be reduced to a database of well structured questions and answers that are gradually going to become more and more irrelevant as time goes by.
Yes, LLMs are great at answering questions, but providing reasonable answers is another matter.
Can you really not think of anything that hasn't already been asked and isn't in any documentation anywhere? I can only assume you haven't been doing this very long. Fairly recently I was confronted with a Postgres problem; LLMs had no idea, it wasn't in the manual, and it needed someone with years of experience. I took it to IRC and someone actually helped me figure it out.
Until "AI" gets to the point it has run software for years and gained experience, or it can figure out everything just by reading the source code of something like Postgres, it won't be useful for stuff that hasn't been asked before.
And that is exactly why so many people gripe about SO being "toxic". They didn't present a well posed question. They thought it was for private tutoring, or socializing like on reddit.
All I can say to these is: Ma'am, this is a Wendy's.
So here's an example of SO toxicity. I asked on Meta: "Am I allowed to delete my comments?" question body: "The guidelines say comments are ephemeral and can be deleted at any time, but I was banned for a month for deleting my comments. Is deleting comments allowed?"
For asking this question (after the month ban expired) I was banned from Meta for a year. Would you like to explain how that's not toxic?
Maybe if you haven't used the site since 2020 you're vastly underestimating the degree to which it has enshittified since then?
I think you overestimate #2 by a long shot. Most problems only appear novel because they're couched in a specialized field, framework, or terminology; otherwise novelty would take years of incremental work. Some genuinely are novel, but those are more appropriately put in a recreational journal or BB.
The reason the "experts" hung around SO was to smooth over the little things. This created a somewhat virtuous cycle, but it required too much moderation and, as others have pointed out, was ultimately unsustainable even before the release of LLMs.
The first actually insightful comment under the OP. I agree with all of it.
If SO manages to stay online, it'll still be there for #2 people to present their problems. Don't underestimate the number of bored people still scouring the site for puzzles to solve.
SE Inc, the company, are trying all kinds of things to revitalize the site, in the service of ad revenue. They even introduced types of questions that are entirely exempt from moderation. Those posts feel literally like reddit or any other forum. Threaded discussions, no negative scores, ...
If SE Inc decides to call it quits and shut the place down and freeze it into a dataset, or sell it to some SEO company, that would be a loss.
I'm also surprised by people's defense of VLC. It's a nice project, especially for its time, but the bugs I regularly encountered were numerous and in seemingly common use cases.
My main problem with VLC is that when I accidentally hit the wrong key on my keyboard (usually in the dark, because that's how I watch movies), it is quite often almost impossible to get the settings back to what they were without restarting the player.
Honestly, I'm absolutely not. I still vividly remember the times when we had to install codecs separately. Every month something new and incompatible popped up on the radar, which sent all users on a wild hunt for that exact codec and instructions on how to tweak it so the funny clip could play. Oh dear, I'm not looking back fondly on the era of all those versions of DivX, Xvid, Matroska, mkv, avi, wma, mp4, mp3, vba, ogg and everything else, all those cryptic incantations to summon a non-broken video frame on modern hardware, for everyone but the few people in the anime community who drove that insanity onto everyone else.
I'll die on the VLC hill, despite all its flaws, because it gave an escape route to everyone else: if you don't give a F about "pixel-perfect, lowest-overhead, most progressive compression that is still a scientific experiment but we want to encode a clip with it" and simply want to view a video, VLC was the way. Nothing else did so much good for users who simply want to watch a video and not be ecstatic about its perfect color profile, lossless sound, and smallest possible size.
All other players lost the plot when they tried to steer users into some madness pit of millions of tweaks and configurations that somehow excites the authors of those players and some cohort of people who encode videos that way.
I install VLC every single time, because it is a blunt answer to all video playing problems, even if it's imperfect. And I've walked away from every single player that tries to sell me something better while asking me to configure 100 parameters I have no idea about.
Hope this answers the question why VLC won.
Love these "lessons learned" posts, keep them coming!
My only feedback is about the Quickstart of passkeybot, "feed this example into a good LLM with these instructions". I understand the idea, but I was a bit shocked that the first time I see this sort of instruction is for an auth framework.
Counterpoint, I have definitely taken them into consideration when designing my backup script. It's the reason why I hash my files before transferring, after transferring, and at periodic intervals.
And if you're designing a Hardware Security Module, as another example, I hope that you've taken at least rowhammer into consideration.
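The hash-before-and-after approach described above can be sketched in a few lines. This is a minimal illustration, not code from any actual backup tool; the function names, the chunk size, and the choice of SHA-256 are my own assumptions:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MiB chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(src: Path, dst: Path) -> bool:
    """After a transfer, confirm the destination matches the source bit-for-bit."""
    return sha256_of(src) == sha256_of(dst)
```

Running `sha256_of` again at periodic intervals and comparing against the stored digest is what catches silent corruption (bit rot) long after the transfer succeeded.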
Hot take, but I blame 90% of these problems on the internet's overreliance on funding from advertisement. It all flows from there:
1. To display ads is to sacrifice user experience. This is a slippery slope and both developers and users get used to it, which affects even ad-free services. Things like "yes/maybe later" become normal.
2. Ads are only displayed when the user visits the service directly. Therefore we cannot have open APIs, federation, alternative clients, or user customization.
3. The advertisement infrastructure is expensive. This has to be paid with more ads. Like the rocket equation, this eventually plateaus, but by then the software is bloated and cannot be funded traditionally anymore, so any dips are fatal. Constant churn.
4. Well targeted ads are marginally more profitable, therefore all user information is valuable. Cue an entire era of tracking, privacy violations, and psychological manipulation.
5. Advertisers don't want to be associated with anything remotely controversial, so the circle of acceptable content shrinks every year. The fringes become worse and worse.
6. The system only works with a very large number of users. It becomes socially expected to participate, and at the same time, no customer support is provided when things go wrong.
I'm fairly sure ads are our generation's asbestos or leaded gasoline, and would be disappointed if they are not largely banned in the future.
Heh came here to post the same comment, I was somewhat shocked by the alleged power of the almighty dollar ... but it's just a typo of course. Phew. :)
According to Google's built-in exchange rate calculator it should say $235m.
I think the author makes a hard distinction between consumer products and infrastructure/engineering products. The Shelby Cobra has a funny name, but its engine is the memorably named V8. The Hoover Dam is a dam, and the Golden Gate Bridge is a bridge.
We can argue about namespace pollution and overly long names, but I think there's a point there. When I look at other professions' jargon, I never have the impression they are catching Pokemon like programmers do.
Except for the ones with Latin and Greek names, but old mistakes die hard and they're not bragging about their intelligibility.
Yeah, V8 is the shape of the engine: 8 cylinders in two rows offset at an acute angle (i.e. V-shaped). Likewise a V6 has the same number of cylinders as an inline 6 but performs very differently. There's a handful of different engine shapes; I'm fond of the rotary engines used in early aircraft. Traditionally, the name of an engine was just the year, the manufacturer, and the displacement (like 1965 Ford 352). You often leave off the year, and even the manufacturer if it's not required by context.
The Ford 351 is a bit special because there were two different engines made by Ford in the same time period with the same displacement, so they tacked on the city they were manufactured in (Windsor or Cleveland).
That would be a laudable goal, but I feel like it's contradicted by the text:
> Even on a low-quality image, GPT‑5.2 identifies the main regions and places boxes that roughly match the true locations of each component
I would not consider it to have "identified the main regions" or to have "roughly matched the true locations" when ~1/3 of the boxes have incorrect labels. The remark "even on a low-quality image" is not helping either.
Edit: credit where credit is due, the recently-added disclaimer is nice:
> Both models make clear mistakes, but GPT‑5.2 shows better comprehension of the image.
Yeah, what it's calling RAM slots is the CMOS battery. What it's calling the PCIE slot is the interior side of the DB-9 connector. RAM slots and PCIE slots are not even visible in the image.
It just overlaid a typical ATX pattern across the motherboard-like parts of the image, even if that's not really what the image is showing. I don't think it's worthwhile to consider this a 'local recognition failure', as if it just happened to mistake CMOS for RAM slots.
Imagine it as a markdown response:
# Why this is an ATX layout motherboard (Honest assessment, straight to the point, *NO* hallucinations)
1. *RAM* as you can clearly see, the RAM slots are to the right of the CPU, so it's obviously ATX
2. *PCIE* the clearly visible PCIE slots are right there at the bottom of the image, so this definitely cannot be anything except an ATX motherboard
3. ... etc more stuff that is supported only by force of preconception
--
It's just meta signaling gone off the rails. Something in their post-training pipeline is obviously vulnerable given how absolutely saturated with it their model outputs are.
Troubling that the behavior generalizes to image labeling, but not particularly surprising. This has been a visible problem at least since o1, and the lack of change tells me they do not have a real solution.
Eh, I'm no shill but their marketing copy isn't exactly the New York Times. They're given some license to respond to critical feedback in a manner that makes the statements more accurate without the same expectations of being objective journalism of record.
Look, just give the Qwen3-vl models a go. I've found them to be fantastic at this kind of thing so far, and what I'm seeing on display here is laughable in comparison. A closed-source, closed-weight paid model with worse performance than open? C'mon. OpenAI really is a bubble.