> The key thing is to develop an intuition for questions it can usefully answer vs questions that are at a level of detail where the lossiness matters.
If that intuition is not explained, it's no different than "works for me, you're using it wrong", which is definitely an argument, but not the best one.
I mean, I'm confident lots of people can use LLMs well, but I don't understand how the analogy is supposed to teach anything about it.
The core analogy is intended as a warning: this is a thing that looks like an encyclopedia but really isn't.
There's not much I can do about the intuition thing. I've been trying to figure out ways to teach people to use LLMs for over three years now, but it genuinely comes down to them being utterly weird and unintuitive pieces of technology that pretend to be easy to use when they aren't.
The only way to get truly competent with them is to put in the time deliberately experimenting to figure out what does and doesn't work.
> Unleash Claude’s raw power directly in your terminal.
> Search million-line codebases instantly.
> Turn hours-long workflows into a single command.
> Your tools. Your workflow. Your codebase, evolving at thought speed.
--
> Watch as Claude Code tackles an unfamiliar Next.js project, builds new functionality, creates tests, and fixes what’s broken
No mention of "weird and unintuitive". It sounds like a dream. It sounds like it would accept a prompt for a niche Raspberry Pi project skeleton and just work with it. Can you really blame a beginner for buying the product and being disappointed?
You should be angry at the LLM companies. While you're trying to teach people, they're working to generate tons of unsatisfied customers. And they come here, and you answer them for free? It doesn't make much sense to me.
Describing the commercial offerings as "weird and unintuitive" is a weak criticism palatable to corporate comms teams. It suggests a fault in the user ("you're holding it wrong") rather than deficiencies inherent to LLM architecture. No amount of marketing can fix the lethal trifecta or the hallucination problem, can it?
> Generate dependency graphs, identify dead code, and prioritize refactoring based on code complexity metrics and business impact.
> Transform legacy codebases systematically while maintaining business continuity.
> Claude Code preserves critical business logic while modernizing to current frameworks.
> Claude Code can seamlessly create unit tests for refactored code, identify missing test coverage, and help write regression tests.
> Identify and patch vulnerabilities while maintaining regulatory compliance patterns embedded in legacy systems.
> Create modern documentation from undocumented legacy code, capturing institutional knowledge before it's lost.
I don't particularly care how these companies market their software. What I care about is figuring out what these things can actually do and what they're genuinely useful for, then helping other people use them as productively as possible given their inherent flaws.