
This is exactly the sentiment I have been trying to articulate myself.

The ONLY reason we are here today is that OpenAI, and Anthropic by extension, took it upon themselves to launch chatbots trained on whatever data sources they could get in a short amount of time, to quickly productize their investments. Their first versions didn't include any references to the source material and just acted as if they knew everything.

When CoPilot was built as a better auto-complete engine, trained on open-source projects, it was an interesting idea, because it was doing what people already did: searching GitHub for examples of a solution, or for something to nudge them in the right direction. The biggest difference, however, was that using another project's code was stable, because it came with a LICENSE.md that you then agreed to and paid forward (i.e. "I used code from this project").

CoPilot initially would just inject snippets for you without you knowing the source. It was only later that they walked that back; now, if you do use CoPilot, it shows you the most likely source of the code it suggested. This is exactly the direction all of the platforms seem to be headed.

It's not easy to walk back a free-for-all system (i.e. Napster), but I'm optimistic that over time it will become a fairer, pay-to-access system.


Do you live in a liminal hall of doorways? LOL


I had a European friend introduce me to indoor drying racks, and since then, anything I plan to keep long term, I hang dry as well. I've found my clothes last longer and look nicer. The only thing I've found that doesn't work well is towels.


I got a Foxydry (Italy) wall-mounted rack a few years back; best €100 I spent that year. The bottom rack folds up flush to the wall, and the top rack raises nearly to the ceiling. Towels dry fine spread over an extra bar or three to allow for better air circulation.


Wouldn't C++ and Rust eventually call down into those same libc functions?

I guess for your example, qsort(), it is optional, and you can choose another implementation of it. Though I tend to find that both standard libraries just delegate those lowest-level calls to the POSIX API.


Rust doesn't call into libc for sort; it has its own implementation in the standard library.
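
To make that concrete, here's a rough sketch of my own (not from the parent): the sort comes from std itself, and reaching libc's qsort() would instead be an explicit, unsafe FFI call through something like the libc crate.

    fn main() {
        let mut v = vec![3, 1, 2];
        // slice::sort is implemented in pure Rust inside the standard
        // library; no call into libc happens here.
        v.sort();
        assert_eq!(v, [1, 2, 3]);

        // Using libc's qsort() instead would be an explicit opt-in via FFI
        // (e.g. the libc crate, wrapped in unsafe), not something the
        // language does behind your back.
    }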


Obviously. How about more complex things like multi-threading APIs though? Can the Rust compiler determine that the subject program doesn't need TLS and produce a binary that doesn't set it up at all, for example?


Optimising out TLS isn't going to be a good example of compiler capability. Whether another thread exists is a global property of a process, and beyond that the system that process operates in.

The compiler isn't going to know for instance that an LD_PRELOAD variable won't be set that would create a thread.


> Whether another thread exists is a global property of a process, and beyond that the system that process operates in.

TLS is a language feature. The fact that another thread exists doesn't mean it has to use the same facilities as the main program.
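
For reference, this is the feature in question at the language level (a small example of my own, not the parent's):

    use std::cell::Cell;
    use std::thread;

    thread_local! {
        // Every thread that touches COUNTER gets its own independent copy.
        static COUNTER: Cell<u32> = Cell::new(0);
    }

    fn main() {
        COUNTER.with(|c| c.set(c.get() + 1));
        thread::spawn(|| {
            // Fresh storage in the spawned thread, so it still reads 0 here.
            COUNTER.with(|c| assert_eq!(c.get(), 0));
        })
        .join()
        .unwrap();
        COUNTER.with(|c| assert_eq!(c.get(), 1));
    }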

> The compiler isn't going to know for instance that an LD_PRELOAD variable won't be set that would create a thread.

Say the program is not dynamically linked. Still no?


> Say the program is not dynamically linked. Still no?

Whether the program has dynamic dependencies does not dictate whether a thread can be created; that's a property of the OS. Windows has CreateRemoteThread, and I'd be shocked if similar capabilities didn't exist elsewhere.

If I mark something as thread-local, I want it to be thread-local.


I mean, it's not that obvious: your parent asked about it directly, and you could easily imagine it calling into libc for this.

I believe the answer to your question is "yes", because no-std binaries can be mere bytes in size, but I suspect that more complex programs will almost always have some dependency somewhere (possibly even the standard library itself, but I don't know offhand) that uses TLS somewhere in it.
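
For reference, the "mere bytes" case looks roughly like this: a minimal no_std/no_main sketch of mine (Rust 2021, Linux, linked with something like -C link-arg=-nostartfiles), where nothing sets up TLS or any other runtime state before your code runs.

    #![no_std]
    #![no_main]

    use core::panic::PanicInfo;

    // Required because there is no std to supply a panic runtime.
    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }

    // We provide the entry point ourselves; no TLS blocks, no allocator,
    // no startup code runs before this.
    #[no_mangle]
    pub extern "C" fn _start() -> ! {
        loop {}
    }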


Many of the libc functions are bad APIs with traditionally bad implementations.


At that point the real question should be restated: does the LLVM IR that is generated from clang and rustc matter in a meaningful way?


I've also found that it keeps such a constrained context window (on large codebases) that it writes a second block of code for something that already had a solution in a different area of the same file.

Nothing I do seems to fix that in its initial code-writing steps. Only after it finishes, when I've asked it to go back and rewrite the changes, this time touching only 2 or 3 lines of code, does it magically (or finally) find the other implementation and reuse it.

It's freakin' incredible at tracing through code and figuring it out. I <3 Opus. However, it's still quite far from any kind of set-it-and-forget-it.


Aren't toll roads the norm? It was radical in the 1940s and 1950s to create public freeways.

Toll roads do have real consequences, and they do raise the cost of everything that needs to travel over them. They also mean that things that could exist on one side of a bridge or tolled section will relocate to other areas to avoid the tolls.

Not against them, but I also don't like them. I personally think it's a failure of a state's road management when the cost has to be spread so disproportionately.


>Aren't toll roads the norm?

No. I won't say they're rare but they're not especially common in the US.


Do you perhaps live in Florida or Oklahoma? They are quite rare in CA, the southwestern states in general, and the upper midwest.


> "For now, drivers pay to access just 6,300 miles of America’s 160,000 or so miles of highway"


It's a shame that my company tied itself to claude-code way too fast. It was like a single week last summer of, "oh what's everyone's favorite? claude? okay, let's go!"

OpenCode has been truly innovating in this space, is actually open source, and would naturally fit into custom corporate LLM proxies. Yet we've now built so many unruly wrappers and tools around claude-code's proprietary binary, just to sandbox it and use it with our proxy, that I fear it's too late to walk back.

Not sure how OpenCode can break through this barrier, but I'm an internal advocate for it. For hobby projects, it's definitely my go-to tool.


As of Dec 2025, Sonnet/Opus and GPTCodex are both trained for this, and most good agent tools (i.e. opencode, claude-code, codex) have prompts to fire off subagents during an exploration (use the word "explore"), so you should be able to Research without the extra steps of writing plans and resetting context. I'd save that expense unless you need some huge, multi-step, verifiable plan implemented.

The biggest gotcha I've found is that these LLMs love to assume that code is C/Python, just written in your favorite language of choice. Instead of considering that something should be encapsulated in an object to maintain state, it will write 5 functions, passing the state as parameters between each function. It will also consistently ignore most of the code around it, even when it could benefit from reading it to know what specifically could be reused. So you end up with copy-pasta code, and unstructured copy-pasta at best.
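
A hypothetical sketch of what I mean (names made up, Rust just as the example language):

    // What the model tends to produce: loose functions threading state around.
    fn open_report(path: &str) -> String {
        std::fs::read_to_string(path).unwrap_or_default()
    }

    fn line_count(report: &str) -> usize {
        report.lines().count()
    }

    fn print_summary(path: &str, report: &str, lines: usize) {
        println!("{path}: {lines} lines, {} bytes", report.len());
    }

    // What I'd rather see: the state encapsulated once.
    struct Report {
        path: String,
        body: String,
    }

    impl Report {
        fn open(path: &str) -> Self {
            Self {
                path: path.into(),
                body: std::fs::read_to_string(path).unwrap_or_default(),
            }
        }

        fn line_count(&self) -> usize {
            self.body.lines().count()
        }

        fn print_summary(&self) {
            println!("{}: {} lines, {} bytes",
                     self.path, self.line_count(), self.body.len());
        }
    }

    fn main() {
        // Both halves do the same thing; the second keeps the state together.
        let report = open_report("notes.txt");
        print_summary("notes.txt", &report, line_count(&report));

        Report::open("notes.txt").print_summary();
    }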

The other gotcha is that Claude usually ignores CLAUDE.md. So for me, I first prompt it to read that, and then prompt it to explore. Then, with those two in place, it usually does a good job following my request to fix something, add a new feature, or whatever, all within a single context. These recent agents do a much better job of throwing away useless context.

I do think the older models and agents get better results when writing things to a plan document, but I've noticed that recent Opus and Sonnet usually end up just writing the same code into the plan document anyway. That usually ends up confusing the model, because it can't connect the plan to the code around the changes as easily.


>Instead of considering that something should be encapsulated in an object to maintain state, it will write 5 functions, passing the state as parameters between each function.

Sounds very functional, testable, and clean. Sign me up.


I know this is tongue in cheek, but writing functional code in an object oriented language, or even worse just taking a giant procedural trail of tears and spreading it across a few files like a roomba through a pile of dog doo is ... well.. a code smell at best.

I have a user prompt saved, called "clean code", that makes a pass through the changes to remove unused code, DRY things up, and refactor - literally the high points of Uncle Bob's Clean Code. It works shockingly well at taking AI code and making it somewhat maintainable.


>I know this is tongue in cheek, but writing functional code in an object oriented language, or even worse just taking a giant procedural trail of tears and spreading it across a few files like a roomba through a pile of dog doo is ... well.. a code smell at best.

After forcing myself over the years to apply various OOP principles in multiple languages, I believe OOP has truly been the worst thing to happen to me personally as an engineer. Now I believe that what you're actually seeing is just an "aesthetics" issue, and moreover a purely learned aesthetic.


Does its output follow Uncle Bob's "no comments needed" principle?


Not so much tongue in cheek, but a little on the light side, sure.

I'd argue writing functional code in C++ (which is multi-paradigm anyway), or Java, or TypeScript is fine!


Care to share the prompt? Sounds useful!


Sure. Please improve it and come back around to let me know.

https://gist.github.com/prostko/5cf33aba05680b722017fdc0937f...


> As of Dec 2025, Sonnet/Opus and GPTCodex are both trained for this, and most good agent tools (i.e. opencode, claude-code, codex) have prompts to fire off subagents during an exploration (use the word "explore"), so you should be able to Research without the extra steps of writing plans and resetting context. I'd save that expense unless you need some huge, multi-step, verifiable plan implemented.

Does the UI clearly show what portion was done by a subagent?


Yes it will; this is almost verbatim (product redacted) claude-code output from my current session:

   ● I'll explore the codebase to understand the current <redacted> architecture, testing patterns, and integration points. This will help me formulate effective strategies for reducing QA burden.

   ● 3 Explore agents finished (ctrl+o to expand)
      ├─ Explore <redacted> architecture · 57 tool uses · 60.0k tokens
      │  ⎿  Done
      ├─ Explore current testing approach · 29 tool uses · 51.7k tokens
      │  ⎿  Done
      └─ Explore API integration patterns · 44 tool uses · 71.7k tokens
         ⎿  Done

During agent execution, it also shows what each sub-agent is up to. In ctrl+o mode it'll show the prompts it passed to each sub-agent.


The UI (terminal) in Claude Code will tell you if it has launched a subagent to research a particular file or problem. But it will not be highlighted for you; it's simply displayed in its record of prompts and actions.


If you use the VS Code extension, you can click to view the sub-agent prompts and see all tool calls.


If Claude ignores your CLAUDE.md, you can force it to read it via settings, for example by having it cat the file at every session start.


AI can be an FP absolutist too.


Interesting, for me they almost always assume/write TS.


Anecdotally, the common theme I'm starting to hear more often now is that people who use “AI” at work despise it when it replaces humans outright, but love it when it saves them from mundane, repetitive crap that they have to do.

These companies are not selling the world on a vision where LLMs are a companion tool; instead, they are selling the world on the idea that this is the new AI coworker. That 80/20 rule you're calling out is explained away with words like "junior employee."


I think it's also important to see that even IF there are those selling it as a companion tool, it's only for the meantime. That is, it's your companion now, but only because they need you next to it to make it better, so it can become an "AI employee" once it's trained on your companionship.

