As a long-time fan, I’m a bit sad this happened and thought I’d start a thread here. I love niche operating systems and want them to thrive, and with more SBCs than I can count I’d love to run my own OS builds on them, but not taking advantage of AI to speed up and automate things strikes me as… weird. Is it just me, or are even mainstream tech folks refusing to use AI to take away toil so devs can focus on actual creative work?
The official wording is very precise.
If you want to get LLM-assisted code upstream in Haiku, you have to do the work to show that your LLM didn’t accidentally generate code that is too similar to something in its training data without attribution, or code that is under a license incompatible with the MIT license Haiku uses.
That is, of course, in addition to making sure you fully understand the code you are submitting. I would say this is the same standard as when you write the code yourself, but it is significantly harder to meet when the code is generated and you didn’t carefully think through each line while writing it.
Long-standing projects all have their own club rules; you can either play by the house rules or fork.
Yeah, well, they didn’t even check that this is still just build automation on a beefy native ARM64 host. Zero Haiku code was touched except for toolchain fixes.
But now I’m set on forking it. I have rcarmo/9front almost booting on the target hardware, and once that works (it’s much faster to flash 100MB images and iterate that way), I’ll port that back to “my” Haiku.
Neat - I built https://github.com/rcarmo/go-rdp a few months back and use it daily, but it’s nice to see a different take (I went all out on deep protocol support because I’m a network nerd).
Actually, I think their deeper problems are twofold:
- Claude Code is _vastly_ more wasteful of tokens than anything else I've used. The harness is just plain bad. I use pi.dev and created https://github.com/rcarmo/piclaw, and the gaps are huge -- even the models through Copilot are incredibly context-greedy compared to GPT/Codex.
- 4.7 can be stupidly bad. I went back to 4.6 (which has always been risky for anything reliable, but does decent specs and creative code exploration) and to Codex/GPT for almost everything.
So there is really no reason these days to pay either their subscription or their insanely high per-token price _and_ get bloat across the board.
PSA: Since some of you are still required to use Claude Code, and I have had a bunch of non-technical people asking me to base https://github.com/rcarmo/piclaw on Claude rather than pi (which is never gonna happen), I have started pivoting its Python granddaddy into a Go-based web front-end that runs Claude as an ACP agent.