I purchased GitKraken because they have a decent automated “AI” merge conflict resolution tool.
My use case is resolving conflicts from multiple parallel coding agent sessions, and because none of them are doing rocket surgery I’ve found the tool to be reliable.
Interesting. Looking through a strategic lens, I feel like this is related to the $1,000 free credit for Claude Code Web (I used a few hundred). What the heck are they aiming for? CodeAct? (https://arxiv.org/abs/2402.01030)
It is good, but Pro subscribers get only five per month. After that, it’s a limited version, and it’s not good (normal 5.1 gives more comprehensive answers than DR Limited).
This is actually very interesting, I think, as Anthropic pushes against The Bitter Lesson a bit! The model is a great reasoner, but we still need a concrete way to manage tasks, like we needed for tool calling. Claude Code has an opinionated loop, something like ReAct/CoT etc. with prompting tricks for tasks/skills/etc., but here they add a hierarchical Controller/Worker setup leveraging the Claude SDK. It mixes agency with actual control using program logic, not just alignment via prompts screaming in all caps and emoji.
We are going to break out of the coding agent’s loop in this way. It’s sort of curving back around to Workflows, after leaving them behind for agency, but right now we need to orchestrate this with deterministic code written mostly by humans, like the git repo Anthropic shared. This won’t last long.
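For concreteness, here is a minimal sketch of what I mean by controller/worker with program logic in charge. This is a hypothetical Python outline, not Anthropic’s actual code; run_agent_task is a made-up stand-in for whatever agent SDK call you would use.

    # Hypothetical controller/worker orchestration: deterministic code owns the
    # plan, ordering, and retries; the agent only ever sees one bounded subtask.
    from dataclasses import dataclass


    @dataclass
    class Task:
        name: str
        prompt: str
        result: str = ""
        done: bool = False


    def run_agent_task(prompt: str) -> str:
        """Stand-in for a single worker/agent session (replace with a real SDK call)."""
        return f"[stub result for: {prompt}]"


    def controller(tasks: list[Task], max_attempts: int = 3) -> list[Task]:
        """Plain program logic decides what runs, in what order, and when to give up."""
        for task in tasks:
            for attempt in range(1, max_attempts + 1):
                try:
                    task.result = run_agent_task(task.prompt)
                    task.done = True
                    break
                except Exception as exc:  # a real controller would classify failures
                    task.result = f"failed on attempt {attempt}: {exc}"
        return tasks


    if __name__ == "__main__":
        plan = [
            Task("survey", "List the modules this change touches."),
            Task("implement", "Apply the change to each module from the survey."),
            Task("verify", "Run the tests and summarize any failures."),
        ]
        for t in controller(plan):
            print(t.name, "->", t.result)

The point is only that the loop itself, not the prompt, carries the guarantees about what runs and in what order.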
Their comment would technically be proprietary code since there's no license alongside it, but grishka wrote the original implementation of the reverse-engineered code in that Mastodon commit in the first place, so I'd imagine it's fair game to use as a reference (IANAL).
Grishka says the code is trivial. Trivial inventions are not covered by patents, so I believe a license for trivial code is not necessary.
But if someone knows better I would appreciate any correction. Legal matters are seldom clear or logical. Your jurisdiction may vary, etc etc.
In case there are any doubts, consider this code and its description public domain.
But then I'm not sure how much code is enough to be considered copyrightable. Is "2*2" copyrightable? Clearly not, because it's too trivial. Where is the line?
Patent != copyright. You can patent an algorithm (e.g., the Adaptive Replacement Cache, which was scheduled to enter the public domain this year but unfortunately got renewed successfully), but when it gets down to an actual specific implementation, it's a matter of copyright law.
It's why a black-box clone, where you look at an application and just try to build one with the same externally observable behavior without looking at the code, is legal (as long as you don't recycle copyrighted assets like images or icons), but it can be infringing if you reuse any of the actual source code.
This was an issue that got settled early on and was covered in my SWE ethics class in college, but more recently it was relitigated in Oracle v. Google, over Google cloning the Java standard library for the Android SDK.
I have no idea how copyright applies here. Stack Overflow has a rule in its terms of use that all the user-generated content there is redistributable under some kind of Creative Commons license that makes it easy to reuse. Perhaps HN has a similar rule? Not that I'm aware of, though.
What does it say about me that I was SURE his article was going to admit out loud that we are engineering ourselves into obsolescence, that a lot of us are really enjoying it, and that nobody is seriously discussing how afraid we should be for our families and future? I’m afraid to mention it professionally, given we have a literal policy around “AI doomers” (not the exact term) that has the word “separation” in it. Worse, I’m afraid to THINK it, a kind of cognitive dissonance while Claude writes module after module for me.
I am enjoying the hell out of it, I’ve done nothing else for dozens of months, and I feel that I, and developers in general, are therefore in a unique position to understand what type of hell - or heaven - our society might experience in the next five years. Shouldn’t we be openly discussing how we can leverage this foreknowledge?
> I’m afraid to mention it professionally, given we have a literal policy around “AI doomers” (not the exact term) that has the word “separation” in it.
Dude, your employer is toxic AF. Look for a new job starting today.
The joy of US "at-will" employment is that every company's Code of Conduct reserves the right to "separate" you for undermining mission alignment. The whole system is toxic.
Why would I care? I’ll either steer the agents, or I’ll collaborate with them. Or I’ll do something else equally fun. It’s not as if programming is the only worthwhile endeavour in the world.
Programming offers some of the easiest working conditions out there while paying a fortune. I came from retail and made a fraction of what I make now for way harder work.
I won't go for a 3rd career; I'd rather be jobless for decades, like the sailors in the cities around me, than have to learn a 3rd job.
idk...all the Claude-generated code I'm seeing checked into our codebase is as bad as the code the same people wrote themselves. Claude probably makes it less strenuous and faster for them to produce those results though.
Uh oh... maybe it's a problem to grease the wheels of the least skilled...
I think you mentioned elsewhere that you don't want to have a lot of dependencies, but as the format evolves, using the reference impl will allow you to work on real features.
llm stores data (prompts, responses, chats, fragments, aliases, attachment metadata) in a central SQLite database outside the working directory, and you have to use the tool to view and manipulate that data. I prefer a tool like this to default to storing things in a file or files in the project directory I'm working in, in a way that is legible, e.g. plain text files. Contrast with e.g. git, where everything goes into .git.
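That said, the central database is at least easy to poke at directly if you want to. A rough sketch that assumes nothing about llm's schema; DB_PATH below is a placeholder for wherever your install keeps its logs database (llm has a command that prints that path, I believe):

    # Peek at llm's central SQLite store without going through the tool itself.
    # DB_PATH is a placeholder; substitute the path your own install reports.
    import sqlite3

    DB_PATH = "logs.db"  # placeholder, not the real default location

    con = sqlite3.connect(DB_PATH)
    tables = [row[0] for row in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
    )]
    print("tables:", tables)

    # Row counts per table, to get a feel for what is stored where.
    for table in tables:
        (count,) = con.execute(f'SELECT COUNT(*) FROM "{table}"').fetchone()
        print(f"{table}: {count} rows")
    con.close()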
Functions require you to specify them on the command line every time they're invoked. I would prefer a tool like this to default to reading functions from a hierarchy, e.g. .llm-functions in the current folder, then ~/.config/llm-functions, or something like that.
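To illustrate what I mean by a hierarchy, here's a hypothetical lookup order. None of this is llm's real behavior, and .llm-functions is a name I just made up:

    # Hypothetical lookup: project-local .llm-functions first, then a global fallback.
    from pathlib import Path


    def find_functions_file(start: Path | None = None) -> Path | None:
        """Walk from the current directory up to the filesystem root looking for
        .llm-functions, then fall back to ~/.config/llm-functions."""
        here = (start or Path.cwd()).resolve()
        for directory in [here, *here.parents]:
            candidate = directory / ".llm-functions"
            if candidate.is_file():
                return candidate
        fallback = Path.home() / ".config" / "llm-functions"
        return fallback if fallback.is_file() else None


    if __name__ == "__main__":
        print(find_functions_file())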
In general I found myself baffled when trying to figure out where and how to configure things. That's probably me being impatient but I have found other tools to have more straightforward setup and less indirection.
Basically I like things to be less centralized, less magic, and less controlled by the tool.
Another thing, which is not the fault of llm at all, is that I find Python-based tools annoying to install. I have to remember the env where I set them up. Contrast with a Go application, which is generally a single file I can put in ~/bin. That's the reason I don't want to introduce a dep to runprompt if I can avoid it.
The final thing I found frustrating was the name 'llm', which makes it difficult to search for, since it's also the generic name for what the thing is.
It is an amazing piece of engineering and I am a huge fan of simonw's work, but I don't use llm much for these reasons.