> asking it to fix it again works just well enough.
I've yet to encounter any LLM, from ChatGPT to Cursor, that doesn't choke inside of 10-20 minutes: it starts to repeat itself, says it changed code when it didn't, or gets stuck flipping the same thing back and forth. Just a handful of exchanges and it's worthless. Are people who make this workflow effective summarizing and creating a fresh prompt every 5 minutes or something?
One of the most important skills to develop when using LLMs is learning how to manage your context. If an LLM starts misbehaving or making repeated mistakes, start a fresh conversation and paste in just the working pieces that are needed to continue.
I estimate a sizable portion of my successful LLM coding sessions included at least a few resets of this nature.
I treat tokens like the tachometer on a car's engine: the higher you go, the more gas you burn and the greater the chance you blow up the engine. Different LLMs have different redlines, and the more tokens you accumulate, the more every exchange costs and the likelier the model is to start spitting gibberish.
So far, my redline for all models is 25,000 tokens, but I really do not want to go above 20,000. If I hit 16,000 tokens, I will start to think about summarizing the conversation and starting a new one based on the summary.
The initial token count also matters, in my opinion. If you are trying to solve a complex problem the LLM doesn't know well and you start with only 1,000 tokens or fewer, you will almost certainly not get a good answer. I personally think 7,000 to 16,000 is the sweet spot. For most problems, I won't have the LLM generate any code until I reach about 7,000, since that means it has enough files in context to take a proper shot at producing code.
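If you want to keep an eye on those numbers, you can count tokens locally before sending anything. Below is a minimal sketch using OpenAI's tiktoken library; the thresholds are just the ones from the paragraphs above, and the counts are approximate, since each provider tokenizes and adds per-message overhead a bit differently.

```python
# Rough token-budget check for a chat history, using tiktoken.
# Thresholds mirror the numbers above (16k: think about summarizing,
# 20k: soft ceiling, 25k: redline); adjust for your own model and budget.
import tiktoken

def count_tokens(messages, encoding_name="cl100k_base"):
    """Approximate token count of a list of {'role', 'content'} messages."""
    enc = tiktoken.get_encoding(encoding_name)
    # Real chat formats add a few tokens of per-message overhead,
    # so treat this as a ballpark figure only.
    return sum(len(enc.encode(m["role"])) + len(enc.encode(m["content"]))
               for m in messages)

def budget_status(n_tokens):
    if n_tokens >= 25_000:
        return "redline: start a fresh conversation"
    if n_tokens >= 20_000:
        return "over the soft ceiling: summarize soon"
    if n_tokens >= 16_000:
        return "think about summarizing and restarting"
    return "fine"

if __name__ == "__main__":
    history = [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Here is my README and the file to edit..."},
    ]
    n = count_tokens(history)
    print(n, budget_status(n))
```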
Only if you assume one is blindly copy/pasting without reading anything, or is already a domain expert. Otherwise you’ve absolutely got the ability to learn from the process, but it’s an active process you’ve got to engage with. Hell, ask questions along the way that interest you, as you would with any other teacher. Just verify the important bits, of course.
I’d agree that’s one definition of learning, but there exist entire subsets of learning that don’t require you to be stuck on a problem. You can pick up simple, related concepts without first needing to struggle with them. Incrementally building on those moments is as true a form of learning as any other, I’d argue. I’d go as far as saying you can also have the moments you’re describing while using an LLM, again with intentionality, not passively.
Hm, I use LLMs almost daily, and I've never had one say it changed code and then not do it. If anything, they will sometimes try to "improve" parts of the code I didn't ask them to modify. Most times I don't mind, and if I do, it's usually a quick edit to say "leave that bit alone" and resubmit.
> Are people who make this workflow effective summarizing and creating a fresh prompt every 5 minutes or something?
I work on one small problem at a time, only following up if I need an update or change to the same block of code (or something closely related). Most conversations are fewer than five prompt/response pairs, usually one to three. If the LLM gets something wrong, I edit my prompt to explain what I want better, or to tell it not to take a specific approach, rather than correcting it in a reply. Otherwise it gets a little messy, and the AI starts to trip up on its own past mistakes.
If I move on to a different (sub)task, I start a new conversation. I have a brief overview of my project in the README or some other file and include that in the prompt for more context, along with a tree view of the repository and the file I want edited.
I am not a software engineer and I often need things explained, which I tell the LLM in a custom system prompt. I also include a few additional instructions that suit my workflow, like asking it to tell me if it needs another file or more documentation, to say when it doesn't know something, and so on.
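To make that "overview + tree view + target file" prompt concrete, here is a minimal sketch of how such a prompt could be assembled. The file paths, the system-prompt wording, and the helper function are all hypothetical; they only illustrate the structure described above.

```python
# Hypothetical helper that assembles one prompt from a project overview,
# a tree view of the repository, and the file to be edited.
import subprocess
from pathlib import Path

SYSTEM_PROMPT = (
    "I am not a software engineer, so explain your changes. "
    "If you need another file or more documentation, or you don't know "
    "something, say so instead of guessing."
)

def build_prompt(repo_root=".", overview_file="README.md", target_file="src/main.py"):
    root = Path(repo_root)
    overview = (root / overview_file).read_text()
    target = (root / target_file).read_text()
    # Shallow listing via `tree`; fall back to a simple walk if it's not installed.
    try:
        tree = subprocess.run(["tree", "-L", "2", str(root)],
                              capture_output=True, text=True, check=True).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        tree = "\n".join(str(p.relative_to(root))
                         for p in root.rglob("*") if p.is_file())
    return (
        f"Project overview:\n{overview}\n\n"
        f"Repository layout:\n{tree}\n\n"
        f"File to edit ({target_file}):\n{target}\n\n"
        "Task: <describe the one small change you want here>"
    )

# Usage: send SYSTEM_PROMPT as the system message and build_prompt() as the user message.
```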
Creating a new prompt. Sometimes it can go for a while without one, but the first response (with carefully crafted context) is generally the best. Having context from earlier in the conversation has its uses, though.
And very often, if the LLM produces a poopoo, asking it to fix it again works just well enough.