
Can you share some tips? Right now I spend 20 minutes vibe coding 3000 lines of code and then 3 hours reviewing every single line.




Exactly. I found that most of the time I spend significantly more time reviewing the code; there is usually a lot of repeated code, and refactoring and cleaning it also takes a lot of time.

I found that the time I spend reviewing and refactoring is only marginally less than the time it would take to write the code myself. For very repetitive tasks, though, there is a sweet spot.

The only case where vibe coding really delivered on the promise of super-high-speed development is when you completely give up on code quality, or when you are in a greenfield project where the business logic hasn't been fully defined yet.


Work in smaller chunks. 3000 lines of code is horrible to review, regardless of whether it's human- or machine-made. Structure the tasks in a way that enables the agent to verify and iterate by itself.

Reviewing 100% of generated code is undoubtedly good software engineering practice, but it's not vibe coding, at least by my definition.

I vibe coded a tool this week, and my process was iterative: prompting an LLM agent to implement small bits of functionality, then checking the resulting output by hand (the output of the program, not the outputted code). There were shockingly few problems with this process, but when they did arise, I fixed them through a combination of reviewing the relevant code myself, reviewing log files, and additional prompting. I don't think I actually wrote or fixed a single line of code by hand, and I definitely haven't read 100% of it.

Once the project was feature complete, I used more or less the same process to refactor a number of things to implement improvements, simplifications, and optimizations. Including a linter in my automated commit workflow also found a couple minor problems during refactoring such as unused imports that were trivial for the agent to fix.

Is the code perfect, bug-free, and able to handle every imaginable edge case without issue? Almost certainly not, but it already works well enough for our use and is providing real labor savings. But it's not documented very well, nor are any tests written yet. It might or might not be maintainable long term in its current state, but I certainly wouldn't be comfortable trying to sell or support it (we are not a software company and I am not a software developer).

I should note that while I have been very impressed with my use of agentic coding tools, I am skeptical that they scale well above small projects. The tool we built this week is a bit over 2000 lines of code. I am not nearly skilled enough to work on a large codebase but I suspect this vibe coding style of programming would not work well for larger projects.


That is a lot of code! Maybe that is the problem.

Normally I review 3-5 files in a single change. Tests are done separately, to reduce the risk of writing tests that simply fit whatever was written, and there are a few dozen custom eslint rules (as eslint plugins) to enforce certain coding practices, which makes it harder for the LLM to generate code I would otherwise reject.

It is not that difficult really.


That's still a great trade in time if you end up keeping most of those 3000 lines.

Depends on what you're trying to make. I suggest trying to vibe code a tool like the one I built, llm.exe, which takes the contents of a predefined md file, sends it off for a response, and appends that response to the end of the file. Then incrementally add new flags to give the tool more features and to use other models, anything from generating audio with audio models to archiving and image input. Then try to create something in a framework you are not familiar with, and come up with your own methods that let you go much further than one-shotting.

I tried to vibe code WinAPI and it's hard, but I think doable even for large-scoped projects. The problem is context hoisting, and you need to keep track of a spec. Try to think about the minimum text you need to describe what you are doing. Ask models to generate one file or method at a time. I don't use a fancy IDE.
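A minimal sketch of that read-and-append loop, assuming a hypothetical `run_turn` helper with the model call stubbed out as a `respond` callable (the real llm.exe presumably wraps an actual model API):

```python
from pathlib import Path


def run_turn(md_path: str, respond) -> str:
    """Read the md file, get a model response, and append it to the file.

    `respond` stands in for whatever wraps the model API; here it is just
    a callable that takes the file contents and returns a string.
    """
    conversation = Path(md_path).read_text()
    reply = respond(conversation)
    # Append the response to the end of the same file, as described above.
    with Path(md_path).open("a") as f:
        f.write("\n\n" + reply + "\n")
    return reply
```

The incremental flags (other models, audio, archiving, image input) would then hang off this same loop.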

This is what I've never understood about vibe coding. Every attempt I've made (and I've made quite a lot) makes me feel faster, but in reality it is slower.

When coding traditionally, >50% of the debugging is done while writing the lines. If all you're measuring is lines out, then you're discounting more than half the work.

I already have a strategy to optimize this. I work mostly in Python but it translates even when I work in compiled languages.

I write code 3 times.

Step 1: hack in an ipython terminal where I can load the program modules. This is actually where AI can help the most imo. Doesn't matter how messy, your first iteration is always garbage.

Step 2: translate that into a function in the codebase. Use Unix philosophy, keeping functions minimal. This helps you move fast now but more importantly you move fast later. Do minimal cleanup so everything is coherent. Any values should be translated into variables that the function can take as arguments. No hard coding! I guarantee even though you don't know it now you'll want to turn those knobs later. This is your MVP.
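A toy example of what step 2's "no hard coding" looks like in practice (the function and the threshold value are made up for illustration):

```python
import statistics


def filter_outliers(values, threshold=3.0):
    """Keep values within `threshold` population standard deviations of the mean.

    `threshold` is an argument, not a hard-coded constant inside the body,
    so the knob can be turned later without touching the function.
    """
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return list(values)
    return [v for v in values if abs(v - mean) <= threshold * stdev]
```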

Step 3: this is a deceptively critical step! Write quick documentation. Explain your function signature. What is every variable's purpose and expected type. Ditto for the output. Add a short explanation of what the function does. Then you write developer comments (I typically do this outside the docstring). What are the limits? When does it work? When does it fail? What needs to be improved? What are the bottlenecks? Could some abstraction help? Is it too abstract? What tests do you need? What do they show or don't show? Big or small, add it. You add it now because you won't remember any of this after lunch, let alone tomorrow or in a year. This might sound time consuming but if your functions are simple then this entire step takes no more than 5 minutes. If you're taking more than 30 then your function is probably too complex. Either fix that now (goto step 1) or add that to your developer notes and move on.
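Here is roughly what step 3 produces for a small function: a docstring covering the signature and output, plus developer notes outside it (the function itself is a made-up example, not from my codebase):

```python
def chunk_records(records, size):
    """Split `records` into consecutive chunks.

    Args:
        records: list of items (e.g. rows); must support len() and slicing.
        size: int > 0, maximum length of each chunk.

    Returns:
        list of lists; the final chunk may be shorter than `size`.
    """
    return [records[i:i + size] for i in range(0, len(records), size)]

# Developer notes (outside the docstring):
# - Works for lists and tuples; fails on generators (needs len()).
# - No validation of `size`: size == 0 raises ValueError from range(),
#   and a negative size silently returns []. Fix or document.
# - Tests needed: empty input, size larger than input, size == 1.
```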

Step 4: triage and address your notes. If there's low-hanging fruit, get it now. A small issue now is a big issue tomorrow, so get it while it's small. If there are critical issues, address them now. No code is perfect, and you can't and shouldn't address every issue. You triage! But because you have the notes, if issues become bigger or things change (they always do!), then you or someone else can much more easily jump in and address them.

This sounds complicated, but it moves surprisingly fast once you build the habit. Steps 2-4 are where all the debugging happens. Step 2 gives you time to breathe and sets you up to think clearly. Step 3 makes you think and sets you up for success in the future, because you'll inevitably come back. (Ironically, the most common use I see for agents is either creating this documentation or substituting for it. But the best docs come from those who wrote the code and understand the intent, not just what it does.) Step 4 is the execution, squashing those bugs. The steps aren't always clear-cut and procedural; they're more like a guide to what you need to do.

And no, I've never been able to get the AI to do this end to end. I have found it helpful in parts, but I find it best to have it run parallel to me, not in the foreground. It might catch mistakes I didn't see, but it also tends to create new ones. Importantly, it almost always misses big-picture things. I agree with others: work in smaller chunks, but that's good advice whether you're working with agents or not. Code is abstraction, and abstraction isn't easy. Code isn't self-documenting, no matter how long or descriptive your variable names are. I can't believe documentation is even a contentious subject, considering how much time we waste on analysis just to figure out what code even does (let alone whether it does it well).



