Hacker News | par1970's comments

Why?

How much domain experience do you have? Is it helping you solve problems for paying customers?

I have plenty of domain experience but I won't define myself as an expert. It helped me solve real business problems.

If your project requires the solution of a tricky algorithmic issue, then is the AI system able to solve that part, or do you have to give it the solution?

I haven't yet tried to solve truly complex algorithmic problems.

Generally speaking, if the problem is common, the model has likely already been trained to solve it.

If it's truly complex and/or specific to my needs, I can try using a reasoning model to think through a solution before moving on to implementation.

I use the agent to conduct research, find resources to understand the complexity, best practices, feedback, etc., and to write a Markdown analysis file on the topic.

Then I can use this file as a basis to precisely define what I want to do and brainstorm with the agent in thinking mode. The more the task is described and defined, the more accurate the result will be.


What models + versions are you using?

Is it bad at designing systems that don't have a bunch of integrations?


> I don't use ChatGPT, but I've been using an agent with Claude Sonnet 4.

Are you using Sonnet 4.6?

> So this AI agent... It is much faster at doing code when given specific instructions. But it keeps losing context on architecture, and I can't really let it build complex things with interdependencies that build on each other.

I've only built small things (< 1000 lines) with these systems, so I might be missing this problem.

Is it better than you at building small self-contained things?

> And I get a bad feeling when I then wonder: how is this app doing what it does? Because my agent can't explain it, and I would be stupid to believe what it hallucinated, because it sounds really solid until you scratch the construction.

Do you ask it to generate test suites for the things that it builds?

> it would also be faster to build a catastrophic spaghetti code nightmare if not used with great care.

noted


I started working with this two weeks ago, so I'm learning as I go (or should I say stumble and fall). Weird as it may sound, what I found so trustworthy at the beginning was how rational and logical it sounded, as if it really knew better, and I liked letting it run. Obviously it did not go so well, and I had to correct a lot. But I am learning, what can I say? And yes, I gave it many commandments like "thou shalt always test before releasing," and it sounded so convincing when it confirmed what an excellent idea that was that I was surprised, at least (imagine that), when something did not go as planned on prod because of, well, you know...

Did you tell it that it should test, or did you have it generate actual tests that you could run if you wanted to?

Which models + versions are you using? Can you give a specific problem that you found them to be bad at?

The most recent logic I tried getting it to code for me was some recursive C# functions to reverse-navigate a node map (a Microsoft Project plan with various feeding chains), calculate all possible paths, and return them as a list of objects.

It kept producing code that looked to the eye like it might work, but each time I ran it, it would just throw schoolboy exceptions. I got tired of telling it to correct the things it kept forgetting to check for (nulls, path starts, empty lists), and just coded it from scratch myself.
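For illustration, the path-enumeration logic described above can be sketched in a few lines once the defensive checks (null node, missing key, empty predecessor list) are in place. This is a minimal Python sketch, not the commenter's actual C# code; the data structure (a dict mapping each task to the tasks that feed into it) and the function name are assumptions.

```python
from typing import Dict, List, Optional

def all_paths_to(node: Optional[str],
                 predecessors: Dict[str, List[str]]) -> List[List[str]]:
    """Return every feeding chain ending at `node`, walking predecessor links.

    Assumes the plan is acyclic, as Microsoft Project dependency chains are.
    """
    if node is None:  # guard against the null nodes the agent kept forgetting
        return []
    preds = predecessors.get(node) or []  # missing key or empty list: path starts here
    if not preds:
        return [[node]]
    paths: List[List[str]] = []
    for p in preds:
        for path in all_paths_to(p, predecessors):
            paths.append(path + [node])
    return paths

# Example: tasks B and C both feed D, and A feeds both B and C.
chains = all_paths_to("D", {"D": ["B", "C"], "B": ["A"], "C": ["A"]})
# chains == [["A", "B", "D"], ["A", "C", "D"]]
```

The recursion bottoms out at nodes with no predecessors (path starts), so the empty-list and missing-key cases fall out of the same guard.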

I find ChatGPT is like pair-programming with a junior, except I'm not getting paid to coach them like I would if it were an actual graduate hire.


Your prompts are zero out of ten in quality.

Learn how to prompt better and you'll be fine.


I think I'm doing just fine, thanks for your concern.

Keeping context is a thing they are bad at. For now, I admit, but they are.

Given a long-haul goal with instructions and everything, they will reinvent the wheel four times, and on one of those you will get a square. Reminds me of that monkey's paw wish thing. You look at your finished app: it looks beautiful, but its inner workings are a ball of confusion.


Are you arguing this?

(Premise 1) If a country has 350 million people, then the Senate will produce unrepresentative outcomes.

(Premise 2) America has 350 million people.

(Conclusion 1) So, the Senate will produce unrepresentative outcomes in America.

(Conclusion 2) So, the Senate is bad for America.


The Senate is not the group meant to represent the people, so why would you think OP is arguing this?


I think OP is arguing that because they literally said "The Senate is fundamentally a ridiculous way of representing 350 million people and we’re going to continue to get absurd unrepresentative outcomes for as long as it remains a relevant body."

What do you think they are arguing?


Right, but that's explicitly not the body of government meant to represent people. So is he saying the Senate is fundamentally a ridiculous way of representing 50 states, or is he saying the House is fundamentally a ridiculous way of representing 350 million people?


Maybe we are talking past one another.

> Right, but that's explicitly not the body of government meant to represent people.

I haven't claimed that the Senate was intended to represent the people. I also haven't claimed that OP claimed that the Senate was intended to represent the people.

> So is he saying the Senate is fundamentally a ridiculous way of representing 100 states, or is he saying the House is fundamentally a ridiculous way of representing 350 million people?

He didn't say either of those things. He said this "The Senate is fundamentally a ridiculous way of representing 350 million people."


I know that's what he typed, I'm asking what he meant. The Senate does not represent 350 million people. It has never represented people. It was never meant to represent people. Of course it's a ridiculous way of representing people, in the same way that a hammer is a ridiculous tool for heating something up. It's a completely nonsensical statement.


> We're nowhere close to AGI and don't have a clue how to get there.

Do you have an argument?


So do we already do this? And if not, why not?


We sure do. Sweden imports trash (actual trash, not recycling) because burning it is a huge part of their energy supply.

A large amount of plastic collected for recycling is burned, but always in secret, because when people find out they freak out: they mistakenly think that making some new plastic out of it is somehow better.


> Sadly, the answer is that you can't.

Suppose there are two distinct entities, each such that if it is learned about, then it kills the learner; call them Geigh and Ritaar. What happens when Geigh learns about Ritaar?

