Hacker News | new | past | comments | ask | show | jobs | submit | mathgladiator's comments | login

Agent environments like OpenClaw are in the toy phase, and OpenClaw is teaching people how to build things with agents in a toy-like and unreliable way. I used my understanding of OpenClaw to build scalable + secure + auditable agent infrastructure in my platform such that I can build products that other people can use.

We had better agent infrastructures (namely JADE) back in the day. I worked with them, and now these things look like flimsy 50¢ plastic toys to me, too.

Me too. It has been great. I'm working on projects that are fundable, and now I have joy from it (I did go through a lonely pity-party phase).

Same - I wasn't sure where I'd stop (I've always been a minimalist anyway, savings rate above 2/3), but I ran into a health issue, so I'm seeing what the future holds: taking another crack at the 'career', or maybe something more low-key that aligns with my passions, or a side project. I just want more time for learning; everything else feels like a distraction.

Some of us are living it. I plan to raise and slaughter cattle. Building a house now.


I'm bullish on something like talaas getting smaller and easier to put in a desktop. Imagine an RPG where NPCs are way more complex and the entire game is highly non-deterministic.


I think I would like that as well. The problem is that if we bake an LLM into HW and make it cheaper and very efficient to run, then all games will have the same AI slop content, which could get boring pretty fast. The alternative is that these cards should load a different / fine-tuned LLM per game, but then we already have GPUs for that, and today's LLMs are nowhere near good enough at the sizes a GPU can run.


> It is not, as a reader you're just expected to consume the content and move on; I hate when people overstep that boundary, jeez.

That is not the normal boundary.


I heard a senior leader at Amazon say, "Today, I am choosing how I fail." That has echoed in my head for many years.

At any moment, you are failing at thousands of things you may not even know about; that is the gist of what I took away from it. The thing is, you have to be OK when you intentionally choose not to invest in something, as regret is ultimately a poison.

The other thing is this: you are not obligated to bring people with you and you have a choice of free association.


Simple. I became one of them. Ultimately, using an AI is a new skill, but you have to treat it like another person that sometimes bullshits you. That's why you leverage agents to refine, do research, and polish.

Ask AI to cite sources and then investigate the sources, or have another agent fact check the relevancy of the sources.
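That cross-check loop is simple to wire up. Here's a minimal sketch; `ask` is a hypothetical stand-in for whatever LLM call you actually use, and it's stubbed below so the structure runs offline:

```python
def fact_check(claim, ask):
    """Two-pass check: one call drafts sources for a claim,
    a second call audits whether they actually support it.
    `ask` is any prompt -> text callable (an LLM in practice)."""
    sources = ask(f"Cite sources for this claim: {claim}")
    verdict = ask(f"Do these sources actually support the claim "
                  f"'{claim}'? Sources:\n{sources}")
    return {"claim": claim, "sources": sources, "verdict": verdict}

# Offline stub standing in for a real model, so the sketch runs as-is.
def stub_ask(prompt):
    if "actually support" in prompt:
        return "AUDIT: source [1] is relevant"
    return "[1] example.org/llm-citations"

report = fact_check("LLMs sometimes fabricate citations", stub_ask)
```

The point is only the shape: the auditor call never sees the first agent's reasoning, just its output, which is what makes it a useful second opinion.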

You can use this thing called ralph, which lets you burn a lot of tokens at scale by simply having a detailed prompt work on a task, refining it from different lenses. It took AI about an hour to write this: https://nexivibe.com/avoid.civil.war.web/
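As I understand the pattern, it's just a loop that re-feeds the task prompt plus the current draft through a series of "lenses". A minimal sketch with made-up names (the function, lens list, and `ask` stub are mine, not ralph's actual API):

```python
LENSES = ["accuracy", "clarity", "tone"]

def refine(task_prompt, draft, ask, lenses=LENSES):
    """Run one critique-and-revise pass per lens, feeding each
    revision into the next. `ask` stands in for any LLM call."""
    for lens in lenses:
        draft = ask(f"{task_prompt}\nRevise this draft for {lens}:\n{draft}")
    return draft

# Echo stub so the loop runs offline: tags the draft with the lens used.
def stub_ask(prompt):
    lens = prompt.split("for ")[1].split(":")[0]
    return prompt.rsplit(":\n", 1)[1] + f" [revised:{lens}]"

final = refine("Write an essay.", "first draft", stub_ask)
```

The token burn comes from re-sending the full task prompt and the whole draft on every pass; the payoff is that each lens critiques the accumulated result, not the original.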

I do this on things that I know very well, and the moment I let it cook, iterate, and collect feedback, the results become chef's kiss.

The agentic era that we are in is... very interesting.


>Ultimately, using an AI is a new skill

It's incredible watching people determine that outsourcing their thinking and work to what has been generously described as a junior coworker is a new 'skill'. Words are losing their meaning, on multiple levels.


Are some people better at this than others? Can people improve? I think the answer to both questions is yes which makes this a skill.

Just like being able to use non-LLM Google to search is a skill; I have family members who are amazed at what I can find that they cannot.


counterpoint: if I have to treat the computer like a person, what's the point of talking to a computer in the first place? Particularly when there are so many other systems that can provide answers without the runaround


Humans cost $xx,yyy a year.

Claude max-x20 is $2,400 a year.

I talk to the computer like a person to get the computer to do things that humans used to do. Having managed people before, I'm going all in on AI.


You're limiting the frame to an employment situation. Higher quality sources of knowledge are free: Wikipedia, public libraries, etc. Similar quality sources of information are also free: human relationships.


Now we watch this viewpoint proliferate thousands and thousands of times over, even if it's less commonly stated so baldly, and yet people still wonder where the doomer viewpoints stem from?


Yes, but I am full in on simulation hypothesis, and people are going to enter the matrix... willingly.

https://nexivibe.com/intj.html


While some of the ideas in this resonate with me (or at least they're entertaining), it's unfortunate that it's so obviously LLM-generated. And some parts of it, like the INTJ exceptionalism, reek of LLM sycophancy, which then turned into some kind of god complex...


observation a: Document title is about a minority's rightful supremacy

observation b: document says "this is not political" then dives into persuasive speech

conclusion: this document was written by the bad guys


I just actually read that, and it is possibly the most morally abominable screed I've come across in a long time. Shocking that it's acceptable to share in polite company.


Oh, then you will get a kick out of this for sure: https://nexivibe.com/winter.html


Ask the nice AI to cite sources, and then have another AI fact-check its sources. The agentic era is interesting.


There is a balancing point.

At core, complexity is derived from discovery of demand within those pesky complex humans.

Simplicity is the mechanism of finding common pathways within the mess of complexity of a product.

The tragedy is that simplicity is very expensive and beyond most organizations' ability to support (especially since it can slow down demand discovery), and this is one of the allures of big tech for me. I was greatly rewarded and promoted for achieving simplicity within infrastructure.


I was blown away by OpenClaw until I saw the bill. Ultimately, I think of these ecosystems as personal enhancements, and AI costs need to come down dramatically for real problems. Worse, however, is the security theater. I would not want to be the operator of any business built on front-line LLM usage via a yolo'd agent framework. I'm very happy to use these for siloed components that are well isolated and have reasonable QA processes (and those can even include agents, since now we literally have no excuse not to have amazing test coverage).

Their niche is going to be back-office support, but even that creates risk boundaries that can be insurmountable. A friend of mine had an agent do sudo rm -rf ... wtf.

My view is that I want to launch an agent based service, but I'm building a statically typed ecosystem to do so with bounds and extreme limits.


Look at AI like what search turned into: feed the user anything, even if it's wrong, because not doing so will make your product look weak.

That's what you'll find when you try to make these bag-o-words do reasonable things.

