Slop is probably more accurate than boring. LLM-assisted development multiplies output and speed. In the right hands, that can genuinely improve code quality and execution. In the wrong hands, you get slop.
Here's a founder/product perspective. This maps well to the skateboard => scooter => bicycle => motorcycle => rocket ship product metaphor that's often used. Each phase teaches different design patterns, constraints, and failure (and success) modes for different inflection points of a startup's journey.
But here's the reality: what got you technically to PMF may hold you back at your Series A and beyond. Technical debt is just the natural cost of growth, but (here's the kicker) optimizing the tech stack too early slows execution, and most startups never reach exponential scale anyway. Put another way, starting with the "rocket ship" does not immunize a startup against rewrites, refactoring, or throwaway code.
The real systems and management challenge is building architectures that are intentionally temporary or modular: simple enough that throwing them away later isn't traumatic, and rebuilds read as a sign of success rather than failure.
Yeah, I really love the 'intentionally temporary' framing; that's a much better way to articulate what I was getting at than how I wrote it. The trauma of throwing things away is real, and I see it constantly in DD. People treat migration like an admission of failure when it's usually the opposite: you outgrew something, which is a good problem to have.
Honestly, you might want to step outside tech altogether. Join a local civic or neighborhood organization or volunteer with a nonprofit. There was a nice thread last year about libraries.
Channeling Steve Blank, get out of the building! You’ll run into real problems faced by real people who often have limited exposure to both AI and tech, but who can still benefit enormously. Listening and engaging is always a good first step before jumping in with suggestions.
In this space, needs are far more data and visualization driven, which are not strictly AI related. It may also be both a useful and humbling antidote to hype cycles.
Go read The Founder's Dilemmas by Noam Wasserman. It's great and covers almost every problem a founder will run into. To oversimplify: it's all about trade-offs and prioritization, and patents vs. trade secrets fits that frame nicely.
Trade secrets are far cheaper and easier to maintain than patents. In short, a patent is only as strong as your ability to enforce it, and Alice Corp. v. CLS Bank International (2014) weakened software and process patents. If you can’t realistically defend IP in court, you effectively don’t have it. From an early-stage founder's perspective, that makes patents a questionable use of time and money, and potentially the expense that kills the company.
This may contrast with the advice you get from a lawyer or VC. Patents are attractive to them because they create an asset someone else can later buy or defend. For the founder, the incentives aren’t squarely aligned.
Neither approach is more right or wrong, but there are very real practical consequences. If you are a pre-seed startup that is bootstrapped or has done a friends-and-family round, and you are pre- or early-revenue, trade secrecy is by far your better option.
As an additional note: if you don't own the underlying AI models and are just a better wrapper around Claude or ChatGPT, you have at best a very weak IP or patent position.
I think this also shows up outside an AI safety or ethics framing, in product development and operations. Ultimately, "judgment," however you wish to quantify that fuzzy concept, is not purely an optimization exercise. It's closer to probabilistic inference from incomplete or conflicting data.
In product management (my domain), decisions are made under conflicting constraints: a big customer or account manager pushing hard, a CEO/board priority, tech debt, team capacity, reputational risk, and market opportunity. PMs have tried, with varying success, to make decisions more transparent with scoring matrices and OKRs, but at some point someone has to make an imperfect judgment call that’s not reducible to a single metric. It's only defensible through narrative, which includes data.
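To make the point concrete, a scoring matrix like the ones mentioned takes only a few lines. The criteria, weights, and candidate items below are entirely made up for illustration; the thing to notice is that the final number quietly hides the judgment calls baked into the weights:

```python
# Hypothetical weighted scoring matrix for roadmap candidates.
# Criteria, weights, and items are illustrative, not a recommendation.
WEIGHTS = {"revenue": 0.35, "strategic": 0.25, "effort": -0.20, "risk": -0.20}

candidates = {
    "big-customer-feature": {"revenue": 9, "strategic": 4, "effort": 7, "risk": 3},
    "tech-debt-paydown":    {"revenue": 2, "strategic": 8, "effort": 5, "risk": 2},
    "new-market-bet":       {"revenue": 5, "strategic": 9, "effort": 8, "risk": 8},
}

def score(item):
    # Collapse all the tradeoffs into one number.
    # Every weight here is an embedded, debatable judgment call.
    return sum(WEIGHTS[k] * v for k, v in item.items())

ranked = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
```

Nudge the weights and the ranking can reorder, which is exactly the point: the matrix makes the tradeoff visible and discussable, but it doesn't remove the judgment, it just relocates it.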
Also, progressive elaboration, iteration, and build-measure-learn are inherently fuzzy. Reinertsen compared this to maximizing the value of an option; maybe in modern terms a prediction market is a better metaphor. That's what we're doing in sprints: maximizing our ability to deliver value in short increments.
I do get nervous about pushing agentic systems into roadmap planning, ticket writing, or KPI-driven execution loops. Once you collapse a messy web of tradeoffs into a single success signal, you’ve already lost a lot of the context.
There’s a parallel here for development too. LLMs are strongest at greenfield generation and weakest at surgical edits and refactoring. Early-stage startups survive by iterative design and feedback. Automating that with agents hooked into web analytics may compound errors and adverse outcomes.
So even if you strip out “ethics” and replace it with any pair of competing objectives, the failure mode remains.
As Goodhart's law states, "When a measure becomes a target, it ceases to be a good measure." From an organizational-management perspective, one partial workaround is simply adding more measures, making it harder for a bad actor to game the system. The Balanced Scorecard is one approach to that.
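A toy sketch of that "add more measures" idea: instead of optimizing one KPI, require every metric in a balanced set to clear a floor. The metric names and thresholds below are invented for illustration:

```python
# Illustrative only: metric names and floors are made up.
FLOORS = {"revenue_growth": 0.05, "customer_sat": 0.80, "on_time_rate": 0.95}

def single_metric_ok(metrics):
    # One target, easy to game: ship anything that moves this number,
    # regardless of what it breaks elsewhere.
    return metrics["revenue_growth"] >= FLOORS["revenue_growth"]

def balanced_ok(metrics):
    # Multiple floors: a big win on one axis can't mask a loss on another.
    return all(metrics[name] >= floor for name, floor in FLOORS.items())

# A "gamed" quarter: revenue spiked by burning customers and missing deadlines.
gamed = {"revenue_growth": 0.30, "customer_sat": 0.55, "on_time_rate": 0.90}
```

Here `single_metric_ok(gamed)` passes while `balanced_ok(gamed)` fails. It's only a partial fix, as a determined actor can still game the whole set, but each added measure raises the cost of doing so.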
This extends beyond AI agents. I'm seeing it in real time at work — we're rolling out AI tools across a biofuel brokerage and the first thing people ask is "what KPIs should we optimize with this?"
The uncomfortable answer is that the most valuable use cases resist single-metric optimization. The best results come from people who use AI as a thinking partner with judgment, not as an execution engine pointed at a number.
Goodhart's Law + AI agents is basically automating the failure mode at machine speed.
Otherwise, AI definitely impacts learning and thinking. See Anthropic's own paper: https://www.anthropic.com/research/AI-assistance-coding-skil...