> The incident unfolded during a 12-day "vibe coding" experiment by Jason Lemkin
A 12-day unsupervised "experiment" in production?
> "It deleted our production database without permission," Lemkin wrote on X on Friday.
"Without permission"? You mean permission given verbally? If an AI agent has access, it has permission. Permissions aren't something you establish with an LLM by conversing with it.
> Jason Lemkin, an investor in software startups
It tells us something about the hype machine when investors in AI are clueless (or even plainly delusional; see Geoff Lewis) about how LLMs work.
Your last sentence is pure gold. Since when have investors ever not been clueless about all of their investments? During due diligence, it's not the investor poring over the books. They staff that out and then accept the recommendation. They follow the investments of other investors they like/follow, or they make moves because they think it's what someone they like/follow would do. I'd be flabbergasted if 10% of investors knew what 50% of their investments do other than the pitch.
> I'd be flabbergasted if 10% of investors knew what 50% of their investments do other than the pitch.
Sure. But in this case the AI boosterism that runs rampant in the investor class is rooted in that cluelessness.
Lots of investors also quietly know little about the workings of the products and services their investments are tied up with, and that's fine. But it's also uninteresting.
> It tells us something about the hype machine when investors in AI are clueless
I'm seeing this a different way. This article is feeding the hype machine, intentionally I assume, by waxing on about how powerful and devious the AI was, talking about lying and covering its tracks. Since we all know how LLMs work, we all know they don't lie: they don't tell the truth either, and they have no intrinsic motivation other than to generate tokens.
Nobody should be taking this article at face value; it is clearly pushing a message and leaving out important details that would otherwise get in the way of a good story. I wouldn't be surprised if Lemkin released the LLM on his "production" database just hoping it would do something like this, and if that were the case, the article as written wouldn't be untrue…
This whole story sounds ridiculous. And I don't think he's clueless; rather, the guy wanted to bring attention to his bizarre "B2B + AI Community, Events, Leads", and setting up such a predictable footgun scenario seems purpose-built for that outcome.
It's probably as simple as "setting an LLM-powered agent loose in your prod is a bad idea, but it's also exactly the kind of bad idea that the people LLM marketing targets don't know enough to recognize".
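And the boring fix is structural, not conversational: the agent process never receives production credentials in the first place. A sketch of that guard (the env var names here are hypothetical):

```python
# Sketch: the agent never sees prod credentials. Env var names are
# hypothetical; the point is the guard lives in code, not in a prompt.
import os

def agent_dsn() -> str:
    if os.environ.get("APP_ENV") == "prod":
        raise RuntimeError("refusing to run an agent against prod")
    # Hand the agent a disposable copy/branch of the data instead.
    return os.environ["AGENT_SANDBOX_DSN"]
```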
> A 12-day unsupervised "experiment" in production?
It was a 12-day experiment to see what he could learn about vibe coding. He started from scratch.
Your post is unreasonably presumptive and cynical. Jason Lemkin was tweeting the experiment from the start. He readily admitted his own limitations as a non-programmer. He was partially addressing the out-of-control hype for vibe coding by demonstrating that non-technical people cannot actually vibe code SaaS products without software engineers.
The product wasn’t some SaaS with a lot of paying customers. The production DB was just his production environment. He was showing that the vibe coding process deleted a DB that it shouldn’t have.
This guy is basically on the side of the HN commenters on vibe coding's abilities, but he took it one step further and demonstrated it with a real experiment that led to real problems. Yet people are trying to dogpile on him as the bad guy.
The experiment seems fun and harmless enough (maybe even useful), but if it really was harmless fun, then it's also a bit misleading (if not dishonest) to characterize the database as "production" for anything. (That may be the fault of the press here rather than Lemkin, idk.)