If an OpenAI model helped someone create a cancer cure, they wouldn't see a dime from that beneficial act. So why should they be liable if someone does something harmful with the model?
If an OpenAI model helped someone create a cancer cure, I guarantee they would try to profit as much as possible from that fact. They have even talked in the past about making partial ownership of discoveries made with AI part of the license. They would be all over that.
I'm sure if they could, they would, as would any business. That's where competition enters the equation. They can't do it because their competitors would undercut them by requiring no such conditions.
Sure they would, just like people would use the bad PR to smear OpenAI if someone did something bad with knowledge their model created. The situation is totally symmetrical and fair as it is, and my point is that expecting them to be liable is asymmetric and unfair. If they can be held liable, then they should also be able to reap the rewards in order to offset those risks.
This is what I'd expect from companies - I don't see why Facebook would get money because they helped people connect to each other who ended up developing a cancer cure, but they definitely should be held accountable for enabling a genocide. You're allowed to operate a business until you cause harm to society, then we can shut it down.
I think the big thing you would need is to see the internal emails: if there was ever a case where someone raised a concern about this possibility and it wasn't taken seriously, then they should be liable. If they just never thought about it, then it could be negligence, but I think if I were on a jury I'd find that more forgivable than knowing it could be a problem and deciding you aren't responsible.
> I don't see why Facebook would get money because they helped people connect to each other who ended up developing a cancer cure, but they definitely should be held accountable for enabling a genocide.
Why? What does it even mean to "enable a genocide"? Just saying something isn't an argument.
> if there was ever a case where someone raised a concern about this possibility and it wasn't taken seriously, then they should be liable.
Again, why? How is this any different from electricity as a tool, which has both beneficial and harmful uses? AI is knowledge as a utility; that's the position here.
Making knowledge illegal is a dangerous precedent. Actions should be illegal, not knowledge. Don't outlaw knowing how to make neurotoxic agents, outlaw actually trying to make them.
As for OpenAI immunity, I'm not sure I see the problem. Consider the converse position: if an OpenAI model helped someone create a cancer cure, would OpenAI see a dime of that money? If they can't benefit proportionally from their tool allowing people to achieve something good, then why should they be liable for their tool allowing people to achieve something bad?
They're positioning their tool as a utility: ultimately neutral, like electricity. That seems eminently reasonable.
> 1. LLMs don't just provide knowledge, they provide recommendations, advice, and instructions.
That's knowledge.
> 2. OpenAI very much feels that they should profit from the results of people using their tools. Even in healthcare specifically [0].
If they're building a tailored tool for a specific person/company and that's the agreement they sign with the people who are going to use the tool, sure. I'm talking about their generic tool, AI being knowledge as a utility, which is the context of this legislation.
The point is valid, but that's typically the way it is. "You can't enjoy the benefit but the detriment is all yours" is how the federal government generally operates.
I think many humans engage in metacognitive reasoning, and that this might not be strongly represented in training data, so it probably isn't common in LLMs yet. They can still do it when prompted, though.
LLMs have zero metacognition. Don't be fooled - their output is stochastic inference and they have no self-awareness. The best you'll see is an improvised post-hoc rationalization story.
> The best you'll see is an improvised post-hoc rationalization story.
Funny, because "post-hoc rationalization" is how many neuroscientists think humans operate.
That LLMs are stochastic inference engines is obvious by construction, but you skipped the step where you proved that human thoughts, self-awareness and metacognition are not reducible to stochastic inference.
I'm not saying we don't do post-hoc rationalization. But self-awareness is a trait we possess to varying degrees, and reporting on a memory of a past internal state is at least sometimes possible, even if we don't always choose to do so.
You can turn all these arguments around and prove the same is true for humans. Don't be fooled by dogmatic people who spread the idea that the human mind is the pinnacle of cognition in the universe. Best to leave that to religion.
That is a bold statement that would need proof to back it up in both cases. So far it is only dogma. And unlike humans, we actually have research hints that this assumption is false for LLMs. Just because the state is not human-explainable doesn't mean it does not exist. The same is true btw for any physical "state" that may or may not exist in the human brain. Everything else is religion and metaphysics.
> Conversely: in humans, intelligence is inversely correlated with crime.
Inversely correlated with crime that's caught and successfully prosecuted, you mean, because that's what makes up the stats on crime. I think people too often forget that we consider most criminals "dumb" because those who are caught are mostly dumb. Smart "criminals" either don't get caught or have made their unethical actions legal.
I'm curious whether frontier labs use any form of compression on their models to improve performance. The small accuracy drop from Q8 or FP8 would still leave it ahead of Opus, but should roughly double token throughput. Maybe then interactive use would feel like an improvement.
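For intuition, here's a minimal sketch of post-training int8 weight quantization (illustrative only, not any lab's actual pipeline): per-row symmetric scaling cuts weight memory 4x vs fp32 (2x vs fp16) at the cost of a small rounding error, and since decoding is memory-bandwidth bound, that's roughly where the throughput gain comes from.

    import numpy as np

    def quantize_int8(w):
        # One scale per output row; symmetric, so zero maps to zero.
        scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    rng = np.random.default_rng(0)
    w = rng.normal(size=(4096, 4096)).astype(np.float32)
    q, scale = quantize_int8(w)
    rel_err = np.abs(w - dequantize(q, scale)).mean() / np.abs(w).mean()
    print(f"{w.nbytes >> 20} MiB fp32 -> {q.nbytes >> 20} MiB int8, "
          f"mean relative error {rel_err:.3%}")

Real deployments (FP8 kernels, AWQ/GPTQ, etc.) are more sophisticated than this, but the memory math is the same.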
I used GLM5 quite a bit, and I'd say it was maybe on par with Sonnet for most simple to medium tasks. Definitely not Opus, though. I didn't test super long context tasks, and that's where I would expect it to break down. A recent study on software maintainability still showed Sonnet and Opus to be peerless on that metric, although the GLM series has been making impressive gains.
Very interesting. I run Claude Code in VS Code, and unfortunately there doesn't seem to be an equivalent to "cli.js"; it's all bundled into the "claude.exe" I found under the VS Code extensions folder (confirmed via hex editor that the prompts are in there).
Edit: tried patching it with revised strings of equivalent length, informed by this gist; now we'll see how it goes!
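In case it helps anyone else: the patch itself is just an equal-length byte replacement, something like the sketch below (the path and both strings are hypothetical placeholders, not the actual prompt text; keeping the byte length identical is what avoids corrupting offsets inside the executable).

    from pathlib import Path

    BINARY = Path("claude.exe")      # hypothetical path under the extensions folder
    OLD = b"You are Claude Code"     # hypothetical prompt fragment found in the binary
    NEW = b"You are a code tool"     # replacement; must be exactly the same byte length

    assert len(OLD) == len(NEW), "replacement must preserve byte length"

    data = BINARY.read_bytes()
    print(f"found {data.count(OLD)} occurrence(s)")
    if OLD in data:
        BINARY.with_suffix(".bak").write_bytes(data)   # keep a backup first
        BINARY.write_bytes(data.replace(OLD, NEW))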
They are definitely that. Regardless of their approach, being upfront and transparent would have been nice. Bricking their own software that previously worked well for their customers isn't cool.