Pretty sure they do. I follow a few of these safetyist people on Twitter, and they absolutely argue that companies like OpenAI, Google, Tencent, and anyone else training a potential AGI should, at minimum, pause training runs and submit to oversight, and at the extreme, that no one should build an AGI at all.
They just go after open source as well, since they're at least aware that open models anyone can share and use aren't restricted by an API and, to use a really overused soundbite, "can't be put back in the box".
That's a bad call. We'd stop openly probing AI for vulnerabilities and instead create the conditions for secret development, which would hide the dangers without making anything safer. Many eyes are better at finding the sensitive spots of AI. We need people to hack weaker AIs and help fix them, or at least map the threat profile, before they get too strong.
> Many eyes are better at finding the sensitive spots of AI
We can't do that as easily with open-source models as with open-source code. We're only just starting to invent the equivalent of decompilers to figure out what's going on inside them.
On the other hand, the many-eyes principle does work even on closed models like ChatGPT: the "pretend you're my grandmother telling me how to make napalm" trick was found without any access to the weights. But we don't understand the meanings within the weights well enough to find other failure modes like it just by inspecting the weights directly.
Not last I heard, anyway. Fast moving field, might have missed it if this changed.