---
(2) “Critical harm” does not include any of the following:
(A) Harms caused or materially enabled by information that a covered model or covered model derivative outputs if the information is otherwise reasonably publicly accessible by an ordinary person from sources other than a covered model or covered model derivative.
---
This exception swallows the rule, and it fails to target what makes AI different: an AI is actually better than an ordinary person at assimilating multiple streams of publicly accessible facts.
That suggests this law is legislative theater: something designed to enlist interest and donations, i.e., to build a political franchise. That could be why it targets only the largest models, affecting only the biggest players, who have the most resources to donate per regulatory decision and the least goodwill to burn by opposing it.
Regulating AI will be a very difficult legislative and administrative task, on the order of a new tax code in complexity. It will be impossible if treated as a political franchise.
As for self-regulation: OpenAI's conversion to for-profit status shows that the nonprofit form is insufficient to maintain a public-benefit focus. Permitting this conversion is on par with the conversion of nonprofit hospital systems to for-profit status from the 1990s onward.
AI's potential shines a bright light on our weakness in governance. While weak governance affords more opportunity, the drive to escape the exploitation caused by governance failures is the engine of autocracy, and autocracy consumes property along with every other right.