The article is well worth reading. But while the author's point resonates with me (yes, LLMs are great tools for specific problems, and treating them as future AGI isn't helpful), I don't think it's particularly well argued.
Yes, the huge expected value argument is basically just Pascal's wager, there is a cost on the environment, and OpenAI doesn't take good care of their human moderators. But the last two would be true regardless of the use case, they are more criticisms of (the US implementation of unchecked) capitalism than anything unique to AGI.
And as the author also argues very well, solving today's problems isn't why OpenAI was founded. As a private company they are free to pursue any (legal) goal. They are free to pursue the LLM-to-AGI route as long as they can find the money to do so, just as SpaceX is free to try to start a Mars colony if they can find the money to do that. There are enough other players in the space focused on the here and now. Those just don't manage to inspire as well as those with huge ambitions, and consequently they are much less prominent in public discourse.