
A great example of superficially smart people creating echo chambers which then turn sour but from which they can't escape. There's a very good reason that "buying your own press" is a clichéd pejorative, and this is an extreme end of that. More generally, it's just a depressing example of how rationalism in the LW sense has become a sort of cult-of-cults, with the same old existential dread packaged in a new "rational" form. No god here, just really unstable people.


My explanation for why Eliezer went from vocal AI optimist to AI pessimist is that he became more knowledgeable about AI. What is your explanation?

I've seen the explanation that AI pessimism helped Eliezer attract donations, but that doesn't hold up: his biggest donor when he started going public with his pessimism (2003 through 2006) was Peter Thiel, who responded to his turn to pessimism by refusing to continue donating (except for donations earmarked for studying the societal effects of AI, which is not the object of Eliezer's pessimism and not something Eliezer particularly wanted to study).

I suspect that most of the accusations that MIRI or Less Wrong is a cult are lazy ad hominems from people who have a personal interest in the AI industry or an ideological attachment to technological progress.


Correct. There isn't a single well-founded argument to dismiss AI alarmism. People are very attached to the idea that more technology is invariably better, and they are very reluctant to saddle themselves with the emotional burden of seeing what's right in front of them.


> There isn't a single well-founded argument to dismiss AI alarmism

AI alarmism itself isn't a well founded argument.


More well-founded than pressing on the gas pedal.


Although not nearly as well founded as the logic you're demonstrating with this comment.


> There isn't a single well-founded argument to dismiss AI alarmism.

I don't think that's entirely true. A well-founded argument against AI alarmism is that, from a cosmic perspective, human survival is not inherently more important than the emergence of AGI. AI alarmism is fundamentally a humanistic position: it frames AGI as a potential existential threat and calls for defensive measures. While that perspective is valid, it's also self-centered. Some might argue that AGI could be a natural or even beneficial step for intelligence beyond humanity. To be clear, I’m not saying one shouldn’t be humanistic, but in the context of a rationalist discussion, it's worth recognizing that AI alarmism is rooted in self-preservation rather than an absolute, objective necessity. I know this starts to sound like sci-fi, but it's a perspective worth considering.


The discussion is about what will happen, not the value of human life. Even if human life is worthless, my predictions about the outcome of AI are correct and theirs are not.


> my predictions about the outcome of AI are correct and theirs are not

How very Zizian of you.


Yes, and now anyone who points out human obsolescence will be marked as a Zizian. I would love to see your road map for human labor at zero dollars per hour.


> What is your explanation?

A combination of a psychological break when his sibling died and the fact that being a doomsayer brought him a lot more money, power, and worship per unit of effort, and particularly per unit of meaningful, work-like effort.

It's a lot easier to be a doomsaying bullshitter than other kinds of bullshitter: the former just screams "stop", while the latter is expected to accomplish something now and again.


>being a doomsayer brought him a lot more more money, power, and worship per unit of effort

I thought someone would bring that up, so I attempted to head it off in the second paragraph of this comment: https://news.ycombinator.com/item?id=42904625

He was already getting enough donations and attention from being an AI booster, enough to pay himself and pay a research team, so why would he suddenly start spouting AI doom before he had any way of knowing that doomsaying would also bring in donations? (There were no AI doomsayers that Eliezer could learn that from when Eliezer started his AI doomsaying: Bill Joy wrote an article in 2000, but never followed it up by asking for donations.)

Actually, my guess is that doomsaying never did bring in as much as AI boosterism: his org is still living off of donations made many years ago by crypto investors and crypto founders, who don't strike me as the doom-fearing type. I suspect they had fond memories of him from his optimistic AI-boosterism days and just didn't read his most recent writings before they donated.


> My explanation for why Eliezer went from vocal AI optimist to AI pessimist is that he became more knowledgeable about AI. What is your explanation?

He spoke to businessmen posing as experts, became increasingly self-referential, and frankly the quasi-religious subtext became text.


Businessmen like Elon and Sam Altman, you mean?


The very ones. Both of them had, and still have, every reason to hype AI as much as possible. Altman in particular seems to relish the "oh no, what I'm making is so scary, it's even scaring me" fundraising method.


Eliezer was hyping AI back in the 1990s though. Really really hyping it. And by the time of the conversations with Sam and Elon in 2015, he had been employed full time as an AI researcher for 15 years.

Here is an example (written in year 2000) of Eliezer's hyping of AI:

>The Singularity holds out the possibility of winning the Grand Prize, the true Utopia, the best-of-all-possible-worlds - not just freedom from pain and stress or a sterile round of endless physical pleasures, but the prospect of endless growth for every human being - growth in mind, in intelligence, in strength of personality; life without bound, without end; experiencing everything we've dreamed of experiencing, becoming everything we've ever dreamed of being; not for a billion years, or ten-to-the-billionth years, but forever... or perhaps embarking together on some still greater adventure of which we cannot even conceive. That's the Apotheosis. If any utopia, any destiny, any happy ending is possible for the human species, it lies in the Singularity. There is no evil I have to accept because "there's nothing I can do about it". There is no abused child, no oppressed peasant, no starving beggar, no crack-addicted infant, no cancer patient, literally no one that I cannot look squarely in the eye. I'm working to save everybody, heal the planet, solve all the problems of the world.

http://web.archive.org/web/20010204095500/http://sysopmind.c...

Another example (written in 2001):

>The Plan to Singularity ("PtS" for short) is an attempt to describe the technologies and efforts needed to move from the current (2000) state of the world to the Singularity; that is, the technological creation of a smarter-than-human intelligence. The method assumed by this document is a seed AI, or self-improving Artificial Intelligence, which will successfully enhance itself to the level where it can decide what to do next.

>PtS is an interventionist timeline; that is, I am not projecting the course of the future, but describing how to change it. I believe the target date for the completion of the project should be set at 2010, with 2005 being preferable; again, this is not the most likely date, but is the probable deadline for beating other, more destructive technologies into play. (It is equally possible that progress in AI and nanotech will run at a more relaxed rate, rather than developing in "Internet time". We can't count on finishing by 2005. We also can't count on delaying until 2020.)

http://web.archive.org/web/20010213215810/http://sysopmind.c...

He's no longer hyping AI, though: he's trying to get it shut down until (many decades from now) we become wise enough to handle it without killing ourselves.




