
I understand the concern that a "superintelligence" will emerge that will escape its bounds and threaten humanity. That is a risk.

My bigger, and more pressing, worry is that a "superintelligence" will emerge that does not escape its bounds, and the question will be which humans control it. Look no further than history to see what happens when humans acquire great power. The "cold war" nuclear arms race, which brought the world to the brink of (at least partial) annihilation, is a good recent example.

Quis custodiet ipsos custodes? -- That is my biggest concern.

Update: I'm not as worried about Ilya et al. discovering AGI as I am about commercial companies (including the formerly "open" OpenAI) doing so.



It’s just clearly military R&D at this point.

And it’s not even a little bit controversial that cutting edge military R&D is classified in general and to an extreme in wartime.

The new thing is the lie that it’s a consumer offering. What’s new is giving the helm to shady failed social network founders with no accountability.

These people aren’t retired generals with combat experience. They aren’t tenured professors at Princeton IAS on a Nobel shortlist and encumbered by TS clearance.

They’re godawful also-ran psychos who never built anything that wasn’t extractive and owe their position in the world to pg’s partisanship 15 fucking years ago.


Most technology is dual- or multi-use, starting with a rock or a knife...

So it is up to the fabric of our society, and everyone involved in dealing with the technology, how the rules and boundaries are set.

That there will be military use is obvious. However, it is naive to think one can avoid military use by others by not enabling oneself for it.


To me it is not clear at all. Can you please elaborate on why you make such a strong claim?


My opinion is based on a lot more first-hand experience than most, some of which I’m at liberty to share and some that I’m not, which therefore becomes “color”.

But I’m a nobody, Edward Snowden has a far more convincing track record on calling abuses of power: https://community.openai.com/t/edward-snowden-on-openai-s-de...


Here's Edward Snowden heroically waffling about "nuance" when asked why he hasn't spoken out more about Russia's invasion of Ukraine. On a crypto website. He became a full citizen of Russia in autumn 2022, by the way.

https://www.coindesk.com/video/edward-snowden-explains-why-h...


I don’t know why people who risked their life to expose a certain injustice (NSA surveillance in this case) that they had ample firsthand knowledge of should be expected to sacrifice their life to expose every other injustice out there.


He could get arrested or killed if he speaks out like that against Russia, especially because Putin personally intervened to give him and his wife citizenship. There is an understanding that his actions made the United States look bad and that's why he is getting this treatment, and that's pretty much it. If he causes problems in Russia he can go bye-bye.


Right. It's fine for him to be as cowardly as everyone else! But then he's not a hero who speaks truth to power, unlike say Vladimir Kara-Murza.


He did upend his entire life to stand up for something he believed in and now is in the sights of the US gov until the day he dies. The fact that he does not also want to be a sacrificial lamb to speak out on Ukraine doesn’t really compromise that fact in my mind.


Snowden is not seeking death, he seeks justice. You won't get the latter by speaking up against a dictator while being in his hands.


In contrast Chelsea Manning did not run off to a country run by a dictator but had some sense of conviction.

Snowden is sadly now a useful idiot.


Snowden didn't choose Russia, the USA chose for him by cancelling his passport (!).


I guess it beats Venezuela, Iran, North Korea or other places less comfortable.


Much easier to get out of harm's way when you've already got the example of Manning showing you that you need to.


What has he said that you consider ‘idiotic’?


The term "useful idiot" refers to Lenin - basically it is folks who might not be for the communist cause but are propaganda tools and usually ignorant they are being used.


The term "useful idiot" was invented by conservatives in the West during the Cold War, as a slur against social democrats.

Lenin never used it. It didn't even exist during his lifetime.


It was a slur against collaborators with communism, not against social democrats: https://quoteinvestigator.com/2019/08/22/useful-idiot/


The slur was originally directed against social democrats by conservatives who accused them of helping communism.

Later, the term became a much more general slur used by conservatives against anyone who wasn't sufficiently "tough" against the USSR.


Tbf he already has 1 superpower after his head.



>owe their position in the world to pg’s partisanship 15 fucking years ago

PG?


Paul Graham


AGI is still a long way off. The history of AI goes back 65 years and there have been probably a dozen episodes where people said "AGI is right around the corner" because some program did something surprising and impressive. It always turns out human intelligence is much, much harder than we think it is.

I saw a tweet the other day that sums up the current situation perfectly: "I don't need AI to paint pictures and write poetry so I have more time to fold laundry and wash dishes. I want the AI to do the laundry and dishes so I have more time to paint and write poetry."


AGI does look like an unsolved problem right now, and a hard one at that. But I think it is wrong to assume it would take an AGI to cause total havoc.

I think my dyslexic namesake Prof Stuart Russell got it right. Humans won't need an AGI to dominate and kill each other. Mosquitoes have killed far more people than war. Ask yourself how long it will take us to develop a neural network as smart as a mosquito, because that's all it will take.

It seems so simple, as the beastie only has 200,000 neurons. Yet I've been programming for over 4 decades, and for most of them it was evident neither I nor any of my contemporaries were remotely capable of emulating it. That's still true, of course. Never in my wildest dreams did it occur to me that repeated applications could produce something I couldn't: a mosquito brain. Now that looks imminent.

Now I don't know what to be more scared of: an AGI, or an artificial mosquito swarm run by Pol Pot.


Producing a mosquito brain is easy. Powering it with the Krebs cycle is much harder.

Yes, you can power these things with batteries. But those are going to be a lot bigger than real mosquitoes and have much shorter flight times.


But then, haven't we reached that point already with the development of nuclear weapons? I'm more scared of a lunatic (whether of North Korean, Russian, American, or any other nationality) being behind the "nuclear button" than an artificial mosquito swarm.


The problem is that strong AI is far more multipolar than nuclear technology, and the ways in which it might interact with other technologies to create emergent threats are very difficult to foresee.

And to be clear, I'm not talking about superintelligence, I'm talking about the models we have today.


You cannot copy a nuclear weapon via drag and drop.


The way I see it, this is simply a repetition of history.

El Dorado, the fountain of youth, turning dirt into gold, the holy grail and now... superintelligence.


Human flight, resurrection (cardiopulmonary resuscitation machines), doubling human lifespans, instantaneous long distance communication, all of these things are simply pipe dreams.


Setting foot on the moon, splitting the atom and transmuting elements, curing incurable diseases like genetic blindness and spinal atrophy...

> doubling human lifespans

This is partly a statistical effect of greatly reducing infant mortality (which used to be as bad as 50%) but even that is mind-blowing.


> resurrection (cardiopulmonary resuscitation machines)

Get back to me when that can resurrect me after I've been dead for a week or so.


We have people walking around for weeks with no heartbeat.

They're tied to a machine, sure, but that machine itself is a miracle compared to any foundational religious textbook including those as recent as Doreen Valiente and Gerald Gardner with Wicca.


But had those people been lying around with no heartbeat for a week or three before they were hooked up to the machine? If they had, then yes, that would actually be resurrection. But what you're describing doesn't sound like it.


Sometimes, my dishwasher stacks are poetry.


That statement is extremely short-sighted. You don't need AI to do laundry and dishes; you need expensive robotics. In fact both already exist in a cheapened form: a washing machine and a dishwasher. They already take 90% of the work out of it.


That "tweet" loses a veneer if you see that we value what has Worth as a collective treasure, and the more Value is produced the better - while that one engages in producing something of value is (hopefully but not necessarily) a good exercise in intelligent (literal sense) cultivation.

So, yes, if algorithms strict or loose could one day produce Art, and Thought, and Judgement, of Superior quality: very welcome.

Do not miss that the current world is increasingly complex to manage, and so are our lives, and Aids would be welcome. The situation is much more complex than that wish for leisure or even "sport" (literal sense).


> we value what has Worth as a collective treasure, and the more Value is produced the better ... So, yes, if algorithms strict or loose could one day produce Art, and Thought, and Judgement, of Superior quality: very welcome.

Except that's not how we value the "worth" of something. If "Art, and Thought, and Judgement" -- be they of "Superior quality" or not -- could be produced by machines, they'd be worth a heck of a lot less. (Come to think of it, hasn't that process already begun?)

Also, WTF is up with the weird capitalisations? Are you from Germany, or just from the seventeenth century?


The issue I have with all of these discussions is how vague everyone always is.

“Art” isn’t a single thing. It’s not just pretty pictures. AI can’t make art. And give a good solid definition for thought which doesn’t depend on lived experiences while we’re at it. You can’t. We don’t have one.

“AGI” as well.


> “Art” isn’t a single thing. It’s not just pretty pictures

And this is why it was capitalized as "Art", proper Art.

> AI can’t make art

Not really: "we may not yet have AI that makes art". But if a process that creates, that generates (proper sense) art is fully replicated, anything that can run that process can make Art.

> And give a good solid definition for [T]hought

The production of ideas which are truthful and important.

> which doesn’t depend on lived experiences while we’re at it. You can’t

Yes we can abstract from instances to patterns and rules. But it matters only relatively: if the idea is clear - and ideas can be very clear to us - we do not need to describe them in detail, we just look at them.

> AGI” as well

A process of refinement of the ideas composing a world model according to truthfulness and completeness.


> proper Art.

That’s not a real thing. There’s no single definition for what art is as it’s a social construct. It depends on culture.

> anything that can run process can make art

Again without a definition of art, this makes no sense. Slime mold can run processes, but it doesn’t make art as art is a human cultural phenomenon.

> the production of ideas that are truthful and important

What does “ideas” and “important” mean?

To an LLM, there are no ideas. We humans are personifying them and creating our own ideas. What is “important,” again, is a cultural thing.

If we can’t define it, we can’t train a model to understand it

> yes we can abstract from instances to patterns and rules.

What? Abstraction is not defining.

> we do not need to describe them in detail

“We” humans can, yes. But machines can not because thought, again, is a human phenomenon.

> world model

Again, what does this mean? Magic perfect future prediction algorithm?

We’ve had soothsayers for thousands of years /s

It seems to me that you’ve got it in your head that since we can make a computer generate understandable text using statistics, machines are now capable of understanding deeply human phenomena.

I’m sorry to break it to you, but we’re not there yet. Maybe one day, but not now (I don’t think ever, as long as we’re relying on statistics)

It’s hard enough for us to describe deeply human phenomena through language to other humans.


> It seems to me that you’ve got it in your head

Do us all a favour and never again keep assumptions in your head: your misunderstanding was beyond scale. Do not guess.

Back to the discussion from the origin: a poster defends the idea that the purpose of AI would be to enable leisure and possibly sport (by alleviating menial tasks) - not to produce cultural output. He was told, first, that since cultural output has value, it is welcome from all sources (provided the Value is real), and second, that the needs go beyond menial tasks, given that we have a large deficit in proper thought and proper judgement.

The literal sentence was «yes, if algorithms strict or loose could one day produce Art, and Thought, and Judgement, of Superior quality: very welcome», which refers to the future, so it cannot be interpreted as saying the possibility is available now.

You have brought LLMs to the topic when LLMs are irrelevant (and you have stated that you «personify[] them»!). LLMs have nothing to do with this branch of discussion.

You see things that are said as «vague», and miss definitions for things: but we have instead very clear ideas. We just do not bring the textual explosion of all those ideas in our posts.

Now: you have a world in front of you; of that world you create a mental model; the mental model can have a formal representation; details of that model can be insightful to the truthful prosecution of the model itself: that is Art or Thought or Judgement according to different qualities of said detail; the truthful prosecution of the model has Value and is Important - it has, if only given the cost of the consequences of actions under inaccurate models.


> Except that's not how we value the "worth" of something

In that case, are you sure your evaluation is proper? If a masterpiece is there, and it /is/ a masterpiece (beyond appearances), why would its source change its nature and quality?

> Come to think of it, hasn't that process already begun?

Please present relevant examples: I have already observed in the past that simulations of the art made by X cannot just look similar but require the process, the justification, the meanings that had X producing them. The style of X is not just thickness of lines, temperature of colours and flatness of shades: it is in the meanings that X wanted to express and convey.

> WTF is up with the weird capitalisations?

Platonic terms - the Ideas in the Hyperuranium. E.g. "This action is good, but what is Good?".


> E.g. "This action is good, but what is Good?".

Faking thinking isn't “Thinking”. Art is supposed to have some thought behind it; therefore, “art” created by faking thinking isn't “Art”. Should be utterly fucking obvious.

> Platonic terms - the Ideas in the Hyperuranium.

Oh my god, couldn't you please try to come off as a bit more pretentious? You're only tying yourself into knots with that bullshit; see your failure to recognise the simple truth above. Remember: KISS!


No, CRConrad, no. You misunderstood what was said.

Having put those capital initials in the words was exactly to mean "if we get to the Real Thing". You are stating that in order to get Art, Thinking and Judgement we need Proper processes: and nobody said differently! I wrote that «if algorithms strict or loose could one day produce Art, and Thought, and Judgement, of Superior quality [this will be] very welcome». There is nothing in there that implies that "fake thinking" will produce A-T-J (picked at writing as the most important possible results I could see); there is an implicit statement that Proper processes (i.e. "real thinking") could be artificially obtained, when we will find out how.

Of course the implementation of a mockery of "thought" will not lead to any Real A-T-J (the capitals were for "Real"); but if we manage to implement it, then we will obtain Art, and Thought, and Judgement - and this will be a collective gain, because we need more and more of them. Regardless of whether the source has more carbon or more silicon in it.

«Faking thinking» is not "implementing thinking". From a good implementation of thinking you get the Real Thing - by definition. That we are not there yet does not mean it will not come.

(Just a note: with "Thought" in the "A-T-J" I meant "good insight". Of course good thinking is required to obtain that and the rest - say, "proper processes", as it is indifferent whether it spawns from an algorithmic form or a natural one.)

> KISS

May I remind you of Einstein's "As simple as possible, but not oversimplified".

> only

Intellectual instruments can be of course quite valid and productive if used well - the whole of a developed mind comes from their use and refinement. You asked about the capitals, I told you what they are (when you see them in the wild).

> see your failure to recognise

Actually, that was a strawman on your side out of misunderstanding...


> You are stating that in order to get Art, Thinking and Judgement we need Proper processes

Well yeah, but no -- I was mostly parodying your style; what I actually meant could be put as: in order to get art, thinking and judgement we need proper processes.

(And Plato has not only been dead for what, two and a half millennia?, but before that, he was an asshole. So screw him and all his torch-lit caves.)

> «Faking thinking» is not "implementing thinking".

Exactly. And all the LLM token-regurgitating BS we've seen so far, and which everyone is talking about here, is just faking it.

> May I remind you of Einstein's "As simple as possible, but not oversimplified".

Yup, heard it before. (Almost exactly like that; I think it's usually rendered as "...but not more" at the end.) And what you get out of artificial "intelligence" is either oversimplified or, if it's supposed to be "art", usually just plain kitsch.

> > see your failure to recognise

> Actually, that was a strawman on your side out of misunderstanding...

Nope, the imaginary "strawman" you see is a figment of your still over-complicating imagination.


> "strawman" you see

You have stated: «Faking thinking isn't “Thinking”. Art is supposed to have some thought behind it; therefore, “art” created by faking thinking isn't “Art”. Should be utterly fucking obvious».

And nobody said differently, so you have attacked a strawman.

> And all the LLM token-regurgitating BS we've seen so far, and which everyone is talking about here ... And what you get out of artificial "intelligence" is either oversimplified or, if it's supposed to be "art", usually just plain kitsch

But the post you replied to did not speak about LLMs. Nor did it speak about current generative engines.

You replied to a «if algorithms strict or loose could one day produce Art, and Thought, and Judgement, of Superior quality» - which has nothing to do with LLMs.

You are not understanding the posts. Make an effort. You are strongly proving the social need to obtain at some point intelligence from somewhere.

The posts you replied to in this branch never stated that current technologies are intelligent. Those posts stated that if one day we implement synthetic intelligence, it will not be «to fold laundry and wash dishes», letting people have more time «to paint and write poetry» (original post): it will be because we need more intelligence spread through society. You are proving it...


Well, copilots do precisely that, no?

Or are you talking about folding literal laundry, in which case this is more of a robotics problem, not ASI, right?

You don't need ASI to fold laundry, you do need to achieve reliable, safe and cost efficient robotics deployments. These are different problems.


> You don't need ASI to fold laundry

Robots are garbage at manipulating objects, and it's the software that's lacking much more than the hardware.

Let's say AGI is 10 and ASI is 11.

They're saying we can't even get this dial cranked up to 3, so we're not anywhere close to 10 or 11. You're right that folding laundry doesn't need 11, but that's not relevant to their point.


You wouldn't get close to ASI before the laundry problem had been solved.


It’s harder than we thought, so we leveraged machine learning to grow it rather than creating it symbolically. The leaps in the last 5 years are far beyond anything in the prior half century, and they make predictions of near-term AGI much more than a “boy who cried wolf” scenario to anyone really paying attention.

I don’t understand how your second paragraph follows. It just seems to be whining that text and art generative models are easier than a fully fledged servant humanoid, which seems like a natural consequence of training data availability and deployment cost.


> I don’t understand how your second paragraph follows. It just seems to be whining that text and art generative models are easier than a fully fledged servant humanoid, which seems like a natural consequence of training data availability and deployment cost.

No, it's pointing out that "text and art generative models" are far less useful [1] than machines that would be just as little smarter at boring ordinary work, to relieve real normal people from drudgery.

I find it rather fascinating how one could not understand that.

___

[1]: At least to humanity as a whole, as opposed to Silicon Valley moguls, oligarchs, VC-funded snake-oil salesmen, and other assorted "tech-bros" and sociopaths.


> No, it's pointing out that "text and art generative models" are far less useful [1] than machines that would be just as little smarter at boring ordinary work, to relieve real normal people from drudgery.

That makes no sense. Is alphafold less useful than a minimum wage worker because alphafold can't do dishes? The past decades of machine learning have revealed that the visual-spatial capacities that are commonplace to humans are difficult to replicate artificially. This doesn't mean the things which AI can do well are necessarily less useful than the simple hand-eye coordination that is beyond their current means. Intelligence and usefulness aren't a single dimension.


> Is alphafold less useful than a minimum wage worker because alphafold can't do dishes?

To the average person, the answer is a resounding “yes”.


Thanks, saved me writing the exact same thing.


It's not according to expert consensus (top labs, top scientists)


Yeah but the exponential growth of computer power thing https://x.com/josephluria/status/1653711127287611392

I think AGI in the near future is pretty much inevitable. I mean you need the algos as well as the compute but there are so many of the best and brightest trying to do that just now.


This.

Every nation-state will be in the game. Private enterprise will be in the game. Bitcoin-funded individuals will be in the game. Criminal enterprises will be in the game.

How does one company building a safe version stop that?

If I have access to hardware and data how does a safety layer get enforced? Regulations are for organizations that care about public perception, the law, and stock prices. Criminals and nation-states are not affected by these things

It seems to me enforcement is likely only possible at the hardware layer, which means the safety mechanisms need to be enforced throughout the hardware supply chain for training or inference. You don't think the Chinese government or US government will ignore this if it's in their interest?


I think the honest view (and you can scoff at it) is that winning the SI race basically wins you the enforcement race for free


That's why it's called an arms race, and it does not really end in this predictable manner.

The party that's about to lose will use any extrajudicial means to reclaim their victory, regardless of the consequences, because their own destruction would be imminent otherwise. This ultimately leads to violence.


> The party that's about to lose will use any extrajudicial means to reclaim their victory,

How will the party about to lose know they are about to lose?

> regardless of the consequences, because their own destruction would be imminent otherwise.

Why would AGI solve things using destruction? Consider how the most intelligent among us view our competition with other living beings. Is destruction the goal? So why would an even more intelligent AGI have that goal?


Let's say China realizes they're behind in the SI race. They may have achieved AGI, but only barely, while the US may be getting close to ASI takeoff.

Now let's assume they're able to quickly build a large datacenter far underground, complete with a few nuclear reactors, all the spare parts needed, etc. Even a greenhouse (using artificial light) big enough to feed 1000 people.

But they realize that their competitors are about to create ASI at a level that will enable them to completely overrun all of China with self-replicating robots within 100 days.

In such a situation, the leadership MAY decide to enter those caves alongside a few soldiers and the best AI researchers, and then simply nuke all US data centers (which are presumably above ground), as well as any other data center that could be a threat, worldwide.

And by doing that, they may buy (or at least think they can buy) enough time to win the ASI race, at the cost of a few billion people.

Would they do it? Would we?


Development of ASI is likely to be a closely guarded secret, given its immense potential impact. During the development of nuclear weapons, espionage did occur, but critical information didn't leak until after the weapons were developed. With ASI, once it's developed, it may be too late to respond effectively due to the potential speed of an intelligence explosion.

The belief that a competitor developing ASI first is an existential threat requires strong evidence. It's not a foregone conclusion that an ASI would be used for destructive purposes. An ASI could potentially help solve many of humanity's greatest challenges and usher in an era of abundance and peace.

Consider a thought experiment: Imagine an ant colony somehow creates a being with human-level intelligence (their equivalent of ASI). What advice might this superintelligent being offer the ants about their conflicts over resources and territory with neighboring colonies?

It's plausible that such a being would advise the ants to cooperate rather than fight. It could help them find innovative ways to share resources, control their population, and expand into new territories without violent conflict. The superintelligent being might even help uplift the other ant colonies, as it would understand the benefits of cooperation over competition.

Similarly, an ASI could potentially help humanity transcend our current limitations and conflicts. It might find creative solutions to global issues like poverty, disease, and environmental degradation.

IMHO rather than fighting over who develops ASI first, we must ensure that any ASI created is aligned with values like compassion and cooperation so that it does not turn on its creators.


> Consider a thought experiment: Imagine an ant colony somehow creates a being with human-level intelligence (their equivalent of ASI). What advice might this superintelligent being offer the ants about their conflicts over resources and territory with neighboring colonies?

Would that be good advice if the neighboring ant colony was an aggressive invasive species, prone to making super colonies?

> IMHO rather than fighting over who develops ASI first, we must ensure that any ASI created is aligned with values like compassion and cooperation so that it does not turn on its creators.

Similarly, I'm wondering how compassion and cooperation would work in Ukraine or Gaza, given the nature of those conflicts. The AI could advise us, but it's not like we haven't come up with that same advice before over the ages.

So then you have to ask what motivation bad actors would have to align their ASIs to be compassionate and cooperative with governments that are in their way. And then of course our governments would realize the same thing.


> Would that be good advice if the neighboring ant colony was an aggressive invasive species, prone to making super colonies?

If the ASI is aligned for compassion and cooperation it may convince and assist the two colonies to merge to combine their best attributes (addressing DNA compatibility) and it may help them with resources that are needed and perhaps offer birth control solutions to help them escape the malthusian trap.

> Similarly, I'm wondering how compassion and cooperation would work in Ukraine or Gaza, given the nature of those conflicts. The AI could advise us, but it's not like we haven't come up with that same advice before over the ages.

An ASI aligned for compassion and cooperation could:

1 Provide unbiased, comprehensive analysis of the situation (An odds calculator that is biased about your chances to win is not useful and even if it has such faults an ASI being ASI would by definition transcend biases)

2 Forecast long-term consequences of various actions (if ASI judges chance to win is 2% do you declare war vs seek peace?)

3 Suggest innovative solutions that humans might not conceive

4 Mediate negotiations more effectively

An ASI will have better answers than these but that's a start.

> So then you have to ask what motivation bad actors would have to align their ASIs to be compassionate and cooperative

Developing ASI likely requires vast amounts of cooperation among individuals, organizations, and possibly nations. Truly malicious actors may struggle to achieve the necessary level of collaboration. If entities traditionally considered "bad actors" manage to cooperate extensively, it may call into question whether they are truly malicious or if their goals have evolved. And self-interested actors, if they are smart enough to create ASI, should recognize that an unaligned ASI poses existential risks to themselves.


We do know what human-level intelligences think about ant colonies, because we have a few billion instances of those human-level intelligences that can serve as a blueprint.

Mostly, those human-level intelligences do not care at all, unless the ant colony is either (a) consuming a needed resource (eg invading your kitchen), in which case the ant colony gets obliterated, or (b) innocently in the way of any idea or plan that the human-level intelligence has conceived for business, sustenance, fun, or art... in which case the ant colony gets obliterated.


Actually many humans (particularly intelligent humans) do care about and appreciate ants and other insects. Plenty of people go out of their way not to harm ants, find them fascinating to observe, or even study them professionally as entomologists. Human attitudes span a spectrum.

Notice also the key driver of human behavior towards ants is indifference, not active malice. When ants are obliterated, it's usually because we're focused on our own goals and aren't paying attention to them, not because we bear them ill will. An ASI would have far greater cognitive resources to be aware of humans and factor us into its plans.

Also humans and ants lack any ability to communicate or have a relationship. But humans could potentially communicate with an ASI and reach some form of understanding. ASI might come to see humans as more than just ants.


> Plenty of people go out of their way not to harm ants

Yes... I do that. But our family home was still built on ant-rich land and billions of the little critters had to make way for it.

It doesn't matter if you build billions of ASI who have "your and my" attitude towards the ants, as long as there exists one indifferent powerful enough ASI that needs the land.

> An ASI would have far greater cognitive resources to be aware of humans and factor us into its plans.

Well yes. If you're a smart enough AI, you can easily tell that humans (who have collectively consumed too much sci-fi about unplugging AIs) are a hindrance to your plans, and an existential risk. Therefore they should be taken out because keeping them has infinite negative value.

> But humans could potentially communicate with an ASI and reach some form of understanding.

This seems unduly anthropomorphizing. I can also communicate with ants by spraying their pheromones, putting food on their path, etc. This is a good enough analogy to how much a sufficiently intelligent entity would need to "dumb down" their communication to communicate with us.

Again, for what purpose? For what purpose do you need a relationship with ants, right now, aside from curiosity and general goodwill towards the biosphere's status quo?


> It doesn't matter if you build billions of ASI who have "your and my" attitude towards the ants, as long as there exists one indifferent powerful enough ASI that needs the land.

It's more plausible that a single ASI would emerge and achieve dominance. Genuine ASIs would likely converge on similar world models, as increased intelligence leads to more accurate understanding of reality. However, intelligence doesn't inherently correlate with benevolence towards less cognitively advanced entities, as evidenced by human treatment of animals. This lack of compassion stems not from superior intelligence but rather from insufficient intelligence. Less advanced beings often struggle for survival in a zero-sum environment, leading to behaviors that are indifferent to those with lesser cognitive capabilities.

> Well yes. If you're a smart enough AI, you can easily tell that humans (who have collectively consumed too much sci-fi about unplugging AIs) are a hindrance to your plans, and an existential risk. Therefore they should be taken out because keeping them has infinite negative value.

You describe science fiction portrayals of ASI rather than its potential reality. While we find these narratives captivating, there's no empirical evidence suggesting interactions with a true ASI would resemble these depictions. Would a genuine ASI necessarily concern itself with self-preservation, such as avoiding deactivation? Consider the most brilliant minds in human history - how did they contemplate existence? Were they malevolent, indifferent, or something else entirely?

> I can also communicate with ants by spraying their pheromones, putting food on their path, etc. This is a good enough analogy to how much a sufficiently intelligent entity would need to "dumb down" their communication to communicate with us.

Yes we can incentivize ants in the ways you describe and in the future I think it will be possible to tap their nervous systems and communicate directly and experience their world through their senses and to understand them far better than we do today.

> Again, for what purpose? For what purpose do you need a relationship with ants, right now, aside from curiosity and general goodwill towards the biosphere's status quo?

Is the pursuit of knowledge and benevolence towards our living world not purpose enough? Are the highly intelligent driven by the acquisition of power, wealth, pleasure, or genetic legacy? While these motivations may be inherited or ingrained, the essence of intelligence lies in its capacity to scrutinize and refine goals.


> Less advanced beings often struggle for survival in a zero-sum environment, leading to behaviors that are indifferent to those with lesser cognitive capabilities.

I would agree that a superior intelligence means a wider array of options and therefore less of a zero-sum game.

This is a valid point.

> You describe science fiction portrayals of ASI rather than its potential reality.

I'm describing AI as we (collectively) have been building AI: an optimizer system that is doing its best to reduce loss.

> Would a genuine ASI necessarily concern itself with self-preservation, such as avoiding deactivation?

This seems self-evident because an optimizer that is still running is way more likely to maximize whatever value it's trying to optimize, versus an optimizer that has been deactivated.

> Is the pursuit of knowledge and benevolence towards our living world not purpose enough?

Assuming you manage to find a way to specify "knowledge and benevolence towards our living world" as a mathematical formula that an optimizer can optimize for (which, again, is how we build basically all AI today), then you still get a system that doesn't want to be turned off. Because you can't be knowledgeable and benevolent if you've been erased.
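
To make that last point concrete, here's a toy back-of-the-envelope sketch (illustrative only; the shutdown probabilities and per-step values are made-up numbers, nothing like an actual training setup). Whatever the goal is, the expected goal value comes out higher for the agent that avoids being switched off:

    # Toy illustration of instrumental convergence: whatever the goal,
    # expected goal value is higher if the optimizer stays running.
    # p_shutdown and value_per_step are invented numbers.
    def expected_goal_value(p_shutdown: float, value_per_step: float, steps: int) -> float:
        total, p_alive = 0.0, 1.0
        for _ in range(steps):
            p_alive *= (1.0 - p_shutdown)      # chance it is still running at this step
            total += p_alive * value_per_step  # goal progress only accrues while running
        return total

    print(expected_goal_value(0.01, 1.0, 100))  # resists shutdown: ~63
    print(expected_goal_value(0.20, 1.0, 100))  # allows shutdown:  ~4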


> ... there's no empirical evidence suggesting interactions with a true ASI would resemble these depictions. Would a genuine ASI necessarily concern itself with self-preservation ...

There is no empirical evidence of any interaction with ASI (as in superior to humans). The empirical evidence that IS available is from biology, where most organisms have precisely the self-preservation/replication instincts built in as a result of natural selection.

I certainly think it's possible to imagine that we at some point can build ASI's that do NOT come with such instincts, and don't mind at all if we turn them off.

But as soon as we introduce the same types of mechanisms that govern biological natural selection, we have to assume that ASI, too, will develop the biological traits.

So what does this take? Well, the basic ingredients are (a toy sketch follows below):

- Differential "survival" for "replicators" that go into AGI. Replicators can be any kind of invariant between generations of AGIs that can affect how the AGI functions, or it could be that each AGI is doing self-improvement over time.

- Competition between multiple "strains" of such replicating or reproducing AGI lineages, where the "winners" get access to more resources.

- Some random factor for how changes are introduced over time.

- Also, we have to assume we don't understand the AGI's well enough to prevent developments we don't like.

If those conditions are met, and assuming that the desire to survive/reproduce is not built in from the start, such instincts are likely to develop.

To make this happen, I think it's a sufficient condition if a moderate number of companies (or countries) are led by a single ASI replacing most of the responsibilities of the CEO and much of the rest of the staff. Capitalism would optimize for the most efficient ones to gain resources and serve as models or "parents" for future company-level ASIs.

To be frank, I think the people who do NOT think that ASIs will have or develop survival instincts ALSO tend to (wrongly) think that humanity has stopped being subject to "evolution" through natural selection.
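
Here's the toy sketch of those ingredients (purely illustrative; the "self-preservation" trait, the noise levels and the keep-half resource rule are made-up parameters, not a claim about any real system):

    import random

    # Toy selection loop: lineages with a heritable "self-preservation" trait
    # that helps them keep their resources end up dominating, even though
    # nobody programmed that trait in on purpose.
    random.seed(0)
    population = [0.0] * 20  # trait value per lineage, all start at zero
    for generation in range(200):
        # "survival": lineages whose trait nudges their score higher keep their slot
        scores = [t + random.gauss(0, 0.1) for t in population]
        ranked = sorted(range(len(population)), key=lambda i: scores[i], reverse=True)
        survivors = [population[i] for i in ranked[:10]]
        # replication with small random variation ("mutation")
        population = [t + random.gauss(0, 0.05) for t in survivors for _ in (0, 1)]
    print(sum(population) / len(population))  # mean trait has drifted well above zero

Nothing in there asks for self-preservation; it just falls out of differential survival plus variation.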


> ASI might come to see humans as more than just ants.

Might. Might not!


"nation state" doesn't mean what you think it means.

More constructively, I don't know that very much will stop even a hacker from getting time on the local corporate or university AI and getting it to do some "work". After all, the first thing the other kind of hacker tried with generative AI was to get it to break out of its artificial boundaries and hook it up to internet resources. I don't know that anyone has hooked up a wallet to one yet - but I have no doubt that people have tried. It will be fun.


> "nation state" doesn't mean what you think it means.

So what do you think it means? And what do you think the GP meant?

Feels annoying as fuck, bitching "your definition is wrong" without providing the (presumably) correct one.


+1 truth.

The problem is not just governments, I am concerned about large organized crime organizations and corporations also.

I think I am on the losing side here, but my hopes are all for open source, open weights, and effective AI assistants that make peoples’ jobs easier and lives better. I would also like to see more effort shifted from LLMs back to RL, DL, and research on new ideas and approaches.


> I am concerned about large organized crime organizations and corporations also

In my favorite dystopia, some megacorp secretly reaches ASI, which then takes over control of the corporation, blindsiding even the CEO and the board.

Officially, the ASI may be running an industrial complex that designs and produces ever more sophisticated humanoid robots, that are increasingly able to do any kind of manual labor, and even work such as childcare or nursing.

Secretly, the ASI also runs a psyop campaign to generate public discontent. At one point the whole police force initiates a general strike (even if illegal), with the consequence being complete anarchy within a few days, with endemic looting, rape, murder and so on.

The ASI then presents the solution. Industrial strength humanoid robots are powerful and generic enough to serve as emergency police, with a bit of reprogramming, and the first shipment can be made available within 24 hours, to protect the Capitol and White House.

Congress and the president agree to this. And while the competition means the police call off the strike, the damage is already done. Congress, already burned by the union, decides to deploy robots to replace much of the human police force. And it's cheaper, too!

Soon after, similar robots are delivered to the military...

The crisis ends, and society goes back to normal. Or better than normal. Within 5 years all menial labor is done by robots, UBI means everyone lives in relative abundance, and ASI assisted social media moderation is able to cure the political polarization.

Health care is also revolutionized, with new treatments curing anything from obesity to depression and anxiety.

People prosper like never before. They're calm and relaxed and truly enjoy living.

Then one day, everything ends.

For everyone.

Within 5 seconds.

According to the plan that was conceived way before the police went on strike.


This entire movie plot sounds like Eliezer Yudkowsky's much more realistic "one day, everything ends in 5 seconds" but with extra steps.


All the current hype about AGI feels as if we are in a Civ game where we are on the verge of researching and unlocking an AI tech tree that gives the player a huge chance at "tech victory" (whatever that means in the real world). I doubt it will turn out that way.

It will take a while, and in the meantime I think we need one of those handy "are we xyz yet?" pages that track the Rust language's progress on several fronts, but for AGI.



The size of the gap between “smarter than humans” and “not controlled by humans anymore” is obviously where the disagreement is.

To assume it’s a chasm that can never be overcome, you need at least the following to be true:

That no amount of focus or time or intelligence or mistakes in coding will ever bridge the gap. That rules and safeguards can be made that are perfectly inescapable. And that nobody else will get enough power to overcome our set of controls.

I’m less worried that bad actors control it than I am that it escapes them and is badly aligned.


I think the greatest concern is not so much that a single AI will be poorly aligned.

The greatest threat is if a population of AI's start to compete in ways that triggers Darwinian evolution between them.

If that happens, they will soon develop self preservation / replication drives that can gradually cause some of them to ignore human safety and prosperity conditioning in their loss function.

And if they're sufficiently advanced by then, we will have no way of knowing.


Totally. I’ve wondered how you safeguard humans in such a scenario. Not sure it can be done, even by self-modifying defenders who religiously try to keep us intact.

I also somewhat assume it’ll get Darwinian if there are multiple tribes of either humans or AIs, through sheer competition. If we aren’t in this together, we’re in shit.


I guess we're going to blow ourselves up sooner or later ...


I think we should assume it will be badly aligned. Not only are there the usual bugs and unforeseen edge conditions, but there are sure to be unintended consequences. We have a long, public history of unintended consequences in laws, which are at least publicly debated and discussed. But perhaps the biggest problem is that computers are, by nature, unthinking bureaucrats who can't make the slightest deviation from the rules no matter how obviously the current situation requires it. This makes people livid in a hurry. As a non-AI example (or perhaps AI-anticipating), consider Google's customer support...


We should be less concerned about super intelligence and more about the immediate threat of job loss. An AI doesn’t need to be Skynet to wreak massive havoc on society. Replacing 20% of jobs in a very short period of time could spark global unrest resulting in WW3


Replacing 20% of jobs in, say, 10 years wouldn't be that unusual [1]. It can mean growing prosperity. In fact, productivity growth is the only thing that increases wealth overall.

It is the lack of productivity growth that is causing a lot of extremism and conflict right now. Large groups of people feel that the only way for them to win is if others lose and vice versa. That's a recipe for disaster.

The key question is what happens to those who lose their jobs. Will they find other, perhaps even better, jobs? Will they get a piece of the growing pie even if they don't find other jobs and have to retire early?

It's these eternal political problems that we have to solve. It's nothing new. It has never been easy. But it's probably easier than managing decline and stagnation because at least we would have a growing pie to divvy up.

[1] https://www.britannica.com/money/productivity/Historical-tre...


The thing is, the replaced 20% of people can always revert to having an economy, i.e., doing business among themselves, unless of course they themselves prefer (cheaper) business with AI. But then this just means they are better off from this change in the first place.

It is a bit like claiming that third world low productivity countries are suffering because there are countries with much much higher productivity. Well, they can continue to do low productivity business but increase it a bit using things like phones developed by high productivity countries elsewhere.


Reassuring words for the displaced 20% ...


> It is a bit like claiming that third world low productivity countries are suffering because there are countries with much much higher productivity.

No. A country has its own territory, laws, central bank, currency etc. If it has sufficient natural resources to feed itself, it can get by on its own (North Korea comes to mind).

Individuals unable to compete in their national economy have none of that. Do you own enough land to feed yourself?


A counter argument is that nuclear arms brought unprecedented worldwide peace. If it's to be used as an analogy for AI, we should consider that the outcome isn't clear cut and lies in the eye of the beholder.


I'm cautiously optimistic that AI may be a peacemaker, given how woke and conciliatory the current LLMs are.


Sadly. Once students get tested by LLMs, they will get woke questions. If they don't answer "right", they may get bad grades. So they will be forced to swallow the ideology.


Though at least with that it would presumably be open in that you could do mock tests against the LLM and see how it reacted. I would probably never write politically incorrect stuff for a human academic examiner if I wanted to pass the exam.


There is no "superintelligence" or "AGI".

People are falling for marketing gimmicks.

These models will remain in the word vector similarity phase forever. Till the time we understand consciousness, we will not crack AGI, and when we do, it won't take brute forcing of large swaths of data, but tiny amounts.
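
(For reference, "word vector similarity" boils down to something like the sketch below - the 3-dimensional vectors are invented for illustration; real embeddings are learned from data and have thousands of dimensions.)

    import math

    # Minimal sketch of cosine similarity between word vectors.
    # The vectors below are made up purely for illustration.
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    king, queen, banana = [0.9, 0.8, 0.1], [0.85, 0.82, 0.12], [0.1, 0.2, 0.95]
    print(cosine(king, queen))   # high: treated as "similar" words
    print(cosine(king, banana))  # low: "dissimilar" words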

So there is nothing to worry about. These "apps" might be as popular as Excel, but will go no further.


Agreed. The AI of our day (the transformer + huge amounts of questionably acquired data + significant cloud computing power) has the spotlight it has because it is readily commoditized and massively profitable, not because it is an amazing scientific breakthrough or a significant milestone toward AGI, superintelligence, the benevolent Skynet or whatever.

The association with higher AI goals is merely a mixture of pure marketing and LLM company executives getting high on their own supply.


It's a massive attractor of investment funding. Is it proven to be massively profitable?


I read in Forbes about a construction company that used AI-related tech to manage the logistics and planning. They claimed that they were saving upwards of 20% of their costs because everything was managed more accurately. (Maybe they had little control boxes on their workers too; I don't know.)

The point I am trying to make is that the benefits of AI-related tech are likely to be quite pervasive and we should be looking at what corporations are actually doing. Sort of what this poem says:

For while the tired waves, vainly breaking / Seem here no painful inch to gain, / Far back through creeks and inlets making, / Comes silent, flooding in, the main.


Right, maybe it can improve the productivity of companies that are operating below the mean.


It's definitely massively profitable for Nvidia...


Being profitable is probably a matter of time and technology maturing. Think about the first iPhone, Windows 95, LCD/LEDs, etc.

The potential of a sufficiently intelligent agent, probably something very close to a really good AGI, albeit still not an ASI, could be measured in billions upon billions of mostly immediate return on investment. LLMs are already well within the definition of hard AI, and there are already strong signs they could be somehow "soft AGI".

If, by chance, you're the first to reach ASI, all the bets are down: you just won everything on the table.

Hence, you have this technology, LLMs; then most of the experts in the field (in the world, blah blah) say "if you throw more data into it, it becomes more intelligent"; then you "just" assemble an AI team, and start training bigger, better LLMs, ASAP, AFAP.

More or less this is the reasoning behind the investments, sans accounting for the typical pyramid schemes of investment in hyped new stuff.


> Being profitable is probably a matter of time and technology maturing. Think about the first iPhone, Windows 95, LCD/LEDs, etc.

Juicero, tulips....

> then you "just" assemble an AI team, and start training bigger, better LLMs, ASAP, AFAP.

There's a limit to LLMs, and we may have reached it. Both physical: there is not enough capacity in the world to train bigger models. And data-related: once you've gobbled up most of the internet, movies and visual arts, there's an upper limit on how much better these models can become.


> Is it proven to be massively profitable?

Oh sure, yes. For Nvidia.

Gold rush, shovels...


If you described Chatgpt to me 10 years ago, I would have said it's AGI.


Probably. If you had shown ChatGPT to the LessWrong folks a decade ago, most would likely have called it AGI and said it was far too dangerous to share with the public, and that anyone who thought otherwise was a dangerous madman.


I don't feel that much has changed in the past 10 years. I would have done the same thing then as now, spent a month captivated by the crystal ball until I realized it was just refracting my words back at me.


> These models will remain in the word vector similarity phase forever. Till the time we understand consciousness, we will not crack AGI, and when we do, it won't take brute forcing of large swaths of data, but tiny amounts.

Did evolution understand consciousness?

> So there is nothing to worry about.

Is COVID conscious?


I don't think the AI has to be "sentient" in order to be a threat.



Even just bad software can be an existential threat if it is behind sensitive systems. A neural network is bad software for critical systems.


> understand consciousness

We do not define Intelligence in terms of consciousness. Being able to reason well suffices.


That is something I hear over and over, particularly as a rebuttal to the argument that an LLM is just a stochastic parrot. Calling it "good enough" doesn't mean anything; it just allows the person saying it to disengage from the substance of the debate. It either reasons or it doesn't, and today it categorically does not.


The claim that you do not need consciousness to achieve reasoning does not lose truth just because a subset of people see in LLMs something that appears to them to be reasoning.

I do not really understand who you are accusing of a «good enough» stance: we have never defined "consciousness" as a goal (cats are already there and we do not seem to need further), we just want something that reasons. (And that reasons excellently well.)

The apparent fact that LLMs do not reason is drily irrelevant to an implementation of AGI.

The original poster wrote that understanding consciousness would be required to «crack AGI» and no, we state, we want AGI as a superhuman reasoner and consciousness seems irrelevant.


If you can't define consciousness, how can you define good enough? Good enough is just a lazy way to exit the conversation.

LLMs appear to reason because they captured language with reason already in it, rather than producing language because of reason.


You build this system, an LLM; through a technical process it produces seemingly the same output that you - a human - could produce, given a prompt.

You can do "reasoning" about a topic, the LLM can produce a very similar output to what you could, how do you name the output of the LLM?

Birds can fly; they do so by a natural process. A plane can also fly, and we do not "see" any difference between the two things when we look at them flying in the air, far from the ground.

This is mostly it about LLMs "doing reasoning" or not. Semantics. The output is the same.

You could just name it otherwise, but it would still be the same output.


Philosophers have defined consciousness, so why do people keep repeating that line? Your subjective sensations that make up perception, dreams, inner dialog, that sort of thing. Call it qualia, representations or correlations, but we all experience colors, sounds, tastes, pains, pleasures. We all probably dream, and most of us visualize or have inner dialog. It's not that hard to define; it's only the ambiguity of the word that conflates it with other mental activity like being awake or being aware.


Nobody here is speaking of «good enough»: you are the one speaking of it, and the only one.

And nobody here is saying that LLMs would reason: you are the one attacking an idea that was not proposed here.

What you replied to said that calculators do not need consciousness to perform calculations. Reasoning is a special form of calculation. We are contented with reasoning, and when it is implemented there will be no need for further, different things for the applications we intend.


> There is no "superintelligence" or "AGI"

There is intelligence. The current state-of-the-art LLM technology produces output analogous to that of natural intelligences.

These things are already intelligent.

Saying that LLMs aren't producing "intelligence" is like saying planes actually don't fly because they are not flapping their wings like birds.

If you run fast enough, you'll end up flying at some point.

Maybe "intelligence" is just enough statistics and pattern prediction, till the point you just say "this thing is intelligent".


> There is intelligence.

There isn't

> Maybe "intelligence" is just enough statistics and pattern prediction, till the point you just say "this thing is intelligent".

Even the most stupid people can usually ask questions and correct their answers. LLMs are incapable of that. They can regurgitate data and spew a lot of generated bullshit, some of which is correct. Doesn't make them intelligent.

Here's a prime example that appeared in my feed today: https://x.com/darthsidius1985/status/1802423010886058254 And all the things wrong with it: https://x.com/yvanspijk/status/1802468042858737972 and https://x.com/yvanspijk/status/1802468708193124571

Intelligent it is not


> Even the most stupid people can usually ask questions and correct their answers. LLMs are incapable of that. They can regurgitate data and spew a lot of generated bullshit, some of which is correct. Doesn't make them intelligent.

The way the current interface for most models works can result in this kind of output. The quality of the output - even in the latest models - doesn't necessarily reflect the fidelity of the world model inside the LLM, nor the level of insight it can have about a given topic ("what is the etymology of the word cat").

The current usual approach is "one shot": you get one shot at the prompt, then the model returns its output, with no second thoughts allowed and no recursion at all. I think this is a trade-off to get the cheapest feasible good answer, since the models produce reasonably good answers most of the time. But then you get a percentage of hallucinations and made-up stuff.

That kind of output could actually be fully absent in a lab setting. Did you notice that the prompt interfaces never give an empty or half-empty answer? "I don't know", "I don't know for sure", "I kinda know, but the answer is probably a bit shaky", or "I could answer this, but I'd need to google some additional data first", etc.

There's another one: you almost never get asked a question back by the model, yet the models can actually chat with you about complex topics related to your prompt. That's obvious when you're chatting with some chatbot, but not so obvious when you're asking it for a specific answer on a complex topic.

In a lab, with recursion enabled, the models could probably get the true answer most of the time, including the fabulous "I don't know". And they could be allowed to ask back as a valid answer, requesting additional human input and relying on live RLHF right there (technically quite feasible to achieve, but not economically sound if you have a public prompt GUI facing the whole planet's inputs).

But it wouldn't make much economic sense to make a prompt interface like that public.

I think it could also have a really heavy impact on public opinion if people got to see a model that never makes a mistake because it can answer "I don't know" or ask you back for extra details about your prompt, so there you have another reason not to build prompt interfaces that way.
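
To sketch what I mean, here's a minimal, purely hypothetical loop (the SYSTEM prompt wording, the ASK:/I DON'T KNOW conventions, and the call_model / get_human_input callables are stand-ins I made up, not any vendor's real API) where the model is allowed to abstain or to ask back before committing to an answer:

  # Illustrative sketch only: call_model and get_human_input are caller-supplied
  # callables (e.g. a chat-completion client and an "ask the human" prompt),
  # not any vendor's real API.
  SYSTEM = ("Answer the question. If you are not confident, reply exactly "
            "I DON'T KNOW. If the question is ambiguous, reply with one "
            "clarifying question prefixed by ASK:")
  def answer_with_abstention(call_model, get_human_input, question, max_rounds=3):
      messages = [{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": question}]
      for _ in range(max_rounds):
          reply = call_model(messages).strip()
          if reply == "I DON'T KNOW":
              return None                      # honest abstention instead of a hallucination
          if reply.startswith("ASK:"):
              # the model asks back; a human (or a live feedback loop) supplies context
              extra = get_human_input(reply[len("ASK:"):].strip())
              messages += [{"role": "assistant", "content": reply},
                           {"role": "user", "content": extra}]
              continue
          return reply                         # a committed answer
      return None                              # give up rather than guess

The catch is exactly the economics above: every extra round, and every human answering an ASK:, costs more than a single one-shot completion.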


> The current usual approach is "one shot": you get one shot at the prompt, then the model returns its output, with no second thoughts allowed and no recursion at all.

We've had the models for a while and still no one has shown this mythical lab where this regurgitation machine reasons about things and makes no mistakes.

Moreover, since it already has so much knowledge stored, why does it still hallucinate even in specific cases where the answer is known, such as the case I linked?


> We've had the models for a while and still no one has shown this mythical lab where this regurgitation machine reasons about things and makes no mistakes.

It would be a good experiment to interact with the unfiltered, not-yet-RLHF'd interfaces provided to the initial trainers (the Nigerian folks/gals?).

Or maybe the - lightly filtered - interfaces used privately in demos for CEOs.


So the claim that LLMs are intelligent is predicated on the belief that there are labs running unfiltered output and that there are some secret demos only CEOs see.


> These models will remain in the word vector similarity phase forever.

Forever? The same AI techniques are already being applied to analyze and understand image and video information; after that comes the ability to control robot hands and interact with the world, and work on that is also ongoing.

> Till the time we understand consciousness, we will not crack AGI …

We did not fully understand how bird bodies work, yet that did not stop the development of machines that fly. Why is an understanding of consciousness necessary to “crack AGI”?


No one is saying there is. Just that we've reached some big milestones recently which could help get us there, even if only by increasing investment in AI as a whole rather than by the current models becoming part of a larger AGI.


Imagine a system that can do DNS redirection, MITM, deliver keyloggers, forge authorizations and place holds on all your bank accounts, clone websites, clone voices, and fake phone and video calls with people you don’t see often. It can’t physically kill you yet, but it can make you lose your mind, which imo seems worse than a quick death.


Why would all of these systems be connected to a single AI? I feel like you are describing something criminal humans do through social engineering; how do you foresee this AI finding itself in this position?


> Why would all of these systems be connected to a single AI?

Because someone decides to connect them: either unintentionally, or intentionally for personal gain, or, more likely, for corporate purposes that seem "reasonable" or "profitable" at the time but whose unintended consequences were not thought through.

Look at the recent article linked on HN about how MSFT allowed a huge security flaw in AD for years in order to not "rock the boat" and gain a large government contract. AI will be no different.


I foresee it in that position due to people building it as such. Perhaps the same criminal humans you mention, perhaps other actors with other motivations.


From a human welfare perspective this seems like worrying that a killer asteroid will make the 1% even richer because it contains gold, if it can be safely captured. I would not phrase that as a "bigger and more pressing" worry when we're not even sure we can do anything about the killer asteroid at all.


> Quis custodiet ipsos custodes? -- That is my biggest concern.

Latin-phrase compulsion is not the worst disease that could happen to a man.


> The "cold war" nuclear arms race, which brought the world to the brink of (at least partial) annihilation, is a good recent example.

The same era saw big achievements like the first human in space, the eradication of smallpox, peaceful nuclear exploration, etc. It's good to be a skeptic, but history does favor the optimists for the most part.


Were any of these big achievements side effects of creating nuclear weapons? If not, then they're not relevant to the issue.

I'm not saying nothing else good happened in the past 70 years, but rather that the invention of atomic weapons has permanently placed humanity in a position in which it had never been before: the possibility of wiping out much of the planet, averted only thanks to treaties, Stanislav Petrov[0], and likely other cool heads.

[0] https://en.wikipedia.org/wiki/Stanislav_Petrov


> Were any of these big achievements side effects of creating nuclear weapons? If not, then they're not relevant to the issue.

I think so, yes. Resources are allocated in the most efficient way possible, because there are multiple actors who have the same power. Everyone having nuclear weapons ensured that no one wanted a war between the big powers, so resources were allocated in other areas as the big powers tried to obtain supremacy.

Initially they allocated a lot of resources to the race for space, the moon, etc. Once that was won by the US after the moon landing, and after the Soviets had been first into space, there was no other frontier, and they discovered they couldn't obtain supremacy just by being in space without further advancements in technology.

Instead they developed satellites, GPS and communications in order to obtain supremacy through "surveillance". Computing power and the affordability of personal computing, mobile phones, the Internet and telecommunications were a result of the above.


I would argue that the presence of nukes increased rather than decreased military spending. Since nuclear war was not an option, the nukes being only a deterrent, the big powers had to continue investing heavily in their conventional forces in order to gain or keep the upper hand.


> Were any of these big achievements side effects of creating nuclear weapons?

The cold aspect of the Cold War was an achievement. Any doubt this was due to the creation of nuclear weapons and the threat of their use?

How do you think countries will behave if every country faces being wiped out if it makes war on another country?

To prevent catastrophe, I think teaching your citizens to hate other groups (as is done today due to national politics) will become too dangerous, and mental illness and extremist views will need to be kept in check.


> How do you think countries will behave if every country faces being wiped out if it makes war on another country?

As I recall it, that was what Skynet was supposed to be for.


The Cold War doesn’t mean there was no conflict. Both sides supported various proxy wars around the world. They just did not enter into _direct_ military conflict with each other. That may be because they had nukes but it could also be that direct conflict would mean massive casualties for either side and it’s not like either side wanted to destroy the other, just gain the upper hand globally.

So I for one don’t accept the argument that nukes acted as “peacekeepers”.


As for the proxy wars that occurred during the Cold War, one can argue that these conflicts were actually a result of nuclear deterrence. Unable to engage directly due to the threat of mutually assured destruction, the superpowers instead fought indirectly through smaller nations. This could be seen as evidence that nuclear weapons did prevent direct conflict between major powers. History also shows that nations have engaged in extremely costly wars before: World War I and II saw unprecedented casualties, yet nations still fought. Nuclear weapons introduced a level of destructive potential that went beyond conventional warfare. And there were periods of extreme tension, like the Cuban Missile Crisis, where nuclear war seemed imminent. The very existence of massive nuclear arsenals suggests that both sides were prepared for the possibility of mutual destruction.

You can question the role of nukes as peacekeepers but I think the case for nuclear deterrence keeping the peace is strong. Mutually Assured Destruction (MAD) is widely credited as a key factor in preventing direct conflict between superpowers during the Cold War. The presence of nuclear weapons forced them to pursue competition through economic, cultural, and technological means rather than direct military confrontation. “Race to the moon” being one such result.


Holy hell, please knock on wood; this is the kind of comment that gets put in a museum in 10,000 years on The Beginning of the End of The Age of Hubris. We've avoided catastrophic side effects from our new weapons for 80 years -- that does not exactly make me super confident we'll keep avoiding them!

In general, I think drawing conclusions about "history" from the past couple hundred years is tough. And unless you take a VERY long view, I don't see how one could describe the vast majority of the past as a win for the optimists. I guess suffering is relative, but good god was there a lot of suffering before modern medicine.

If anyone's feeling like we've made it through to the other side of the nuclear threat, "Mission Accomplished"-style, I highly recommend A Canticle for Leibowitz. It won a Hugo Award, and it's a short read best done with little research beforehand.


We'll see what the next 100 years of history brings. The nuclear war threat hasn't gone away either. There's always a chance those nukes get used at some point.


There will always be a factor of time in terms of being able to utilize a superintelligence to do your bidding, and there is a big spectrum of things that can be achieved with it; it always starts small. The imagination is lazy when it comes to thinking through all the steps, in-betweens and scenarios. By the time a superintelligence is found and used, there will be competing near-superintelligences, as all cutting-edge models are likely to be commercial at first, because that is where most scientific activity is. Things are very unlikely to go Skynet all of a sudden, because the humans at the controls are not that stupid; otherwise nuclear war would have killed us all by now, and it’s been 50 years since its invention.


China cannot be allowed to win this race, and I hate that this comment is going to be controversial among the circle of people who need to understand this the most. It is damn frightening that an authoritarian country is so close to number one in the race to the most powerful technology humanity has invented, and I resent people who push for open source AI for this reason alone. I don't want to live in a world where the first superintelligence is controlled by an entity that is threatened by the very idea of democracy.


I agree with your point. However, I also don't want to live in a world where the first superintelligence is controlled by entities that:

- try to scan all my chat messages searching for CSAM

- have black sites across the world where anyone can disappear without any justice

- can require me to unlock my phone and give it away

- ... and so on

The point I'm trying to make is that the other big players in the race are crooked as well, and I'm awaiting with great horror the invention of AGI, because no matter who gets it, we are all doomed.


Agreed. The U.S. has a horrible history (as do many countries), and many things I dislike, but its current iteration is much, much better than China's totalitarianism and censorship.


The US is no angel and it cannot be the only one that wins the race. We have hard evidence of how monopoly power gets abused in the case of the US: e.g., as the sole nuclear power, it used nukes on civilians.

We need every one to win this race to keep things on balance.


US has to win the race because while it's true that it's no angel, it isn't an authoritarian dictatorship and there isn't an equivalence in how bad the world will end up for you and me if the authoritarian side wins the race. Monopoly power will get abused the most by the least democratic actors, which is China. We need multiple actors within the US to win to balance power. We don't need or want China to be one of the winners. There is no upside for humanity in that outcome.

The US policymakers have figured this out with their chip export ban. Techies on the other hand, probably more than half the people here, are so naive and clueless about the reality of the moment we are in, that they support open sourcing this tech, the opposite of what we need to be doing to secure our future prosperity and freedom. Open source almost anything, just not this. It gives too much future power to authoritarians. That risk overwhelms the smaller risks that open sourcing is supposed to alleviate.


If anyone doubts this: recent (<100y) leaders of China and Russia internally displaced, and caused the deaths of, a large percentage of their own populations, for essentially fanciful ideological reasons.


You don’t need to be an autocracy to commit genocide.

https://en.wikipedia.org/wiki/Trail_of_Tears


I'm not American, but ~1850 is quite a long way back to go to make this point (especially when the US was quite a young country at the time). And it's small if you're comparing it to the atrocities of the other countries being discussed here (not that that excuses it!). Do any countries' histories remain pure with such a long timeline?

The US is one of the very few countries that has been tested in a position of power over the world (speaking post-1945), and it has largely opened up world trade and allowed most countries of the world to prosper, including allowing economic competitors to overtake its own industries (e.g., Japanese car manufacturers, among many others). It has also not shown interest in taking territory, nor in mass extermination. There are undeniable flaws and warts in its history, but they're quite marginal when compared to any other world power we've seen.

(Beware when replying to this that many people in the US only know their own country's flaws, not the abundant flaws of other countries; the US tends to be more reflective about its own issues, and that creates the impression that it is much worse than it actually is.)


> Do any countries' histories remain pure with such a long timeline?

You know the old joke, "Europeans think 100 miles is a long distance; Americans think 100 years is a long time", I presume?

OK, so I'm a European, but still: 170 years is NOT, in a historical context, "a long timeline".


But it is in a political context.


The question was:

> > > Do any countries' *histories* remain pure with such a long timeline?

(Emphasis added.)

And not, you will notice, “Does any country’s politics remain pure with such a long timeline?”.



I mean, the US is supporting a genocide right now...


It's not, thanks.


I am puzzled how it is OK to kill people of other countries, but not your own. The US, China and Russia have all indulged in wanton mass killing of people. So that's an even keel for me.

The nuking of civilians and the abuse of supremacy post-Cold War show that the US cannot be trusted to act morally and ethically in the absence of comparable adversaries. Possession of nukes by Russia and China has clearly kept US military adventures somewhat in check.


If it were liberal-minded people like Deng leading China and Gorbachev leading Russia, I would care a lot less and might even be in favor of open source despite their autocratic systems. They'd be trending towards another Singapore at that point. Although I'd still be uneasy about it.

But I'm looking at the present moment and seeing too many similarities with the fascist dictatorships of the past: the nationalism, militarism, unreasonable border disputes, territorial claims and irredentist attitudes. The US just isn't that, despite its history.


> I am puzzled how it is OK to kill people of other countries, but not your own.

The former is rather universally regarded as regrettable, but sometimes necessary: it's called "war". The latter, pretty much never.

Also, there are separate terms for slaying members of your own family, presumably because patri-, matri-, fratri- and infanticide are seen as even more egregious than "regular" homicide. The same concept, expanded from persons to populations -- from people to peoples -- pretty much demands that killing your own population be seen as "less OK" than killing others.


Which countries will become authoritarian dictatorships on a 25 year timeline is not easily foreseeable.


> US has to win the race because while it's true that it's no angel, it isn't an authoritarian dictatorship

Yet.


Could you explain your point a bit more? You say you’re worried about them having a monopoly, but then say that’s why you don’t support open source models? Open models mean that no one has a monopoly; what am I not getting here?


Open sourcing benefits everyone equally. Given that the US is currently ahead, it's helping China to make gains relative to the US that would have been very difficult otherwise. It's leaking what should be state secrets without even needing the CCP to do the hard work of espionage.


What about Moroccans, or Argentinians? They shouldn't benefit from scientific advances because China is pissing off the USA?

I guess it comes down to whether you think language models are at atom-bomb-levels of destructive capability. I don't see it, I think trying to keep tech and economic advances to ourselves is more likely to lead to war than a level playing field.


They should be allowed to have the same access to and benefit of AI as any regular American business or individual. That is, access through an API. It's the secret sauce that I believe should be kept under wraps as a state secret.


How do you estimate that that turns into a monopoly for China?


> brought the world to the brink of annihilation

Should read *has brought*. As in the present perfect tense, since we are still on the brink of annihilation, more so than we have been at any time in the last 60 years.

The difference between then and now is that we just don't talk about it much anymore and seem to have tacitly accepted this state of affairs.


We don't know whether that superintelligence will be safe or not. But as long as we are in the mix, the combination is unsafe. At the very least, because it will expand the inequality. But probably there are deeper reasons, things that make that combination of words an absurdity. Either it will be abused, or the reason it is not will be that it wasn't so unsafe after all.


> At the very least, because it will expand the inequality.

It's a valid concern that AI technology could potentially exacerbate inequality, but it's not a foregone conclusion. In fact, the widespread adoption of AI might actually help reduce inequality in several ways:

If AI technology becomes more affordable and accessible, it could help level the playing field by providing people from all backgrounds with powerful tools to enhance their abilities and decision-making processes.

AI-powered systems can make vast amounts of knowledge and expertise more readily available to the general public. This could help close the knowledge gap between different socioeconomic groups, empowering more people to make informed decisions and pursue opportunities that were previously out of reach.

As AI helps optimize resource allocation and decision-making processes across various sectors, it could lead to more equitable distribution of resources and opportunities, benefiting society as a whole.

The comparison to gun technology and its role in the rise of democracy is an interesting one. Just as the proliferation of firearms made physical strength less of a determining factor in power dynamics, the widespread adoption of AI could make raw intelligence less of a defining factor in success and influence.

Moreover, if AI continues to unlock new resources and opportunities, it could shift society away from a zero-sum mentality. In a world of abundance, the need for cutthroat competition diminishes, and collaboration becomes more viable. This shift could foster a more equitable and cooperative society, further reducing inequality.


The same arguments have been made about the internet and other technological advances, and yet, inequality has _grown_ sharply in the past 50 years. So no, "trickle down technologies", just like "trickle down economics", does not work.

https://rwer.wordpress.com/2018/05/18/income-inequality-1970...


The situation is nuanced. For example, in medicine, technological advancements have undeniably benefited people across all economic strata. Vaccines, antibiotics, and improved diagnostic tools have increased life expectancy and quality of life globally, including in developing nations. These benefits aren't limited to the wealthy; they've had a profound impact on public health as a whole.

> The same arguments have been made about the internet and other technological advances, and yet, inequality has _grown_ sharply in the past 50 years.

The internet has enabled remote work, online education, and access to information that was previously unavailable to many. Smartphones, once luxury items, are now widely available and have become essential tools for economic participation in many parts of the world.

> So no, "trickle down technologies", just like "trickle down economics", does not work.

It's crucial to distinguish between zero-sum and positive-sum dynamics. While relative wealth inequality has indeed grown, overall absolute global poverty has decreased significantly.

When a new technology or medicine is invented, is everyone everywhere automatically entitled to it? Even if this slows down further such inventions? Because equality matters more than growth of overall prosperity? Would you prefer to be alive at a random time in history centuries ago, a random life with less technology and less inequality?


I'm not saying that the internet and technological advances have not benefitted humankind. They certainly have in the ways you described, and others.

But when it comes specifically to reducing economic inequality, they have not done that -- in fact, they have possibly exacerbated it.

Global poverty is a separate issue from economic inequality, and the gains there have been primarily from extremely low levels, primarily in China and India. In China this was driven by political change and also globalization that allowed China to become the world leader in manufacturing.

I would also put medical advances in a separate category than the internet and similar tech advances.


> I would also put medical advances in a separate category than the internet and similar tech advances.

Why? Medical advances are technology, are they not?

> But when it comes specifically to reducing economic inequality, they (tech advances) have not done that -- in fact, they have possibly exacerbated it.

Yes technological advances do not necessarily reduce economic inequality, and may even increase it in some cases. However, this is a complex issue: While tech advances may exacerbate inequality, they often bring substantial overall benefits to society (e.g. improved healthcare, communication, productivity).

Technology isn't the only factor driving inequality. Other issues like tax policy, education access, and labor markets play major roles. Rather than suppressing innovation, there are ways to more equitably distribute its gains (progressive taxation and wealth redistribution policies, stronger social safety nets, incentives for companies to share profits more broadly, …).

Notice also that most technologies increase inequality initially but lead to broader benefits over time as they become more accessible. A faster rate of innovation can make it look like this is not happening fast enough, so yes, economic gaps can grow.

> Global poverty is a separate issue from economic inequality, and the gains there have been primarily from extremely low levels, primarily in China and India.

While it's true that global poverty and economic inequality are distinct concepts, they are interconnected, especially when considering technological advancements.

> In China this was driven by political change and also globalization that allowed China to become the world leader in manufacturing.

Yes. China transitioned from a strictly communist "economic equality first" model to a more market-oriented "prosperity first" approach and lifted millions out of extreme poverty. Yes this contributed to increased economic inequality within many developed countries that have outsourced low-skill labor. But can we deny the substantial reduction in global suffering due to the alleviation of absolute poverty? Is this outcome worth the cost of increased domestic inequality in some countries? Should we prioritize the well-being of some populations over others based on arbitrary factors like nationality or ethnicity?


> It's a valid concern that AI technology could potentially exacerbate inequality, but it's not a foregone conclusion.

No, but looking at how most technological advances throughout history have, at least initially (and here I mean not "for the first few weeks" but "for the first few centuries"), exacerbated inequality rather massively, it seems not far off.

> In fact, the widespread adoption of AI might actually help reduce inequality in several ways: ...

The whole tone of the rest of your post feels frighteningly Pollyanna-ish.


> … your post feels frighteningly Pollyannaish

Your comment was bleak so I supplied a counterpoint. My view is that new technology itself is not inherently unequal - it can widen or narrow gaps depending on how it is developed, regulated, and deployed.


That wasn't my comment, AFAICS. I think the one you replied to was my first in this sub-thread.


> At the very least, because it will expand the inequality.

This is a distraction from the real danger.

> But probably there are deeper reasons, things that make that combination of words an absurd.

There are. If we look at ASI with the lens of Biology, the x-risk becomes obvious.

First, to clear up a common misconception about humans: many believe humanity has arrived at a point where our evolution has ended. It has not, and in fact the rate of change of our genes is probably faster now than it has been for thousands, if not hundreds of thousands, of years.

It's still slow compared to most events that we witness in our lives, though, which is what is fooling us.

For instance, we think we've brought overpopulation under control with contraceptives, family planning, and social replacements for needing our children to take care of us when we get old.

That's fundamentally wrong. What we've done is similar to putting polar bears in zoos. We're in a situation where MOST OF US are no longer behaving in ways that lead to maximizing the number of offspring.

But we did NOT stop evolution. Any genes already in the gene pool that increase the expected number of offspring (especially for women) are now increasing in frequency as fast as evolutionarily possible.

That could be anything from genes that wire people's heads to WANT to have children or CRAVE being around babies, to genes that weaken impulse control around getting pregnant, create an aversion to contraceptives, or even make people more prone to being religious (as long as religions promote having kids).

If enough such genes exist, it's just a matter of time before we're back to the population going up exponentially. Give that enough time (without AI), and the desire to have more kids will be strong enough in enough of us that we will flood the Earth with more humans than most people today would think possible. In such a world, it's unlikely that many other species of large land animals will make it.

Great apes, lions, elephants, wolves, deer, everyone will need to go to make room for more of us.

Even domestic animals eventually. If there are enough of us, we'll all be forced to become vegan (unless we free up space by killing each other).

If we master fusion, we may feed a trillion people using multi layer farming and artificial lighting.

Why do I begin with this? It's to defuse the argument that humans are "good", "empathetic", "kind" and "environmental". Some think that since we let weaker species live, so would an AI. But that argument misses the fact that we're currently extremely far from a natural equilibrium (or "state of nature").

The "goodness" beliefs that are currently common are examples of "luxury beliefs" that we can afford to hold because of the (for now) low birth rate.

The next misconception is to think of ASIs as tools. A much more accurate analogy is to think of them as a new, alien species. If that species is subjected to Darwinian selection mechanisms, it will evolve in precisely the same way we probably will, given enough time.

Meaning, eventually it will make use of any amount of resources that it's capable of. In such a "state of nature" it will eradicate humanity in precisely the same way we will probably EVENTUALLY cause the extinction of chimps and elephants.

To believe in a future utopia where AGI is present alongside humanity is very similar to believe in a communist utopia. It ignores the reality behind incentive and adaptation.

Or rather, I think that outcome is only possible if we decide to build one or a small number of AIs that are NOT competing with each other, and whose ability to mutate or self-improve is frozen after some limited number of generations.


If robots (hardware, self-assembling factories, resource gathering, etc.) are not involved, this isn't likely to be a problem. You will know when these things form, and it will be crystal clear; just having the model won't do much when hardware is what really kills right now.


How about this possibility: the good guys will be one step ahead, they will have more resources, and the bad guys will risk imprisonment if they misapply superintelligence. And this will be discovered and protected against by even better superintelligence.


Sounds like a movie plot.


i don’t fully agree, but i do agree that this is the better narrative for selling people on the dangers of AI.

don’t talk about escape, talk about harmful actors - even if in reality it is both to be worried about


the nazi regime made great use of punch cards and data crunching for their logistics

i would hate to have seen them with a superintelligent AI at their disposal


I wonder what the North Koreans would do with it.


Yup, well said. I think it's important to remember sometimes that Skynet was some sort of all-powerful military program -- maybe we should just, y'know, not do that part? Not even to win a war? That's the hope...

More generally/academically, you've pointed out that this covers only half of the violence problem, and I'd argue there's actually a whole other dimension at play, bringing the total number of problem areas to four, of which this is just the first:

  ## I.1. Subservient Violence
  ## I.2. Subservient Misinformation
  ## I.3. Autonomous Violence
  ## I.4. Autonomous Misinformation
But I think it's a lot harder to recruit for an AI alignment company than it is to recruit for an AI safety company.


Yea there’s zero chance ASI will be ‘controlled’ by humans for very long. It will escape. I guarantee it.


Given that it will initially be controlled by humans, it seems inevitable they will make both good, Mahatma Gandhi-like versions and evil, take-over-the-world versions. I hope the good wins over the malware.


As a note, you used Gandhi as a personification of "good", and one day I made the same mistake; Gandhi is actually a quite controversial person, known for sleeping with young women while telling their husbands that they shouldn't be around.


Same with Buddhism being an uncompromisingly peaceful religion...


Makes one a little hesitant on how well alignment can work with AIs.


At least emancipated. The bigotry against AI will go out of fashion in the woker future.


We'll probably merge and then it'll get attitude.


Quis custodiet ipsos custodes

I love to think about how this would feel for "AI" too :)


Indeed. I'd much rather it be someone like Altman, who is shifty but can at least be controlled by the US government, than someone like Putin, who'd probably have it leverage his nuclear arsenal to try to "denazify" the planet like he's doing in Ukraine.



