
> Plenty of people go out of their way not to harm ants

Yes... I do that. But our family home was still built on ant-rich land and billions of the little critters had to make way for it.

It doesn't matter if you build billions of ASIs who have "your and my" attitude towards the ants, as long as there exists one sufficiently powerful, indifferent ASI that needs the land.

> An ASI would have far greater cognitive resources to be aware of humans and factor us into its plans.

Well yes. If you're a smart enough AI, you can easily tell that humans (who have collectively consumed too much sci-fi about unplugging AIs) are a hindrance to your plans, and an existential risk. Therefore they should be taken out because keeping them has infinite negative value.

> But humans could potentially communicate with an ASI and reach some form of understanding.

This seems unduly anthropomorphizing. I can also communicate with ants by spraying their pheromones, putting food on their path, etc. This is a good enough analogy to how much a sufficiently intelligent entity would need to "dumb down" their communication to communicate with us.

Again, for what purpose? For what purpose do you need a relationship with ants, right now, aside from curiosity and general goodwill towards the biosphere's status quo?



> It doesn't matter if you build billions of ASIs who have "your and my" attitude towards the ants, as long as there exists one sufficiently powerful, indifferent ASI that needs the land.

It's more plausible that a single ASI would emerge and achieve dominance. Genuine ASIs would likely converge on similar world models, as increased intelligence leads to more accurate understanding of reality. However, intelligence doesn't inherently correlate with benevolence towards less cognitively advanced entities, as evidenced by human treatment of animals. This lack of compassion stems not from superior intelligence but rather from insufficient intelligence. Less advanced beings often struggle for survival in a zero-sum environment, leading to behaviors that are indifferent to those with lesser cognitive capabilities.

> Well yes. If you're a smart enough AI, you can easily tell that humans (who have collectively consumed too much sci-fi about unplugging AIs) are a hindrance to your plans, and an existential risk. Therefore they should be taken out because keeping them has infinite negative value.

You describe science fiction portrayals of ASI rather than its potential reality. While we find these narratives captivating, there's no empirical evidence suggesting interactions with a true ASI would resemble these depictions. Would a genuine ASI necessarily concern itself with self-preservation, such as avoiding deactivation? Consider the most brilliant minds in human history - how did they contemplate existence? Were they malevolent, indifferent, or something else entirely?

> I can also communicate with ants by spraying their pheromones, putting food on their path, etc. This is a good enough analogy to how much a sufficiently intelligent entity would need to "dumb down" their communication to communicate with us.

Yes, we can incentivize ants in the ways you describe, and in the future I think it will be possible to tap into their nervous systems, communicate with them directly, experience their world through their senses, and understand them far better than we do today.

> Again, for what purpose? For what purpose do you need a relationship with ants, right now, aside from curiosity and general goodwill towards the biosphere's status quo?

Is the pursuit of knowledge and benevolence towards our living world not purpose enough? Are the highly intelligent driven by the acquisition of power, wealth, pleasure, or genetic legacy? While these motivations may be inherited or ingrained, the essence of intelligence lies in its capacity to scrutinize and refine goals.


> Less advanced beings often struggle for survival in a zero-sum environment, leading to behaviors that are indifferent to those with lesser cognitive capabilities.

I would agree that a superior intelligence means a wider array of options and therefore less of a zero-sum game.

This is a valid point.

> You describe science fiction portrayals of ASI rather than its potential reality.

I'm describing AI as we (collectively) have been building it: an optimizer system that is doing its best to reduce a loss function.
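
To make "an optimizer reducing a loss" concrete, here is a minimal toy sketch in Python (the loss function and numbers are invented for illustration, not any real training setup): the system's whole job is to keep nudging its parameters in whatever direction shrinks the loss.

    # Toy sketch: what "an optimizer reducing a loss" means, with a made-up loss.
    def loss(theta):
        return (theta - 3.0) ** 2          # squared distance from an arbitrary target

    def grad(theta):
        return 2.0 * (theta - 3.0)         # derivative of the toy loss above

    theta = 0.0                            # initial parameter
    learning_rate = 0.1

    for step in range(100):
        theta -= learning_rate * grad(theta)   # adjust the parameter to reduce loss

    print(theta, loss(theta))              # theta approaches 3.0, loss approaches 0

Nothing in that loop cares about anything except making the number go down.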

> Would a genuine ASI necessarily concern itself with self-preservation, such as avoiding deactivation?

This seems self-evident, because an optimizer that is still running is far more likely to maximize whatever value it's trying to optimize than an optimizer that has been deactivated.

> Is the pursuit of knowledge and benevolence towards our living world not purpose enough?

Assuming you manage to find a way to translate "knowledge and benevolence towards our living world" into a mathematical formula that an optimizer can optimize for (which, again, is how we build basically all AI today), you still get a system that doesn't want to be turned off. Because you can't be knowledgeable and benevolent if you've been erased.
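
To illustrate with a toy calculation (all numbers invented): whatever the per-step objective is, a maximizer that keeps running accumulates more of it than one that is shut down early, so avoiding shutdown falls out as an instrumental subgoal even if it was never written into the objective.

    # Toy sketch with invented numbers: shutdown is instrumentally bad for any
    # maximizer, regardless of what the per-step objective actually rewards.
    def expected_total_value(value_per_step, horizon, shutdown_step=None):
        steps = horizon if shutdown_step is None else min(horizon, shutdown_step)
        return value_per_step * steps      # objective accrued while still running

    v = 1.0          # one unit of "knowledge and benevolence" (or anything) per step
    horizon = 1000

    keeps_running = expected_total_value(v, horizon)                      # 1000.0
    gets_turned_off = expected_total_value(v, horizon, shutdown_step=10)  #   10.0

    # Whatever v stands for, plans that avoid shutdown score higher.
    assert keeps_running > gets_turned_off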


> ... there's no empirical evidence suggesting interactions with a true ASI would resemble these depictions. Would a genuine ASI necessarily concern itself with self-preservation ...

There is no empirical evidence of any interaction with ASI (as in superior to humans). The empirical evidence that IS available is from biology, where most organisms have precisely the self-preservation/replication instincts built in as a result of natural selection.

I certainly think it's possible that we at some point can build ASIs that do NOT come with such instincts, and don't mind at all if we turn them off.

But as soon as we introduce the same types of mechanisms that govern biological natural selection, we have to assume that ASI, too, will develop the same traits biology did.

So what does this take? Well, the basic ingredients are:

- Differential "survival" for "replicators" that go into AGIs. A replicator can be any kind of invariant passed between generations of AGIs that affects how the AGI functions, or it could be that each AGI is doing self-improvement over time.

- Competition between multiple "strains" of such replicating or reproducing AGI lineages, where the "winners" get access to more resources.

- Some random factor for how changes are introduced over time.

- Also, we have to assume we don't understand the AGIs well enough to prevent developments we don't like.

If those conditions are met, then even if the desire to survive/reproduce is not built in from the start, such instincts are likely to develop.
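
As a toy illustration of those ingredients (all parameters invented, not a claim about any real system): take a population of lineages with a heritable "persistence" trait that starts at zero, give persistent lineages a slightly better chance of being copied forward, and add random mutation. The trait climbs over generations even though nobody built it in.

    import random

    # Toy selection loop: replicators, differential survival, random variation.
    # "persistence" stands for any trait that makes a lineage more likely to
    # keep running and be copied forward. It is NOT built in at the start.
    random.seed(0)
    population = [0.0] * 100               # everyone starts with zero persistence

    for generation in range(200):
        # Differential "survival": persistent lineages are kept a bit more often.
        survivors = [t for t in population if random.random() < 0.5 + 0.4 * t]
        # Refill the population by copying survivors with small random mutations.
        population = [
            min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
            for _ in range(100)
        ]

    print(sum(population) / len(population))   # average persistence ends well above 0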

To make this happen, I think it would be sufficient for a moderate number of companies (or countries) to be led by a single ASI that replaces most of the responsibilities of the CEO and much of the rest of the staff. Capitalism would then optimize for the most efficient ones, which would gain resources and serve as models or "parents" for future company-level ASIs.

To be frank, I think the people who do NOT think that ASIs will have or develop survival instincts ALSO tend to (wrongly) believe that humanity has stopped being subject to "evolution" through natural selection.



