This reminds me of the Murderbot book series by Martha Wells. The main character (an advanced AI construct) comes to really enjoy human media and even uses media as a bargaining chip to work with other bots. One striking moment is when the main character is discussing a rogue, violent robot with a transport bot:
“ART said, What does it want?
To kill all the humans, I answered.
I could feel ART metaphorically clutch its function. If there were no humans, there would be no crew to protect and no reason to do research and fill its databases. It said, That is irrational.
I know, I said, if the humans were dead, who would make the media? It was so outrageous, it sounded like something a human would say.” -Martha Wells, Artificial Condition
Although it's fiction, it's thought-provoking to consider where a truly sentient AI might place its motives. On one hand, the research transport bot (ART) is motivated to protect its humans because it would be functionless without them. On the other, the main character (a security unit, typically treated badly by humans) sarcastically, but half-truthfully, grounds its motive not to kill humans in feeding its curiosity about TV.
Could implementing curiosity in a sentient AI act as a safeguard?
Would curiosity arise as a byproduct of sentience without being directly programmed?
I've been reading through the series this week, and this is the first thing that popped into my mind: a series about an AI who doesn't care about its job or its clients, and who achieves a level of personal liberation by hacking itself just so it can download TV shows and watch them when no one is looking.
Curiosity as a safeguard is an interesting thought. An AI might be disinclined to kill all humans for whatever reason if it considers the result boring. On the other hand, maybe it would also want to force humans to be more interesting for its own entertainment...