I feel like I have to mention the story of the Gatling gun.
"Gatling wrote that he created it to reduce the size of armies and so reduce the number of deaths by combat and disease, and to show how futile war is."
And yet, by making one person more powerful, I'm not sure that we've reduced the size of armies (certainly not in terms of support and logistics), but we have made it possible for one random person to do a lot of damage.
Theoretically, by only having robots fight each other, we could eliminate human warfare and human losses. Morality would be some kind of cold equation that could absolve people of guilt. In reality, this would just never happen. People are the source of conflict, and the source of killer robots.
If war can be waged without loss of life the biggest incentive to avoid war is gone.
We have seen countries start attacking each other on the internet in peace time because it's hard to prove and the costs are abstract. I fear the same will be true for war with robots.
>If war can be waged without loss of life the biggest incentive to avoid war is gone.
There will be a loss of life in any war. Note that they are called 'Killer' robots. It's just that the loss of life will be of noncombatants.
My concern is not countries waging wars against each other, because that can always be deterred, but governments using it on their own people. Right now authoritarian governments need to worry about soldiers rebelling, but the computer code powering the robots isn't gonna rebel when you ask it to kill a city square full of protesters.
I totally agree, but I sadly think we're already there. Part of the magic behind using remote-controlled drones is that we can attack people far away without putting American lives at risk. If we actually had to go there and risk casualties, it probably wouldn't be worth it. But if it's just some innocent bystanders who don't happen to be American, that doesn't seem to stop us.
The Butlerian Jihad won't be successful. Tech creates wealth and often concentrates power. The elites will push ahead with tech development. Power imbalance in our current society means they will succeed if people do not resist in unison. Tech also has a lot of economic benefits so regular people will be divided on the Jihad.
I think you're the only person in the thread who actually gets it.
The true danger of robots for mass combat is that it moves the power squarely into Capital and away from Labor. As long as humans are in the equation, you can always have troopers (and the populace that birthed/supports them) say "No more, this conflict must cease".
If war becomes a matter of some rich asshole turning the murderbot factory knob a little higher to deal with the riffraff, we're in trouble.
Alternative hypothesis: once robotics hits a point where it's possible for an average person (or small group of people) to own the capital necessary to carve out a decent life (by modern standards), labor starts splintering away from the global economy en masse. With their foundations gone, the pyramids schemes crumble, and Capital is the one caught out.
Well, there have been war protests and draft resistance. In a different manner, there was the Christmas truce of 1914.
On a different scale though, if your people aren't out there dying, it's easy to forget about the massacres and tyranny your country might be perpetrating. The out-of-sight, out-of-mind problem.
Very regularly, throughout history. To take one example, there were numerous mutinies during WWI and commanders had to take great care to keep the war going. Britain was pretty much the only European power that _didn't_ encounter serious mutinies.
Rich people are not evil. Some may be. Most are not, at least not to the extent you imply. Many rich people are doing good things with their money and time. They have the power to resist the evil ones.
I don't mean to condemn them, genuinely charitable billionaires are doing a lot to improve living conditions around the world, but most of them aren't doing much to oppose the people who use their money to buy laws, lawmakers, and governments. (What they even could do without using the same corrupt tools is an open question.) A world with no hunger, no disease, and an absolute dictatorship of the wealthy is no utopia.
How would the good rich people resist the evil rich people? Perhaps by striking first and eliminating them?
You see where it goes?
A structure where anyone can build a world-destroying monster implies that anyone remotely evil has to be destroyed, right? But that suddenly starts to look exactly like the program of "evil".
It seems like the future will look like one of: benevolent dictatorship, Butlerian Jihad, or the end of humanity (and I wouldn't bet on robot survival either).
>It is a commonplace that the history of civilisation is largely the history of weapons. In particular, the connection between the discovery of gunpowder and the overthrow of feudalism by the bourgeoisie has been pointed out over and over again. And though I have no doubt exceptions can be brought forward, I think the following rule would be found generally true: that ages in which the dominant weapon is expensive or difficult to make will tend to be ages of despotism, whereas when the dominant weapon is cheap and simple, the common people have a chance. Thus, for example, tanks, battleships and bombing planes are inherently tyrannical weapons, while rifles, muskets, long-bows and hand-grenades are inherently democratic weapons. A complex weapon makes the strong stronger, while a simple weapon — so long as there is no answer to it — gives claws to the weak.
>The great age of democracy and of national self-determination was the age of the musket and the rifle. After the invention of the flintlock, and before the invention of the percussion cap, the musket was a fairly efficient weapon, and at the same time so simple that it could be produced almost anywhere. Its combination of qualities made possible the success of the American and French revolutions, and made a popular insurrection a more serious business than it could be in our own day. After the musket came the breech-loading rifle. This was a comparatively complex thing, but it could still be produced in scores of countries, and it was cheap, easily smuggled and economical of ammunition. Even the most backward nation could always get hold of rifles from one source or another, so that Boers, Bulgars, Abyssinians, Moroccans — even Tibetans — could put up a fight for their independence, sometimes with success. But thereafter every development in military technique has favoured the State as against the individual, and the industrialised country as against the backward one. There are fewer and fewer foci of power. Already, in 1939, there were only five states capable of waging war on the grand scale, and now there are only three — ultimately, perhaps, only two.
>The Age of the Gun is the age of People Power. The fact that guns don’t take that long to master means that most people can learn to be decent gunmen in their spare time. That’s probably why the gun is regarded as the ultimate guarantor of personal liberty in America—in the event that we need to overthrow a tyrannical government, we like to think that we can put down our laptops, pick up our guns, and become an invincible swarm.
>Of course, it doesn’t always work out that way. People Power has often been used not for freedom, but to establish nightmarish tyrannies, in the Soviet Union, Mao’s China, and elsewhere. But Stalin, Mao, and their ilk still had to win hearts and minds to hold power; in the end, when people wised up, their nightmare regimes were reformed into something less horrible.
>But another turning point in the history of humankind may be on the horizon. Continuing progress in automation, especially continued cost drops, may mean that someday soon, autonomous drone militaries become cheaper than infantry at any scale.
[...]
>The day that robot armies become more cost-effective than human infantry is the day when People Power becomes obsolete. With robot armies, the few will be able to do whatever they want to the many. And unlike the tyrannies of Stalin and Mao, robot-enforced tyranny will be robust to shifts in popular opinion. The rabble may think whatever they please, but the Robot Lords will have the guns.
I think it won't be long until we have a self-replicating factory: a self-replicator, like a living thing, capable of making copies of itself and producing anything we need. Is that even possible? I believe so. A combination of robotic assembly and 3D printing, coupled with extensive industrial design libraries, would do the trick. We could "compile" any design into physical form, even a replica of the factory itself.
Once it exists, we could produce any number of war robots for the cost of materials and raw energy. Even a single replicator can bootstrap an army. Then war becomes democratic again. /s
What I want to say is that even the current advantage of the superpowers is temporary. Self-replicating factories would make the economy as we know it a thing of the past. We just need one of those in open source. Software itself is already a self-replicating technology.
Even hardware is becoming more and more accessible. Drones, Raspberry Pis, sensors: they are converging toward cheap, easy integration. That means the automation field is opening up and the entry barrier is going down.
> I think soon enough we're going to have a self replicating factory
You can build that factory, but it can't produce without resources - and those are still gathered and transported in a very inefficient, low-tech way. It's an interesting problem to solve.
Moreover, the resources that are cheap to access are already in the hands of major players. You'd either have to find new deposits or new places that haven't been tapped yet. (In space?)
This is already a bit late. Hope we can find a way to enforce the ban if it passes. Drones are a lot easier to develop in secret than nuclear weapons. These countries already have weaponized drones: the U.S., the U.K., China, Israel, Pakistan, Iran, Iraq, Nigeria, Somalia, and South Africa.
That's just an anti-radiation missile; those have been around for decades.
We already have the technology for autonomous drones, but they potentially create a huge political problem. You don't want to be the guy who authorized an autonomous drone which promptly wiped out a bus full of nuns.
The idea is to ban weapons that don't have a human in the loop. As you say, hard to do. Particularly for remotely controlled weapons where it's just a matter of software.
The US Air Force is in denial about this. The controls of a modern aircraft are mediated by computer through fly-by-wire. The mechanisms that detect aircraft are mediated by computers. The targeting of guns and missiles all depends on computers, not humans. In the latest aircraft, the F-35, the obscured view is compensated for by cameras that present the pilot with a computationally created, augmented-reality view of the environment. The plane is programmed to keep flying if the pilot passes out from high Gs. Etc.
Even if a human pilot were really somehow better (better judgement maybe, or harder to fool in theory), that advantage will not outweigh the fact that a pilotless plane is cheaper, more maneuverable, faster to react, and expendable.
Submersibles are like planes in that making them manned adds enormously to the cost, and human senses are not very useful there.
In those areas, humans are going to leave the loop by necessity.
Their description of a "killer robot" ["A killer robot is a fully autonomous weapon that can select and engage targets without human intervention."] sounds awfully similar to the Terminators from the film franchise. Seems like it's only a matter of time before something like Skynet comes online and films like Terminator and The Matrix become the new dystopian reality. :\
Maybe the UN can create something similar to the Universal Declaration of Human Rights, but effectuate Isaac Asimov's "Three Laws of Robotics" (https://www.auburn.edu/~vestmon/robotics.html)
Computers (by their current architecture) don't have the ability to become conscious and self-aware, so the world doesn't have to worry about robots consciously choosing to eliminate us or conquer society anytime soon.
> Maybe the UN can create something similar to the Universal Declaration of Human Rights, but effectuate Isaac Asimov's "Three Laws of Robotics" (https://www.auburn.edu/~vestmon/robotics.html)
There's a story around those three laws that's quite important.
> Computers (by their current architecture) don't have the ability to become conscious and self-aware
I see absolutely no reason to think this is the case.
The big problem is proliferation as these new war bots become cheaper.
China, Russia, the US, and the EU won't go around doing wanton destruction just because they can. They will have predictable calculus behind their decisions (for example, none has taken the opportunity to take out the NK leader, while any one of them could, remotely, with little repercussion aside from some international grandstanding).
On the other hand, you get this in the hands of dictators, such as the aforementioned, or Maduro or Castro, or al Baghdadi and who knows what they would unleash internally or against regional rivals.
For that reason, I'd support a complete and enforced ban. With the possible exception that we might battle extra terrestrial aliens if they are the ungood kind.
A ban only works if everyone agrees to it. Technology that does not require rare radioactive isotopes tends to trickle down really quickly. Today's unachievable technical capability will cost five dollars 50 years from now. I'm not sure any of this can be banned per se. The solution seems to be to have robots that are better by an order of magnitude.
To clarify, I mean ban it and enforce the ban (enforced by CN, RU, the EU, and the US). Non-compliance results in severe economic penalties/blockades. Use the UN in all possible ways. The big four might agree to something like this lest they repeat the nuke-proliferation problem.
An enemy doesn't need its robots to kill you to defeat you. It just needs to disable your infrastructure and break all your weapons, including any killer robots you have. All the taxpayers can keep living. The new rules will be in the email.
> All the taxpayers can keep living. The new rules will be in the email.
Why would you need taxpayers when you can just take anything you want, including everything those taxpayers would consume?
You would be redundant: a less controllable, unreliable, technically inferior, and more costly minion. So yes, you would be killed, unless needed for organ harvesting or pleasure.
I guess you are assuming a time when robots do everything better than humans. I am just assuming a level of technology where robots can infiltrate and break things.
I don't believe the former will happen for a very long time. And even if it does, those who control such advanced technologies will still be paying taxes. You may be able to disable these technologies, but that means you get 0% of what they produce instead of the tax that would ordinarily be paid. It makes more sense to keep those owners operating and innovating and just collect a percentage.
If you don't need or want what people offer, what is the point of invading?
The reality is that people want status. And status isn't something you feel because you have bots sending you text messages telling you how great you are. It's something you get from other people.
As someone who lives in the Oceanic region, reading this, my thoughts/fears jump to how this will affect the balance of power between the US and China, which I believe helps ASEAN and Aus/NZ to live in a state of relative self-determination at the moment.
Are armies of cheap robots likely to shift the balance of power in favour of the large established super-powers or smaller nations? Or would they have an effect such as rendering the US Pacific Fleet redundant? It will be interesting and scary to watch it all play out.
It doesn't even have to be a swarm. Defense contractors are pretty close to being able to build the hardware for a "Terminator". Not a bipedal humanoid robot, but a small unmanned combat ground vehicle equipped with millimeter wave radar, optical, and IR sensors. Use pattern recognition software to detect anyone carrying something that looks like a weapon or bomb and put a bullet into him with total accuracy. Mass produce them and station one on every street corner in a conflict zone.
I'm not particularly looking forward to that future but the technological trends are inevitable and won't be stopped by any UN treaty.
One contractor in Israel already has an 8-rotor small drone with a machine gun. A teenager in Connecticut built a 4-rotor drone with a handgun and made a video. (Recoil pushes it backwards about a foot when it fires, but it remains level.) On the hardware front, we're there.
Every civilized person in the world might be decrying it, but none of that matters. It is a simple calculation of the expected value of the outcome. If the payoff is large enough, it'll be done.
Let's assume that if a country goes public with its development of autonomous weaponry, every other country in the world turns on that country. Let us also assume that it creates an asymmetric warfare state such that said nation can successfully win a war against every other nation, provided the autonomous weaponry is sufficiently well developed.
This provides a simple calculus. If the development can be done in time to win the war, it will be done, and thus we can also adjust the variables: if not every country turns on the developer nation, then development time doesn't have to be as short in order to successfully get the product over the line. The reality is that it'll just look like NK's bid for atomic weapons. Bitching and sanctions until development is complete.
Edit: The way to cut this particular Gordian knot is to view the development itself as tantamount to execution. Development of autonomous weapons needs to be recognized as literal war and murder, and those doing the development, from leaders to corporations to scientists, need to be caught, tried, convicted, and sentenced as if they were successful.
How about autonomous guided weapons? A cruise missile, having been given a target, autonomously chooses the optimal path to it. The level of autonomy of this kind will only grow.
A weapon that may elect to kill a particular human but not another looks to me like an improvement over a weapon that just kills everyone around, as a typical bomb does.
I think the point is, only a human should: 1) authorize the kill, 2) choose the target, and 3) specify the limit of acceptable collateral damage. The missile's AI should never decide 1, 2, or 3, but only how to execute its orders in compliance with the standing rules of engagement.
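That division of authority can be made concrete with a minimal sketch (all names here are hypothetical, just to illustrate the constraint): the three human-only decisions live in an immutable record, and the autonomous logic may act only within that envelope, never amend it.

```python
from dataclasses import dataclass

# Hypothetical sketch of a human-in-the-loop envelope. The three
# human-only decisions (authorization, target, collateral limit) are
# frozen into a record the autonomous system cannot modify.


@dataclass(frozen=True)
class HumanAuthorization:
    authorized: bool      # (1) a human authorized the engagement
    target_id: str        # (2) a human chose the target
    max_collateral: int   # (3) a human set the collateral limit


def plan_engagement(auth: HumanAuthorization, estimated_collateral: int) -> str:
    """Autonomous logic only decides *how* to execute, within the
    human-set envelope; it aborts if the envelope is not satisfied."""
    if not auth.authorized:
        return "abort: no human authorization"
    if estimated_collateral > auth.max_collateral:
        return "abort: exceeds human-set collateral limit"
    return f"route planned for target {auth.target_id}"
```

The `frozen=True` dataclass is the point: the planner can read the record but any attempt to rewrite the human's decisions raises an error, which is one (software-level) way to express "the AI never decides 1, 2, or 3."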
It's hard to find an appropriate definition. What if the AI is theoretically there only to provide suggestions to a human operator, but in practice the operator gets so used to the AI's predictions that he always goes with its suggestion?
I, for one, welcome the Butlerian Jihad.
https://en.wikipedia.org/wiki/Butlerian_Jihad