What are the chances of a robot holocaust?
In the past week, I’ve interviewed two of the world’s leading scientists in the field of robotics – Professor Kevin Warwick of the University of Reading, and Hod Lipson of Cornell. I know very little about the real-life science of robots, so I found the conversations really eye-opening.
From the moment that the word robot was invented (Karel Capek’s 1920 play R.U.R. – Rossum’s Universal Robots), we have been obsessed with the idea that as soon as we create artificial intelligence, it will wipe us out (at the end of R.U.R., only one man is left alive in a world of robots). The Terminator, The Matrix, Battlestar Galactica – again and again we see the robots rise up and do their damnedest to kill us all. I find it fascinating that we’re so focused on this particular possibility, and I have a lot of theories as to why. So of course I asked both scientists why people are so convinced that robots will want to exterminate us, and how likely this scenario really is.
In science fiction, there’s a real fear of artificial intelligence – this popular belief that as soon as machines become intelligent, they become a threat to us, and their first instinct will be to wipe us out. Why do you think people have that knee-jerk reaction?
Professor Hod Lipson:
“I agree that hostility to artificial intelligence is most people’s response, and I’m not sure why. Basically, I don’t think that a robot uprising is the way it’s going to go. As robots become more and more complex in their thinking, they’re also going to inherit all the aspects that come with increased intelligence. Intelligence is not just being smarter. Humans are more emotional than other animals – they can be depressed, they can question their existence – they are also more compassionate – they can feel empathy, and identify with other humans, whereas animals cannot identify with other animals. So as machines become more intelligent, you’ll see all these same things evolve. In fiction, future robotics systems generally do not take this into account. But Battlestar Galactica actually captures some of that idea. The Cylons have internal controversy, within themselves as individuals and as a society. There’s no reason why an intelligent race would be unified or monolithic in its thinking… Anything with that level of complexity is going to have the same kind of diversity of opinions and passions as humans do.”
And Kevin Warwick said:
“Well, I actually think it is a realistic possibility. If you base it on humans, and look at how we have behaved ourselves, when one group has come across another, there has almost always been some kind of a battle, with one side trying to destroy or consume the other. Even with the Aztecs and Incas, often one group is wiped out. Looking at the group that was destroyed, from the outside, you can reflect and say they were typically the more culturally advanced… they had better drainage systems, education, social order; but the others came along with better weapons and wiped them out. So particularly if the machines or cyborgs we are looking at were created from humans, and even more particularly if they were created from a military background, they could very well say, “What are we listening to the humans for? They can stay in zoos or colonies, but if they try and fight back we’ll destroy them.” And if that happens, I’m afraid the humans have no chance.”
For the complete interviews: