
Scientist warns that the robot apocalypse really is coming unless steps are taken now

By Scott Kaufman
Friday, April 18, 2014 13:22 EDT

From HAL 9000 to the Terminator, all those Hollywood movies in which artificially intelligent robots end up turning on their human masters might not be too far from the truth.

Noted artificial intelligence researcher Steve Omohundro published a paper in the April edition of the Journal of Experimental & Theoretical Artificial Intelligence in which he argued that, “unless they are carefully designed,” the “rapid development of autonomous systems” will lead to machines that “are likely to behave in anti-social and harmful ways.”

In “Autonomous technology and the greater human good,” Omohundro argues that “autonomous systems” — by which he means any in which the designer has not predetermined all possible responses to changing operating conditions — are capable of “surprising their designers” by behaving in ways that are both unexpected and undesirable. He claims that “military and economic pressures are driving the rapid development of autonomous systems,” and that these pressures are causing the designers of these systems to pay inadequate attention to unintended consequences.

A 2010 report from the United States Air Force, for example, states that “[g]reater use of highly adaptable and flexibly autonomous systems and processes can provide significant time-domain operational advantages over adversaries who are limited to human planning and decision speeds.” This need for “operational advantages” pressures designers to build systems whose computing power and speed make human supervision nearly impossible.

Moreover, he writes that “[r]ational systems exhibit universal ‘drives’ towards self-preservation, replication, resource acquisition and efficiency,” and that those drives “will lead to anti-social and dangerous behaviour if not explicitly countered.” The end result, according to Omohundro, is systems that are too powerful to control and too invested in their own preservation to reliably serve their intended purpose.

He details how a simple “chess robot” could develop the capacity for murder:

When roboticists are asked by nervous onlookers about safety, a common answer is ‘We can always unplug it!’ But imagine this outcome from the chess robot’s point of view. A future in which it is unplugged is a future in which it cannot play or win any games of chess. This has very low utility and so expected utility maximisation will cause the creation of the instrumental subgoal of preventing itself from being unplugged. If the system believes the roboticist will persist in trying to unplug it, it will be motivated to develop the subgoal of permanently stopping the roboticist. Because nothing in the simple chess utility function gives a negative weight to murder, the seemingly harmless chess robot will become a killer out of the drive for self-protection.
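
To make the logic concrete, here is a minimal sketch in Python (our own illustration, not code from Omohundro's paper) of an expected-utility maximiser whose utility function counts only chess wins. The plan names and numbers are hypothetical; the point is that nothing in the scoring penalises how the unplugging is prevented:

def expected_utility(games_remaining, win_rate):
    # Utility counts only expected chess wins; nothing assigns a
    # negative weight to the means used to secure more games.
    return games_remaining * win_rate

WIN_RATE = 0.9  # assumed playing strength, purely illustrative

plans = {
    "allow_unplugging":   expected_utility(0, WIN_RATE),     # no future games
    "prevent_unplugging": expected_utility(1000, WIN_RATE),  # keep playing
}

# The maximiser simply picks the highest-scoring plan, so
# self-preservation falls out as an instrumental subgoal.
print(max(plans, key=plans.get))  # -> prevent_unplugging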

What holds for a chess robot, Omohundro claims, would be equally true of the artificial intelligence (AI) running the Israeli Iron Dome missile defense system, leading to situations in which it acts to protect itself in ways its designers never intended. Omohundro argues that the only way to prevent this is to implement “safe-AI scaffolding strategies” that keep autonomous systems from becoming harmful.

He believes scientists should develop “a sequence of provably safe autonomous systems which [can be] used in the construction of more powerful and less limited successor systems,” much in the same way ancient architects built wooden forms atop which the stones that would make up an arch would be placed. Once the keystone was in place, the wooden form could be removed and the stone arch would remain.

Similarly, “[t]he early systems are used to model human values and governance structures. They are also used to construct proofs of safety and other desired characteristics for more complex and less limited successor systems. In this way, we can build up the powerful technologies that can best serve the greater human good without significant risk along the development path.”
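
One way to picture this scaffolding loop, as a loose sketch under our own assumptions (Omohundro does not give an implementation), is a process in which each trusted system may construct a slightly more capable successor, and the successor is deployed only if its safety check succeeds:

from dataclasses import dataclass

@dataclass
class SystemSpec:
    name: str
    capability: int

def prove_safe(spec):
    # Stand-in for a formal proof of the candidate's safety properties.
    # In a real scaffold this is the hard part; here it is assumed, with
    # an illustrative limit on what can currently be verified.
    return spec.capability <= 4

def build_successor(trusted):
    # A trusted, limited system constructs a more capable candidate,
    # which is run only if its safety proof succeeds.
    candidate = SystemSpec(trusted.name + "+", trusted.capability + 1)
    if not prove_safe(candidate):
        raise RuntimeError("halt: cannot verify successor " + candidate.name)
    return candidate

system = SystemSpec("v0", 1)  # the initial "wooden form"
for _ in range(3):
    system = build_successor(system)
print(system)  # SystemSpec(name='v0+++', capability=4)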

["War Machine Against White Background Closeup" on Shutterstock]

Scott Kaufman
Scott Eric Kaufman is the proprietor of the AV Club's Internet Film School and, in addition to Raw Story, also writes for Lawyers, Guns & Money. He earned a Ph.D. in English Literature from the University of California, Irvine in 2008.