Researchers at the University of Cambridge plan to create a centre dedicated to studying the risks of future technologies, in hopes of preventing artificially intelligent machines from one day becoming humanity’s ultimate foe.
The idea is a common theme in science fiction and the basis of popular movies like The Matrix and The Terminator. But Huw Price, a philosophy professor at the University of Cambridge, said the concept shouldn’t be dismissed as merely a fanciful plot for Hollywood films.
“To the extent — presently poorly understood — that there are significant risks, it’s an additional danger if they remain for these sociological reasons outside the scope of ‘serious’ investigation,” he said on Sunday.
Price has co-founded the Centre for the Study of Existential Risk along with Skype co-founder Jaan Tallinn and astrophysics professor Martin Rees. The center will study “extinction-level” risks posed by nanotechnology, biotechnology, artificial intelligence and climate change.
In August, Price and Tallinn wrote that so-called “narrow AI” — computers designed for specific tasks such as playing chess — was not of particular concern. The real worry was AI that matched or exceeded human intelligence and could write software on its own.
“The concern is that by creating computers that are as intelligent as humans (at least domains that matter to technological progress), we risk yielding control over the planet to intelligences that are simply indifferent to us, and to things that we consider valuable — things such as life and a sustainable environment,” they explained, comparing the scenario to humans’ indifference toward endangered gorillas.
The center will launch next year.
[War machine via Shutterstock]