Billionaire investor Elon Musk took to Twitter over the weekend to urge people to be "super careful with [artificial intelligence]," claiming that it is "potentially more dangerous than nukes," The Verge reported.

Musk was reading Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, a professor of philosophy at Oxford University, and founding director of both the Future of Humanity Institute and the Program on the Impacts of Future Technology.

In his book, Bostrom argues that humanity will soon face an unparalleled situation in its history as a species -- it will no longer be the most intelligent "animal" on the planet. Bostrom counsels humanity to prepare for what it will be like to coexist with a superintelligent computer, and Musk was apparently convinced that more attention needs to be paid to this issue.

The notion that humanity will be extinguished by the descendants of computers it designed and programmed is a popular one, manifesting itself in films such as The Matrix and the Terminator franchise. But to have someone of Musk's stature lend credence to the possibility of human extinction by these means is significant, even if it is just on Twitter.

Musk has already made private space exploration possible and a viable electric car a reality, and has proposed a "hyperloop" that could ferry passengers from Los Angeles to San Francisco in less than 35 minutes.

["Robot cyborg war machine" via Shutterstock]