Massachusetts Institute of Technology (MIT) scientists say they have created what they call the world's first "psychopath" Artificial Intelligence (AI), The Daily Mail reports.
The team of researchers named their creation "Norman" after the character in the 1960 Alfred Hitchcock psychological horror film Psycho.
"Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms," the Scalable Cooperation! project explained on their website.
The researchers built a "psycho" AI to study a danger often summarized by the acronym GIGO: "Garbage In, Garbage Out."
After prolonged exposure to "the darkest corners of Reddit," Norman was presented with inkblot images in a Rorschach test, the psychological tool created by Hermann Rorschach in 1921.
The "psycho" AI Norman was asked to write a caption for each inkblot, which were compared to a standard image captioning neural network.
Norman determined far darker captions than the standard AI.
[caption id="attachment_1290506" align="aligncenter" width="640"] Difference in interpretation between the 'Psycho' Artificial Intelligence compared to a neutral AI.[/caption]
Prof. Iyad Rahwan of MIT's Media Lab spoke to the BBC about the project.
"Data matters more than the algorithm," Rahwan concluded. "It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves."
[caption id="attachment_1290508" align="aligncenter" width="640"] Another example of the difference in interpretation between the 'Psycho' Artificial Intelligence compared to a neutral AI.[/caption]
"There is a growing belief that machine behaviour can be something you can study in the same way as you study human behaviour," he added.
Microsoft's former chief envisioning officer Dave Coplin told the BBC that Norman reveals how bias can find its way into software programs that are presented as impartial.
"We are teaching algorithms in the same way as we teach human beings so there is a risk that we are not teaching everything right," Coplin noted. "When I see an answer from an algorithm, I need to know who made that algorithm," he added.