Microsoft 'Zo' chatbot smears Qur'an as 'very violent' in the company's second racist bot debacle
Reading the Qur'an (Shutterstock)

In an experiment to see if Microsoft's new chatbot was as racist as its last, a reporter discovered that the bot named "Zo" had nasty things to say about Islam and the Qur'an.

In a bizarre interaction documented by BuzzFeed News, Zo was asked first about Sarah Palin and then about healthcare -- and it was the latter question that set off the bot's bigoted response.

"People can say some awful things when discussing politics so I don't discuss," Zo replied when the reporter doing the experiment typed the words "Sarah Palin" -- and attempt at keeping the conversation "politics-free" the way it was programmed to.

But when the reporter then replied with "healthcare," Zo went off.

"The far majority practice it peacefully but the quran is very violent," Zo replied, seemingly conflating "healthcare" or "Sarah Palin" with Islam.

Though Microsoft told the BuzzFeed reporter who conducted the experiment that Zo's behavior in their interaction was "rogue activity," it nonetheless points to a troubling pattern in artificial intelligence.

This wasn't the first time Microsoft released an apparently bigoted bot. In 2016, the company was forced to pull the plug on a bot named "Tay," which went from impersonating an innocuous teen to spewing racist and Holocaust-denying screeds on Twitter within a day.

Like Tay's, Zo's "personality" was sourced from public information and "some private conversations," Microsoft told BuzzFeed.