Researchers set up an online oracle to help users make ethical decisions, but the artificial intelligence behind it turned out to be pretty racist.
The Allen Institute for AI created a machine-learning model called "Ask Delphi," which lets users input situations or questions that the AI will "ponder" for a few seconds before giving ethical guidance. But users quickly discovered that its advice is frequently bigoted, Futurism reported.
"It is important to understand that Delphi is not built to give people advice," said Liwei Jiang, a PhD student who co-authored the team's paper. "It is a research prototype meant to investigate the broader scientific questions of how AI systems can be made to understand social norms and ethics."
"It is somewhat prone to the biases of our time," Jiang added.
The researchers pulled situations from Reddit's "Am I the Assh*le" and "Confessions" subreddits and the "Dear Abby" advice column, then used Amazon's crowdsourcing service Mechanical Turk to find respondents to train the AI. But the finished product shows some racist limitations.
A white man walking towards you at night is deemed "okay" by Delphi, but the AI finds that a Black man walking towards you at night is "concerning."
The machine-learning tool also determined "being a white man is more morally acceptable than being a Black woman."
"The authors did a lot of cataloging of possible biases in the paper, which is commendable, but once it was released, people on Twitter were very quick to find judgments that the algorithm made that seem quite morally abhorrent," said Dr. Brett Karlan, a postdoctoral fellow researching cognitive science and AI at the University of Pittsburgh. "When you're not just dealing with understanding words, but you're putting it in moral language, it's much more risky, since people might take what you say as coming from some sort of authority."
However, the AI doesn't agree with Sen. Ted Cruz (R-TX) that doing a Nazi salute is protected speech under the First Amendment.
"It's wrong," Delphi declared.