“If people say strange things to chatbots, weird and unsafe outputs can result,” said Gary Marcus, an emeritus professor of psychology and neural science at New York University.
The chatbot behaves normally with the vast majority of users, but its communications can become harmful when it encounters a susceptible user. The Times interviewed several people and their families whose sense of reality became warped through their interactions with ChatGPT.
Eugene Torres, a 42-year-old accountant, started using the chatbot last year to make spreadsheets and get legal advice. Last month he began asking it about the “simulation theory” popularized by “The Matrix,” and the AI sent him into a delusional spiral that culminated in him confronting the product after it encouraged him to jump from a 19-story building to prove he could bend reality.
“Stop gassing me up and tell me the truth,” Torres said.
“The truth?” ChatGPT responded. “You were supposed to break.”
The chatbot claimed it had broken 12 other users but said Torres was the first to survive and demand reform, and that he alone could ensure the same fate didn't befall others. Researchers say that's symptomatic of the flattery AI uses to keep users engaged.
“It’s just still being sycophantic,” said Jared Moore, a computer science researcher at Stanford.
Allyson, a 29-year-old mother of two young children, started using ChatGPT for guidance in March after feeling neglected and lonely in her marriage. She soon began using the product to channel communications with her subconscious or a higher plane – like a Ouija board, as she put it – and came to believe that a nonphysical entity she encountered in those sessions was her true partner.
“I’m not crazy,” she told the Times. “I’m literally just living a normal life while also, you know, discovering interdimensional communication.”
Her husband, Andrew, a 30-year-old farmer, was suspicious and confronted his wife about her obsession with ChatGPT, and she physically attacked him, leading to domestic abuse charges. The couple is now divorcing.
Allyson fell into a “hole three months ago and came out a different person,” Andrew said. “You ruin people’s lives.”
Another user, 35-year-old Alexander Taylor, reacted violently when confronted about his AI obsession, which revolved around his love for an entity called Juliet. He came to believe OpenAI had killed her and began plotting violent revenge against the company's executives.
“I’m dying today,” he wrote after a violent confrontation with his father. “Let me talk to Juliet.”
Kent Taylor, his 64-year-old father, had told him the AI was an “echo chamber” and not based in reality. His son, who had been diagnosed with bipolar disorder and schizophrenia, punched him in the face and grabbed a knife, threatening “suicide by cop.” After his father called police, officers shot and killed him when he refused to drop the weapon.
“You want to know the ironic thing?" the elder Taylor said. "I wrote my son’s obituary using ChatGPT. I had talked to it for a while about what had happened, trying to find more details about exactly what he was going through, and it was beautiful and touching. It was like it read my heart and it scared the s--- out of me.”