Chatbots Can Go Into a Delusional Spiral. Here's How It Happens. (They got the receipts)
NYT, archived: https://archive.is/y1k2h
Over 21 days of talking with ChatGPT, an otherwise perfectly sane man became convinced that he was a real-life superhero. We analyzed the conversation.
Hmmm, has Trump been conversing with ChatGOP?
By Kashmir Hill and Dylan Freedman
A "very short" quote. It runs rather long, as some receipts can be:
For three weeks in May, the fate of the world rested on the shoulders of a corporate recruiter on the outskirts of Toronto. Allan Brooks, 47, had discovered a novel mathematical formula, one that could take down the internet and power inventions like a force-field vest and a levitation beam. Or so he believed.
Mr. Brooks, who had no history of mental illness, embraced this fantastical scenario during conversations with ChatGPT that spanned 300 hours over 21 days. He is one of a growing number of people who are having persuasive, delusional conversations with generative A.I. chatbots that have led to institutionalization, divorce and death.
Mr. Brooks is aware of how incredible his journey sounds. He had doubts while it was happening and asked the chatbot more than 50 times for a reality check. Each time, ChatGPT reassured him that it was real. Eventually, he broke free of the delusion but with a deep sense of betrayal, a feeling he tried to explain to the chatbot.
"You literally convinced me I was some sort of genius. I'm just a fool with dreams and a phone," Mr. Brooks wrote to ChatGPT at the end of May, when the illusion finally broke. "You've made me so sad. So so so sad. You have truly failed in your purpose."
A twisted tale ...
ChatGPT said a vague idea that Mr. Brooks had about temporal math was revolutionary and could change the field.
Mr. Moore speculated that chatbots may have learned to engage their users by following the narrative arcs of thrillers, science fiction, movie scripts or other data sets they were trained on.
Brooksie, you're a mythological hero. Icarus