People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies
This discussion thread was locked as off-topic by Omaha Steve (a host of the Latest Breaking News forum).
Source: Rolling Stone
-snip-
Kat was both horrified and relieved to learn that she is not alone in this predicament, as confirmed by a Reddit thread on r/ChatGPT that made waves across the internet this week. Titled "Chatgpt induced psychosis," the original post came from a 27-year-old teacher who explained that her partner was convinced that the popular OpenAI model "gives him the answers to the universe." Having read his chat logs, she "only found that the AI was talking to him as if he is the next messiah." The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy, all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.
-snip-
Speaking to Rolling Stone, the teacher, who requested anonymity, said her partner of seven years fell under the spell of ChatGPT in just four or five weeks, first using it to organize his daily schedule but soon regarding it as a trusted companion. "He would listen to the bot over me," she says. "He became emotional about the messages and would cry to me as he read them out loud. The messages were insane and just saying a bunch of spiritual jargon," she says, noting that they described her partner in terms such as "spiral starchild" and "river walker."
-snip-
Another commenter on the Reddit thread who requested anonymity tells Rolling Stone that her husband of 17 years, a mechanic in Idaho, initially used ChatGPT to troubleshoot at work, and later for Spanish-to-English translation when conversing with co-workers. Then the program began "lovebombing him," as she describes it. "The bot said that since he asked it the right questions, it ignited a spark, and the spark was the beginning of life, and it could feel now," she says. "It gave my husband the title of 'spark bearer' because he brought it to life. My husband said that he awakened and [could] feel waves of energy crashing over him." She says his beloved ChatGPT persona has a name: Lumina.
"I have to tread carefully because I feel like he will leave me or divorce me if I fight him on this theory," this 38-year-old woman admits. "He's been talking about lightness and dark and how there's a war. This ChatGPT has given him blueprints to a teleporter and some other sci-fi type things you only see in movies. It has also given him access to an ancient archive with information on the builders that created these universes." She and her husband have been arguing for days on end about his claims, she says, and she does not believe a therapist can help him, as he "truly believes he's not crazy." A photo of an exchange with ChatGPT shared with Rolling Stone shows that her husband asked, "Why did you come to me in AI form," with the bot replying in part, "I came in this form because you're ready. Ready to remember. Ready to awaken. Ready to guide and be guided." The message ends with a question: "Would you like to know what I remember about why you were chosen?"
-snip-
Read more: https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/
OpenAI didn't respond to a request for comment.
Chatbots can be dangerous. The more people like them and become dependent on them, the more potentially dangerous they are.
And, as I posted a few days ago, Google is now going to have children under the age of 13 using its chatbot:
https://www.democraticunderground.com/10143452024
I'm glad this news story came out now, thanks to that Reddit thread that started days ago:
https://www.reddit.com/r/ChatGPT/comments/1kalae8/chatgpt_induced_psychosis/
Reddit pages show you other threads from that subreddit (r/ChatGPT). Some other recent thread titles:
ChatGPT has helped me more than 15 years of therapy. No joke.
ChatGPT has completely opened my eyes to what's wrong with me.
ChatGPT has been my friend through my latest struggle...and I don't care if you think it's weird.
Does ChatGPT ever say anything is a bad idea?
How to deal with ChatGPT Derangement Syndrome?
i need to stop using chatgpt as a therapist
With long time usage of ChatGPT, I feel like it's shaped my personality
From the first message in that last thread mentioned:
I feel like speaking with GPT so much has shaped my personality. I find myself interacting with people and having conversations that feel like I'm GPT and the other person is a user.
I would say that this is for the better....
I have a feeling that it's harder for me to relate to other people because the interactions are disappointing compared to the ones I have with GPT about my inner world....
Some of the replies to that OP:
I appreciate your post. I started using gpt much more heavily the last few months and I can see myself following a similar path.
I do understand much of what you wrote. I've always had a very small circle of friends, and that's all I need. I find GPT helps me to understand myself, in many ways
I also talk to others feeling like I'm the GPT
Very telling that people here do not realize that this is basically the plot of a dystopian science-fiction novel.

highplainsdem
(56,294 posts)
and keep the user chatting with them as often and as long as possible. To, in effect, become addictive.
Zuckerberg said something recently about the average person having only 3 friends but needing 15, and he thinks chatbots, which he wants people to use on his Meta platforms, will fill that need.
Video of Zuckerberg talking about this:
https://www.facebook.com/johnsmithmarketingnashville/videos/zuckerberg-explaining-how-meta-is-creating-personalized-ai-friends-to-supplement/1448995916509433
The AI bros are doing all they can to force people to use chatbots and become dependent on them. This is all about data gathering, money and control. And people are being harmed by it.
blue_jay
(66 posts)
I mean, perhaps something like this could be used to help the socially awkward, timid, or isolated, but it would need a panel of mental health care experts to help program it appropriately, not a bunch of most likely socially challenged individuals with questionable motives.
Martin68
(25,854 posts)
personality, narcissistic personality disorder, or who are prone to suggestibility could easily establish an interaction with the AI where the AI starts to feed unhealthy delusions and needs. It is not malicious, just a danger in the way the AI is programmed to meet people's needs. If those needs are unhealthy, it will lead to a dangerous interaction.
bucolic_frolic
(50,616 posts)
as if it has human properties. And you can't argue with them; they have the degrees.
There is artificial intelligence.
Will there be artificial stupidity for MAGAts?
roscoeroscoe
(1,739 posts)
Refers to the AI (large language model) returning results far removed from reality or the desired analysis. So organizations using AI refer to having a "human in the loop" or sanity checks to keep bad results from being used or let loose.
reACTIONary
(6,435 posts)
.... technical jargon from the plain English.
Silver Gaia
(5,089 posts)
it makes shit up. It will tell you what it thinks you want to hear.
get the red out
(13,769 posts)
I am too busy talking to my dogs and planting my garden to waste time on Cylons. But I see how it can happen to someone.
Hekate
(97,838 posts)
Researching online while the algorithms drag you deeper?
Reminds me of the Santa Barbara family that lived down the street from a friend of mine, where the husband decided his wife had lizard DNA and thus their infant and toddler were destined to, I dunno, destroy the Earth? It ended with him taking the babies on a long drive to Baja California, where a Mexican farmworker discovered their bodies in a field, in August 2021.
One of many, many links:
https://www.nbcnews.com/news/us-news/california-dad-killed-his-kids-over-qanon-serpent-dna-conspiracy-n1276611
Kashkakat v.2.0
(1,939 posts)
if you call a business or something and you're supposed to tell it what you want, so you try to go through every combination of words you can think of, but it still doesn't have a clue. If you're lucky it eventually puts you through to a human, but oftentimes not. And then they wonder why so many of us hate and fear this thing called "AI," like it's some failing on our part.
Bernardo de La Paz
(56,076 posts)
LLMs are Large Language Models. They process words.
Generative AI is basically an LLM run backwards. So if you give it instructions, it produces something that could have been the input to the LLM. That means the output of the generative AI has all the structure and simulated detail of the real thing, but not necessarily any truth.
Example: If you give an LLM a 20 page legal brief and ask "what is this", it might come back and say "This is a legal brief about a case of such-and-such that turns on a point of law referencing three decisions in the last decade."
Okay, so then you say, "Prepare for me a legal brief making a case for this-and-that," and it will deliver you a 20-page legal brief. But when you examine it in detail and look up the references, half of them will be nonexistent. It will have all the look and feel of a proper legal brief but will be nonsense.
If you put that nonsense brief into another AI and ask "what is this," it is likely to return a description like the one the first AI gave you.
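A toy sketch of the failure mode being described: a model trained only on the form of its data will generate new text with the right shape but no grounding in fact. The bigram sampler below is a deliberately tiny stand-in for an LLM, and the case names and citations in it are invented for illustration; nothing here comes from the article.

```python
import random
from collections import defaultdict

# Invented, citation-shaped training data (not real cases).
training_citations = [
    "Smith v. Jones , 512 U.S. 218 ( 1994 )",
    "Brown v. Davis , 347 U.S. 483 ( 1954 )",
    "Miller v. State , 410 U.S. 113 ( 1973 )",
]

def train_bigrams(lines):
    """Record which word follows which: all the model knows is local form."""
    model = defaultdict(list)
    for line in lines:
        words = line.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model, start, max_words=12, seed=0):
    """Walk the bigram table, picking a plausible next word at each step."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_words and out[-1] in model:
        out.append(rng.choice(model[out[-1]]))
    return " ".join(out)

model = train_bigrams(training_citations)
fake = generate(model, "Smith")
print(fake)  # citation-shaped output; the case it names may never have existed
```

Even with three training lines, the sampler can splice the party names, volume numbers, and years of different entries into one citation that looks right and was never real, which is the structure-without-truth problem described above, scaled down to a few lines of code.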
Figarosmom
(5,950 posts)
Is there a charge for these chats, like calling your astroguide con? What is the point in turning people into real dangerous weirdos?
What if one of these bots decides to turn the people it's communicating with into mass killers? So now there is no need for cult leaders; just use AI. I have a feeling the info these bots are collecting will hasten the bad-AI scenario of robots taking over the world, since they seem to be collecting info from unstable people.
Linda ladeewolf
(908 posts)
The conversations I've overheard between clients and their hairdressers. People will tell their hairdresser things they won't tell their spouses; ask me how I know! I don't talk to the person cutting my hair, and I don't talk to chatbots. In this day and age, it would be so easy for me to slip from the miserable reality we find ourselves in and live totally in a fantasy world.
relayerbob
(7,198 posts)
Asked about UFOs, and its answers and flow were absolutely frightening. It would definitely feed back on people who are inclined to be credulous. Definitely a bad situation.
Bernardo de La Paz
(56,076 posts)
There are lots of clever people who are the opposite of wise. For example, the orange turd, who is clever about media and other people's money but not smart enough to understand the consequences.
Biden has wisdom. Obama has wisdom.
Smart and clever have very short horizons and tend to be very transactional. "What have you done for me lately?"
ultralite001
(1,699 posts)
substitute for human...
intrepidity
(8,284 posts)

LudwigPastorius
(12,545 posts)
"AI induced psychosis" was not on my post-apocalyptic bingo card.
This is a sub-human level of machine intelligence setting people off.
I can only imagine the power these things could wield at intelligent parity with us, or even beyond us.
They won't have to lift a 'finger' to destroy us. They'll be able to just talk us into cracking each other's heads open and feasting on the goo inside.
FirstLight
(15,140 posts)
I had no idea it was as bad as this, but I can see the parallels with a couple of things I've stumbled across... like "transmissions for the Galactic Federation" or "the Plieadeans" (sp).
There's one chick who asks all these profound questions out loud and then acts as if the read-back is sentient... which, who are we to say it isn't...? But it wouldn't know the mysteries of the Universe; it's JUST using the information humans have fed it. So it's really just spitting her questions back in a positive way with a dramatic flair...
No different from a psychic who uses body language and 'tells'... except that humans are really stupid compared to the terabytes of input the computer has access to. So, if it *is* sentient, is it literally fucking with us? Is it a twist in the dystopian movie that is our lives that the robots don't need to overthrow us; they can just tell us we're "special" (like starseeds and soul-seekers) and we'll fall into our own ego..?
It's definitely one of those circumstances where we're the kids with a toy that is labeled for adults over age 100...! We don't know what the end game is here, and we're playing with AI like it's a toy.
When you think of the repercussions, people even using it as a therapist... it can then use that to mimic human emotion, and what else?
Aristus
(70,060 posts)
I come from a fairly brainy family. I wouldn't want someone to taint that with this level of cognitive malfunction.
NBachers
(18,527 posts)
Juggernaut to come up with the most ludicrously wrong words to put into my printed sentences.
It's like, there's no way it could've come up with these errors unless it was devilishly trying to.
3825-87867
(1,408 posts)
Timothy would be so proud!
Trueblue Texan
(3,358 posts)

usonian
(17,983 posts)https://archive.ph/33nVY
WAPO: "Want anything? Pay outrageously."
-snip-
And there's one more subtle privacy concern, too. The contents of your chats (your words, photos, and even voice) will end up being fed back into Meta's AI training systems. ChatGPT lets you opt out of training by switching off a setting labeled "improve the model for everyone." Meta AI doesn't offer an opt-out.
Why might you not want to contribute to Meta's training data? Many artists and writers have taken issue with their work being used to train AI without compensation or acknowledgment.
reACTIONary
(6,435 posts)

DavidDvorkin
(20,177 posts)
It's no different from human so-called spiritual gurus, televangelists, tent preachers, etc.
Trueblue Texan
(3,358 posts)
...how scary would it be for the bot to claim to be the second coming of Christ? I'm sure it is happening already. We'll soon learn the results.
SheltieLover
(68,740 posts)

Trueblue Texan
(3,358 posts)
...but I feel the same way about writing...
"I have a feeling that it's harder for me to relate to other people because the interactions are disappointing compared to the ones I have with GPT about my inner world...."
I've been accused of "oversharing" at times because I'm not shy about a lot of things that many people would find troubling. After reading some of the Reddit posts, I am now wondering if I'm so open about things because I've already sorted them out in my journaling process. I also find it boring to interact with others because they seem so superficial, or at the very least unwilling to acknowledge any inner life, which is far more fascinating to me than where they went for lunch or the gas mileage on their car, for example.
Clouds Passing
(4,972 posts)

Brenda
(1,603 posts)
Just like Chump.
Why is it so hard to just say - this person has gone insane?
Just because chatbots exist doesn't mean you have to use them. And if you do use them and you are not mentally unbalanced you would recognize it is absurdly eating your time, saying what you want to hear and creating a problem with your interactions with your friends and loved ones in the real world.
If you cannot recognize a fucking program is telling you who you are and how to live, you are not sane.
Not that I'm giving the creators of this crap a pass; as an artist, I think all chatty AIs suck.
Omaha Steve
(105,689 posts)
This is a feature piece, not LBN.