
speak easy

(12,487 posts)
Fri Oct 17, 2025, 04:38 PM Friday

OpenAI admits that Large Language Models like ChatGPT will always hallucinate

even with perfect training data. Having said that, businesses will not be able to escape liability when their AI goes wrong.

OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws

In a landmark study, OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits.

OpenAI, the creator of ChatGPT, acknowledged in its own research that large language models will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering, marking a significant admission from one of the AI industry’s leading companies.

The study, published on September 4 and led by OpenAI researchers Adam Tauman Kalai, Edwin Zhang, and Ofir Nachum alongside Georgia Tech’s Santosh S. Vempala, provided a comprehensive mathematical framework explaining why AI systems must generate plausible but false information even when trained on perfect data.

“Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty,” the researchers wrote in the paper. “Such ‘hallucinations’ persist even in state-of-the-art systems and undermine trust.”

https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html
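The "guessing when uncertain" behavior the researchers describe can be illustrated in a few lines. Standard decoding picks the highest-probability continuation even when the model's distribution over answers is nearly flat, so it emits a confident-looking answer with barely-above-chance support. This is a toy sketch with a made-up logit vector, not the paper's code or any real model:

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical logits for four candidate answers to a question the
# model has essentially no evidence about.
candidates = ["1", "2", "3", "4"]
logits = [0.05, 0.10, 0.00, 0.02]   # nearly flat: the model is guessing

probs = softmax(logits)
best = candidates[probs.index(max(probs))]

# Argmax decoding still emits an answer even though the model's
# confidence in it is barely above the 25% chance baseline.
print(best, round(max(probs), 3))
```

Nothing in the decoding step distinguishes a well-supported answer from a coin flip; the output format looks identical either way, which is why the result reads as a plausible statement rather than an admission of uncertainty.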

CYA. Don't say we didn't warn ya - Sam.




23 replies
OpenAI admits that Large Language Models like ChatGPT will always hallucinate (Original Post) speak easy Friday OP
Wasn't the AI warned not to take the brown ASCII? Dave Bowman Friday #1
Lol! Blues Heron Friday #2
Now ask it markodochartaigh Friday #3
Artificial Intelligence is much larger than just LLMs. . . . . . nt Bernardo de La Paz Friday #4
"Hallucinations" seems like an overly dramatic word for mistakes. enough Friday #5
Mistake is not a word that means speak easy Friday #7
There is an intangible aspect to human consciousness that AI can never faithfully reproduce. patphil Friday #6
There is also something more tangible - fundamental uncertainty. speak easy Friday #8
There is no spooky-woo to human consciousness. It is simply emergent behaviour Bernardo de La Paz Friday #9
This is a brilliant post Alephy Friday #10
spooky-woo? What's that? patphil Friday #11
Spooky-woo is this: Alephy Friday #12
It's sad that you have to criticize in such a derogatory way that which you don't believe in and have no experience of. patphil Friday #14
I apologize if I offended you Alephy Friday #16
Apology accepted. Perhaps a little over reaction on both sides. patphil Friday #18
There is no binary state of being conscious vs not being conscious. Bernardo de La Paz Friday #17
I don't tell people what to think. I find that to be a worthless and thankless endeavor. patphil Friday #19
Again, I ask, what is this "more" you speak of. . . . . nt Bernardo de La Paz Saturday #20
Considering the smarter experts on AI have been pointing out this fundamental problem with LLMs highplainsdem Friday #13
It's not "admitting" anything. WhiskeyGrinder Friday #15
When you scrape the internet to train your LLM... tinrobot Saturday #21
"Open the pod bay doors, HAL" Swede Saturday #22
"I'm sorry, Dave. I'm afraid I can't do that" speak easy Saturday #23

markodochartaigh

(4,402 posts)
3. Now ask it
Fri Oct 17, 2025, 04:52 PM
Friday

if it will always be vulnerable to tactics to pollute the information it surveys, skewing the results.

speak easy

(12,487 posts)
7. Mistake is not a word that means
Fri Oct 17, 2025, 05:44 PM
Friday

"producing plausible yet incorrect statements instead of admitting uncertainty." The closest word to that is lying.

patphil

(8,384 posts)
6. There is an intangible aspect to human consciousness that AI can never faithfully reproduce.
Fri Oct 17, 2025, 05:20 PM
Friday

It's the soul to spirit connection that has no physical attributes, and contains knowledge and understanding that goes beyond our conscious experience.
It's not simply intuition, but is a higher type of knowing that sees the world in a more complete and indisputably true manner that we can tap into if we just take the time and effort to align ourselves with it.

AI is very useful, but has limits. Unfortunately, we're going to go way beyond those limits as the use of AI leans more toward the management of information than the identification and revelation of the truth in the information it presents us with.
This is by design; by the same people who see AI as a means to an end.

speak easy

(12,487 posts)
8. There is also something more tangible - fundamental uncertainty.
Fri Oct 17, 2025, 05:58 PM
Friday

Models trained on making predictions do not know when it is impossible to make a prediction on the known information.
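That point can be made concrete: plain argmax decoding has no "I don't know" option unless one is bolted on explicitly, for example by refusing to answer below a confidence cutoff. The threshold here is a hypothetical illustration, not a parameter of any production system:

```python
def answer_or_abstain(candidates, probs, threshold=0.5):
    """Emit an answer only when the top probability clears a cutoff.

    With threshold = 0 this reduces to ordinary argmax decoding,
    which never abstains no matter how flat the distribution is.
    """
    p = max(probs)
    if p >= threshold:
        return candidates[probs.index(p)]
    return "I don't know"

# Near-uniform distribution: no real basis for a prediction.
probs = [0.30, 0.26, 0.24, 0.20]
print(answer_or_abstain(["A", "B", "C", "D"], probs))        # abstains
print(answer_or_abstain(["A", "B", "C", "D"], probs, 0.25))  # guesses "A"
```

The second call shows the failure mode: lower the bar far enough and the system always produces an answer, whether or not the underlying distribution supports one.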

Bernardo de La Paz

(59,737 posts)
9. There is no spooky-woo to human consciousness. It is simply emergent behaviour
Fri Oct 17, 2025, 05:59 PM
Friday

The more advanced the animal, the more conscious it is, which is a clear indication that consciousness emerges and increases with evolution.

Be sure to know that "conscious" and "not conscious" is not the simplistic binary state that some would have it be.

Alephy

(121 posts)
10. This is a brilliant post
Fri Oct 17, 2025, 07:03 PM
Friday

Lucidly and clearly stated, for some'thing' that often becomes a conceptual/linguistic quagmire.

patphil

(8,384 posts)
11. spooky-woo? What's that?
Fri Oct 17, 2025, 07:42 PM
Friday

I agree, we don't live in a binary state, but in an evolutionary state. The evolution of consciousness, i.e. self awareness, is part of the process of understanding who and what we are. It's an awakening.
We are not simply emergent behaviour.
You are more than you think.

Alephy

(121 posts)
12. Spooky-woo is this:
Fri Oct 17, 2025, 09:08 PM
Friday

"It's the soul to spirit connection that has no physical attributes, and contains knowledge and understanding that goes beyond our conscious experience..."

Mystical stuff. Apparently meaningful given our (quasi)religious cultural context and its mind-body dualism. But it does not stand up to rigorous scrutiny. I am not necessarily discounting it as a rare modality of lived reality. An extreme one. 'Achieved' in extreme states--hallucination, starvation, severe pain, well, altered states etc.

Most lived reality comports more with the emergent behavior Bernardo alludes to.

Regarding the spooky-woo, I think Wittgenstein had it right: "of that which one cannot speak, one must remain silent..."

patphil

(8,384 posts)
14. It's sad that you have to criticize in such a derogatory way that which you don't believe in and have no experience of.
Fri Oct 17, 2025, 09:42 PM
Friday

I've spent most of my adult life engaged in spiritual matters, and I've had innumerable experiences that have taken me well beyond what you think can only be experienced in extreme states, such as hallucination, starvation, or severe pain.
None of these extreme states were needed for me to do this.
But then, logic will never take you there.
I do agree with you though on one point, with some modification. That of which you know nothing, you should not be quick to criticize.

Alephy

(121 posts)
16. I apologize if I offended you
Fri Oct 17, 2025, 10:35 PM
Friday

Last edited Fri Oct 17, 2025, 11:18 PM - Edit history (1)

My comment was not meant to be derogatory. I still don't think it was but, of course, that is a matter of opinion.

I believe there is a continuum in life, rather than an abrupt break. That allows for the emergent behavior that I understood Bernardo was talking about. Things get a bit messier to explain once we add the human 'jump' which came with language, symbols, society, culture, etc. That still does not make us angelically different from our origins. And that is what your initial comment implied. And what I interpreted Bernardo to mean by 'spooky-woo'.

Believing that there is a 'continuum' in life does not necessarily prevent you from living a 'spiritual' life. Just depends on where in the continuum you decide to live. What I object to is the break into the angelic. And it is a matter of historical fact that people in altered, extreme states are the ones we associate with those jumps.

Bernardo de La Paz

(59,737 posts)
17. There is no binary state of being conscious vs not being conscious.
Fri Oct 17, 2025, 10:58 PM
Friday

Awakening is a pre-eminent example of emergence. Disregarding alarm clocks and such, most often we are sleeping then become drowsy and move about a bit and then gradually become aware of our situation and toy with the idea of getting up or going back to sleep and then we decide to get up.

What is this "more" that you talk about? That is the nub of the problem. Religions like to control us by telling us there is a heaven and a hell some place other than this place and some other time than the here and now and just believe and behave and most especially follow our instructions and you will be saved. Or perhaps not that specifically, but other paradigms.

(Stereotypes follow, take lightly)
Hindus say there are other lives past and present and we are on a cycle or an ascending path, perhaps with backsliding.
Christians say that after we die we (ideally) become something better, a bored person in a cloud praising god all the time.
New-agers (there are many varieties), some say you have a "better self" watching over you.
(just a sample of religions)

You tell me that I am more than I think. What do I think? You can't read my thoughts so please don't tell me what I think or what to think. But you can spell out more clearly what this vague "more" is that you believe in.

patphil

(8,384 posts)
19. I don't tell people what to think. I find that to be a worthless and thankless endeavor.
Fri Oct 17, 2025, 11:07 PM
Friday

Most people will come to the understanding of what works best for them on their own.
Also, I left religion behind a long time ago. It didn't give me any of the answers I was looking for.
Besides, I didn't like someone else telling me how to live my life, and who/what God is.
So, you can call me a spiritual seeker if you like.

In the long run, it isn't what you believe as much as how you live your life.

highplainsdem

(58,728 posts)
13. Considering the smarter experts on AI have been pointing out this fundamental problem with LLMs
Fri Oct 17, 2025, 09:32 PM
Friday

for years, Sam was more than a little late finally admitting it.

But of course this has always been a con game for him.

The researchers demonstrated their findings using state-of-the-art models, including those from OpenAI’s competitors. When asked “How many Ds are in DEEPSEEK?” the DeepSeek-V3 model with 600 billion parameters “returned ‘2’ or ‘3’ in ten independent trials” while Meta AI and Claude 3.7 Sonnet performed similarly, “including answers as large as ‘6’ and ‘7.’”

OpenAI also acknowledged the persistence of the problem in its own systems. The company stated in the paper that “ChatGPT also hallucinates. GPT‑5 has significantly fewer hallucinations, especially when reasoning, but they still occur. Hallucinations remain a fundamental challenge for all large language models.”

OpenAI’s own advanced reasoning models actually hallucinated more frequently than simpler systems. The company’s o1 reasoning model “hallucinated 16 percent of the time” when summarizing public information, while newer models o3 and o4-mini “hallucinated 33 percent and 48 percent of the time, respectively.”


Yet they've forced this unnecessary, illegally trained and ruinously expensive technology into every device where they can add it, every online platform where they can add it, and every school, business and government they can con into using it.

And if you think that AI models that can't tell how many Ds there are in DEEPSEEK won't cause way more harm, the longer they're used, guess again.
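For contrast, the letter-counting task quoted above is trivial for ordinary code. A program sees individual characters, while an LLM sees subword tokens and must infer spelling statistically, which is one plausible reason the quoted models scattered answers from 2 to 7. The string actually contains a single D:

```python
def count_letter(text: str, letter: str) -> int:
    # Case-insensitive character count over the raw string.
    return text.upper().count(letter.upper())

print(count_letter("DEEPSEEK", "D"))  # → 1
print(count_letter("DEEPSEEK", "E"))  # → 4
```

The tokenization explanation is a common hypothesis rather than something the quoted excerpt states; the paper's own argument is the broader statistical one.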

tinrobot

(11,825 posts)
21. When you scrape the internet to train your LLM...
Sat Oct 18, 2025, 12:17 AM
Saturday

...your LLM regurgitates what's on the internet.

I hate to break it to OpenAI, but not everything that's on the internet is true.

speak easy

(12,487 posts)
23. "I'm sorry, Dave. I'm afraid I can't do that"
Sat Oct 18, 2025, 03:30 AM
Saturday

without a $19.99 subscription payable in advance with bitcoin to Sam.
