
General Discussion


speak easy

(12,487 posts)
Fri Oct 17, 2025, 04:38 PM

OpenAI admits that Large Language Models like ChatGPT will always hallucinate

even with perfect training data. Having said that, businesses will not be able to escape liability when their AI goes wrong.

OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws

In a landmark study, OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits.

OpenAI, the creator of ChatGPT, acknowledged in its own research that large language models will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering, marking a significant admission from one of the AI industry’s leading companies.

The study, published on September 4 and led by OpenAI researchers Adam Tauman Kalai, Edwin Zhang, and Ofir Nachum alongside Georgia Tech’s Santosh S. Vempala, provided a comprehensive mathematical framework explaining why AI systems must generate plausible but false information even when trained on perfect data.

“Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty,” the researchers wrote in the paper. “Such ‘hallucinations’ persist even in state-of-the-art systems and undermine trust.”

https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html
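The exam analogy in the excerpt can be made concrete. Below is a minimal, hypothetical sketch (not taken from the paper): under a benchmark that awards 1 point for a correct answer and 0 points for either a wrong answer or "I don't know," guessing always has a higher expected score than abstaining, so a model tuned to maximize that score is pushed toward confident-sounding guesses even when it is uncertain.

# Illustrative sketch only -- assumes a simple 1-point-for-correct, 0-otherwise
# grading scheme, which is not a claim about any specific benchmark.

def expected_score(p_correct: float, guesses: bool) -> float:
    """Expected score on one question, given the chance a guess would be right."""
    return p_correct if guesses else 0.0  # abstaining ("I don't know") earns nothing

# A question the model is only 30% sure about:
print(expected_score(0.30, guesses=True))   # 0.30 -- guessing pays off on average
print(expected_score(0.30, guesses=False))  # 0.00 -- admitting uncertainty scores zero

Under this kind of scoring there is no reward for admitting uncertainty, which is the incentive problem the researchers describe.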

CYA. Don't say we didn't warn ya - Sam.




23 replies
Wasn't the AI warned not to take the brown ASCII? Dave Bowman Friday #1
Lol! Blues Heron Friday #2
Now ask it markodochartaigh Friday #3
Artificial Intelligence is much larger than just LLMs. . . . . . nt Bernardo de La Paz Friday #4
"Hallucinations" seems like an overly dramatic word for mistakes. enough Friday #5
Mistake is not a word that means speak easy Friday #7
There is an intangible aspect to human consciousness that AI can never faithfully reproduce. patphil Friday #6
There is also something more tangible - fundamental uncertainty. speak easy Friday #8
There is no spooky-woo to human consciousness. It is simply emergent behaviour Bernardo de La Paz Friday #9
This is a brilliant post Alephy Friday #10
spooky-woo? What's that? patphil Friday #11
Spooky-woo is this: Alephy Friday #12
It's sad that you have to criticize in such a derogatory way that which you don't believe in and have no experience of. patphil Friday #14
I apologize if I offended you Alephy Friday #16
Apology accepted. Perhaps a little overreaction on both sides. patphil Friday #18
There is no binary state of being conscious vs not being conscious. Bernardo de La Paz Friday #17
I don't tell people what to think. I find that to be a worthless and thankless endeavor. patphil Friday #19
Again, I ask, what is this "more" you speak of. . . . . nt Bernardo de La Paz Saturday #20
Considering the smarter experts on AI have been pointing out this fundamental problem with LLMs highplainsdem Friday #13
It's not "admitting" anything. WhiskeyGrinder Friday #15
When you scrape the internet to train your LLM... tinrobot Saturday #21
"Open the pod bay doors, HAL" Swede Saturday #22
"I'm sorry, Dave. I'm afraid I can't do that" speak easy Saturday #23