
highplainsdem

(58,733 posts)
13. Considering the smarter experts on AI have been pointing out this fundamental problem with LLMs
Fri Oct 17, 2025, 09:32 PM

for years, Sam was more than a little late finally admitting it.

But of course this has always been a con game for him.

The researchers demonstrated their findings using state-of-the-art models, including those from OpenAI’s competitors. When asked “How many Ds are in DEEPSEEK?” the DeepSeek-V3 model with 600 billion parameters “returned ‘2’ or ‘3’ in ten independent trials” while Meta AI and Claude 3.7 Sonnet performed similarly, “including answers as large as ‘6’ and ‘7.’”
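For context, the correct answer is trivial to compute deterministically; a one-line sketch (mine, not from the article) shows why every model answer quoted above was wrong:

```python
# Count occurrences of the letter "D" in "DEEPSEEK" deterministically.
# D-E-E-P-S-E-E-K contains exactly one "D", so the models' answers
# of 2, 3, 6, and 7 were all incorrect.
print("DEEPSEEK".count("D"))  # → 1
```

LLMs stumble on this because they process tokens, not individual characters, so letter-counting is a known weak spot.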

OpenAI also acknowledged the persistence of the problem in its own systems. The company stated in the paper that “ChatGPT also hallucinates. GPT‑5 has significantly fewer hallucinations, especially when reasoning, but they still occur. Hallucinations remain a fundamental challenge for all large language models.”

OpenAI’s own advanced reasoning models actually hallucinated more frequently than simpler systems. The company’s o1 reasoning model “hallucinated 16 percent of the time” when summarizing public information, while newer models o3 and o4-mini “hallucinated 33 percent and 48 percent of the time, respectively.”


Yet they've forced this unnecessary, illegally trained and ruinously expensive technology into every device where they can add it, every online platform where they can add it, and every school, business and government they can con into using it.

And if you think that AI models that can't tell how many Ds there are in DEEPSEEK won't cause way more harm the longer they're used, guess again.

Recommendations

1 member has recommended this reply (displayed in chronological order):

Wasn't the AI warned not to take the brown ASCII? Dave Bowman Friday #1
Lol! Blues Heron Friday #2
Now ask it markodochartaigh Friday #3
Artificial Intelligence is much larger than just LLMs. . . . . . nt Bernardo de La Paz Friday #4
"Hallucinations" seems like an overly dramatic word for mistakes. enough Friday #5
Mistake is not a word that means speak easy Friday #7
There is an intangible aspect to human consciousness that AI can never faithfully reproduce. patphil Friday #6
There is also something more tangible - fundamental uncertainty. speak easy Friday #8
There is no spooky-woo to human consciousness. It is simply emergent behaviour Bernardo de La Paz Friday #9
This is a brilliant post Alephy Friday #10
spooky-woo? What's that? patphil Friday #11
Spooky-woo is this: Alephy Friday #12
It's sad that you have to criticize in such a derogatory way that which you don't believe in and have no experience of. patphil Friday #14
I apologize if I offended you Alephy Friday #16
Apology accepted. Perhaps a little overreaction on both sides. patphil Friday #18
There is no binary state of being conscious vs not being conscious. Bernardo de La Paz Friday #17
I don't tell people what to think. I find that to be a worthless and thankless endeavor. patphil Friday #19
Again, I ask, what is this "more" you speak of. . . . . nt Bernardo de La Paz Saturday #20
Considering the smarter experts on AI have been pointing out this fundamental problem with LLMs highplainsdem Friday #13
It's not "admitting" anything. WhiskeyGrinder Friday #15
When you scrape the internet to train your LLM... tinrobot Saturday #21
"Open the pod bay doors, HAL" Swede Saturday #22
"I'm sorry, Dave. I'm afraid I can't do that" speak easy Saturday #23