
usonian

(17,793 posts)
2. But Trump himself is a chatbot. Perished in PA and Elon built a robotrump.
Sat May 17, 2025, 06:55 PM
https://www.techdirt.com/2025/04/29/the-hallucinating-chatgpt-presidency/

Judge for yourself.

Tue, Apr 29th 2025 09:34am - Mike Masnick

We generally understand how LLM hallucinations work. An AI model tries to generate what seems like a plausible response to whatever you ask it, drawing on its training data to construct something that sounds right. The actual truth of the response is, at best, a secondary consideration.

snip

But over the last few months, it has occurred to me that, for all the hype about generative AI systems “hallucinating,” we pay much less attention to the fact that the current President does the same thing, nearly every day. The more you look at the way Donald Trump spews utter nonsense answers to questions, the more you begin to recognize a clear pattern — he answers questions in a manner quite similar to early versions of ChatGPT. The facts don’t matter, the language choices are a mess, but they are all designed to present a plausible-sounding answer to the question, based on no actual knowledge, nor any concern for whether or not the underlying facts are accurate.

snip

This is not the response of someone working from actual knowledge or policy understanding. Instead, it’s precisely how an LLM operates: taking a prompt (the question about job losses) and generating text based on some core parameters (the “system prompt” that requires deflecting blame and asserting greatness).

The hallmarks of AI generation are all here:
• Confident assertions without factual backing
• Meandering diversions that maintain loose semantic connection to the topic
• Pattern-matching to previous responses (“ripped off,” “billions of dollars”)
• Optimization for what sounds good rather than what’s true
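To make the "plausible, not true" point concrete, here is a toy sketch in Python (my own illustration, not from the article, with made-up words and probabilities): the model picks each next word purely by how likely it is to follow what came before, and at no point does anything check whether the result is factual.

import random

# Toy "language model": for a given context, a made-up probability
# distribution over plausible next words. Nothing here encodes truth,
# only how often words tend to follow each other in training data.
NEXT_WORD_PROBS = {
    "We lost": [("billions", 0.5), ("jobs", 0.3), ("nothing", 0.2)],
    "billions": [("of", 0.9), ("and", 0.1)],
    "of": [("dollars", 0.95), ("euros", 0.05)],
    "dollars": [("<end>", 1.0)],
}

def generate(prompt: str, max_words: int = 10) -> str:
    """Sample a continuation one word at a time, weighted by plausibility.
    There is no fact-checking step anywhere in the loop."""
    words = prompt.split()
    context = prompt
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(context)
        if not options:
            break
        choices, weights = zip(*options)
        nxt = random.choices(choices, weights=weights)[0]
        if nxt == "<end>":
            break
        words.append(nxt)
        context = nxt
    return " ".join(words)

print(generate("We lost"))  # e.g. "We lost billions of dollars" -- fluent, unverified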


Great article, and hard to summarize because the author gives so many spot-on examples.

What "the media" gets entirely wrong is treating his statements (generated responses) as carefully crafted political strategy, when we have learned to treat AI hallucinations as meaningless babble. By elevating them they participate in the sanewashing of his only cognitive skills: revenge, grift and autocracy.

Wouldn't it be smarter and better to boycott his press conferences entirely? He spends unbearable amounts of time expanding on a bad idea, and a chatbot could do that expansion faster and better. A boycott would make a point rather than dignify the babble. He does like a good turnout. No doubt a complete boycott, except by Fox, would be touted as "the biggest press conference ever."

Followed by a ketchup barrage.

That ketchup's made for throwing
That's what you're gonna do, and
"One of these days, that ketchup's gonna splash all over you"

