
AZJonnie

(2,537 posts)
6. Yes, I realize most of them have such filters, and I'm not arguing outright that they should not
Sat Dec 6, 2025, 04:23 PM

But I would entertain an argument for why they perhaps shouldn't be, mostly because of concerns about "moderation" being based on politics.

And I can easily see the inherent difficulty in doing said moderation because, again, a lot of the time, for an AI to decide what information is proper to divulge, it would have to "know who is asking and what their real intent is."

Another easy example is someone saying they're writing an article about the means by which people kill themselves, when their real intent is to kill themselves. Obviously you could program the AI not to tell someone who's plainly suicidal how to go about it, but what if the person asking actually was writing an article?

Even in cases where literally everyone would agree there is one and only one "right" response, it can get tricky, let alone the myriad cases where what is "right" depends entirely on one's political/moral views. Where does the line get drawn? What happens when the government declares it's illegal for any public AI to provide any answer to the question "Why do some people not believe God is real" apart from "only people who are mentally ill don't understand that God is real and JC is his Son and the Savior of Mankind"?

It may turn out to be short-sighted, IMHO, to jump completely onto the "AI must be moderated to never give out (certain information)" bandwagon. In fact, we (i.e. the public) may be being led down this exact road purposefully. We think this story reflects badly on Musk/AI/Grok in particular, but Musk may actually WANT to build public outcry to moderate these platforms, at least the ones that are publicly exposed. That would make it easier to increase *inappropriate* moderation without much pushback, because now everyone demands that AIs be subject to all kinds of censorship.

Recommendations

1 member has recommended this reply (displayed in chronological order):
