But I would entertain an argument for why they perhaps shouldn't be, mostly because of concerns about "moderation" being based on politics.
And I can easily see the inherent difficulty in doing said moderation, because, again, a lot of the time, for an AI to decide what information is proper to divulge, it would have to "know who is asking and what their real intent is".
Another easy example is someone saying they're writing an article about the means by which people kill themselves, but their real intent is to kill themselves. Obviously you could easily program the AI to not tell someone who's plainly suicidal how to go about it, but what if the person asking actually was writing an article?
Even in cases where literally everyone would agree there is one and only one "right" response, it can get tricky, let alone the myriad cases where what is "right" depends entirely on one's political/moral views. Where does the line get drawn? What happens when the government declares it's illegal for any public AI to provide any answer to the question "Why do some people not believe God is real" apart from "only people who are mentally ill don't understand that God is real and JC is his Son and the Savior of Mankind"?
It may turn out to be short-sighted, IMHO, to jump completely onto the "AI must be moderated to never give out (certain information)" bandwagon. In fact, we (i.e. the public) may be being led down this exact road on purpose. We think this story reflects badly on Musk/AI/Grok in particular, but maybe Musk actually WANTS to build the public outcry to moderate these platforms, at least the ones that are publicly exposed. That would make it easier to increase *inappropriate* moderation later without much pushback, because by then everyone will be demanding that AIs be subject to all kinds of censorship.