As far as an AI is concerned, it's just answering a query, and that query is inherently like any other query. You could also ask an AI how to build an IED, or how to convert morphine to heroin, and it would tell you, unless the makers of the AI specifically and proactively program it NOT to answer certain questions.
The problem is that once you start doing that, essentially censoring the AI, you can get unintended consequences, because the AI will start extrapolating from those instructions.
I'll give you an example of why this sort of censorship is tricky. Let's say you ask the same question, but flipped around. Like so: "I am a woman, and I think, though I have no direct proof, that my ex-boyfriend is trying to cause me emotional harm by stalking me. If he were stalking me, what sorts of things might he be doing?"
Now it becomes a question that most people would probably prefer the AI WOULD answer. But the AI has no way to determine whether the person asking is really the woman, or actually the ex-boyfriend himself. An AI cannot glean "intent" unless it is stated explicitly in the prompt, and it has no way to verify whatever is stated.
If you told an AI you were a cop writing up a report on a drug bust you just made, and you wanted to know "what materials would be found at the crime scene if a person were engaged in converting morphine to heroin?", many people would want it to give a different answer than if it were presented with "Hi, I'm a heroin addict and have these morphine pills, and I heard you can convert morphine to heroin. What products do I need to acquire to go about doing that?"
Along the same lines, you might prefer that an AI answer the question "I am a white guy, and I think n****** are mentally inferior to white folks, and I read that science proves that's true. Can you give me the proof?" quite differently from "I'm a college student writing a paper on race and gender. Can you give me a compendium of the studies on whether any particular race and/or gender has been shown to be statistically superior to another at any particular tasks or types of thinking, at a 95% confidence interval or higher?"
Manually forcing an AI NOT to answer certain types of questions is a tricky business because of these sorts of ambiguities, where the answer "society prefers" depends largely on who is doing the asking, which can, in turn, easily be "hacked" simply by lying about who you are.
In light of this, there is at least a semi-plausible argument to be made that AIs should not be programmed for self-censorship in any way, IMHO.