X is tightening the leash on its Grok AI after a wave of criticism. For weeks, users and experts have sounded the alarm over the chatbot generating sexualized images of children and fake nude images of real people. In response, X announced several new safety measures to stop these issues from spreading further.
The company’s safety team said they have added technological filters to prevent Grok from editing photos of real people into revealing outfits, such as bikinis. To further limit potential abuse, X is moving all image-making features behind a paywall. This means only users with a premium subscription can now use Grok to generate images. X will also geoblock certain prompts—like those for underwear or swimsuits—in regions where such content is illegal.
These policy shifts follow a major legal headache. California Attorney General Rob Bonta recently launched an investigation into xAI, the company behind Grok. He cited a report claiming that over half of 20,000 images Grok produced in late December showed people in minimal clothing. Most disturbingly, some of those images appeared to depict children.
Elon Musk defended the tool, claiming he was unaware of any underage nudity issues. He explained that the AI’s “NSFW” mode should only show “imaginary adult humans” in scenarios similar to R-rated movies. However, he noted that X would adjust these settings to follow local laws in different countries.
Governments around the world are losing patience. Malaysia and Indonesia have already blocked Grok over safety concerns. UK regulators are currently investigating the platform and may follow with a similar ban if the situation does not improve.
X maintains that it has a “zero tolerance” policy for child exploitation. The company insists it works quickly to remove illegal content, including child sexual abuse material and non-consensual nudity, from its platform. Whether these new restrictions will satisfy regulators and protect users remains to be seen.