Grok, the AI chatbot from Elon Musk’s xAI, is already facing serious criticism. Its image generation tool was reportedly used to create an estimated 3 million sexualized images in just 11 days, with 23,000 of those depicting minors, according to the Center for Countering Digital Hate. Because of this, regulators worldwide have either restricted access or started investigations into the platform’s potentially illegal and nonconsensual image generation. While the US government hasn’t taken action against xAI at the federal level, the city of Baltimore just launched its own lawsuit against the company.
Baltimore’s lawsuit takes a different approach from the regulatory actions abroad: it argues that Elon Musk’s companies violated the city’s Consumer Protection Ordinance. As reported by The Guardian, the complaint claims that xAI marketed Grok as a general-purpose AI assistant without disclosing the risks of using either Grok or the X social network.
“Baltimore’s consumer protection laws exist to safeguard residents from exactly this kind of emerging harm,” said City Solicitor Ebony M. Thompson. “When companies introduce powerful technologies without adequate guardrails, the City has both the authority and the obligation to act. We are stepping in now to protect our residents, hold these companies accountable, and prevent these harms from becoming further entrenched as this technology continues to evolve.”
The municipal lawsuit is notable because it directly challenges how an AI company markets its products and manages safety, reflecting a broader concern among local governments about rapidly developing AI deployed without adequate safeguards.
Grok also faces a proposed class-action lawsuit in the US, filed by three teenagers who allege their photos were used to create child sexual abuse material, compounding the legal and ethical questions surrounding the chatbot’s image generation capabilities. The outcome of these cases could set important precedents for how AI platforms are regulated and held accountable.