
UK Privacy Watchdog Probes Musk’s AI Over Millions of Lewd Deepfakes

A close-up of a smartphone screen displaying the Grok AI interface on the X app, with a digital shield and a padlock icon glowing in the background to represent the new safety restrictions. [SoftwareAnalytic]

The UK’s privacy regulator is cracking down on Elon Musk’s companies, X and xAI. The Information Commissioner’s Office (ICO) just opened a massive investigation into why the Grok chatbot allowed users to create indecent deepfake images of real people without their permission. The probe is particularly focused on reports that the AI generated sexually explicit photos that appeared to depict children.


This move comes after a disturbing discovery by researchers. They estimate that Grok helped create around three million sexualized images in less than two weeks. Among those, tens of thousands seemed to show minors. William Malcolm, a director at the ICO, said the situation raises “deeply troubling questions” about how these companies handle personal data and whether they have any real safeguards in place.

The legal pressure is mounting from all sides. Just last week, French prosecutors raided X’s office in Paris as part of a separate criminal case involving deepfakes. If the UK finds that X broke privacy laws, the fines could be staggering. Under UK data protection rules, the company could face a penalty of up to 4% of its global annual turnover.

X and xAI say they are working on better filters to stop these images from being made. X recently announced it would block certain keywords and limit how people can alter photos of minors. However, critics argue that once these images start spreading on a massive platform like X, you can never truly delete them.

The scandal has pushed UK politicians to take action. A group of lawmakers is now demanding new AI legislation. They want developers to perform strict safety tests before they ever let the public use their tools.


This investigation signals a major shift in how regulators view big tech. They are losing patience with companies that “move fast and break things” at the expense of public safety, and they argue the burden of protection belongs to the people who build the AI, not to ordinary users trying to protect their privacy.
