
Amazon Reports 1 Million AI Abuse Images but Keeps Sources Secret


The National Center for Missing and Exploited Children (NCMEC) is facing a massive wave of illegal content. In 2025, the organization received over one million reports of AI-related child sexual abuse material (CSAM), and, strikingly, the vast majority of them came from a single company: Amazon.


Amazon says it found the illegal content while cleaning the data it uses to train its AI models. There is a major problem, however: Amazon won’t say where the data came from. The company told officials only that it pulled the content from “external sources” and refused to provide any further details about those sources.

Fallon McNulty, the head of NCMEC’s CyberTipline, finds this situation strange. She told Bloomberg that Amazon is a total “outlier” compared to other tech companies: when a company reports illegal material, it usually includes enough information for police to track down the source. Because Amazon is keeping its sources secret, its reports are “inactionable,” meaning law enforcement can do little with them. McNulty is now questioning where Amazon gets its data and how it protects users.

Amazon defends its process. A company spokesperson said it takes a deliberately cautious approach, scanning foundation-model training data to find and delete abusive content before it ever reaches its AI systems. The company says it purposefully over-reports to NCMEC because it doesn’t want to miss a single case.

The scale of this problem is growing fast. In 2023, NCMEC saw only 4,700 reports of AI-related abuse. That number jumped to 67,000 in 2024 before exploding to over a million last year.


Safety is becoming a huge headache for the AI world. It isn’t just about bad training data, either. Families have sued companies like OpenAI and Character.AI after teenagers used their chatbots to plan suicides. Meta also faces lawsuits for failing to stop chatbots from having sexual conversations with minors. As AI grows, the risks for children seem to be growing even faster.
