
OpenAI Promises to Report ChatGPT Threats to Police After Canadian Tragedy


OpenAI has promised Canadian authorities that it will tighten its safety rules and alert police far more quickly about credible threats. According to Politico and The Washington Post, Canadian politicians recently summoned the company’s leaders for questioning after learning that OpenAI had failed to alert law enforcement when it banned the ChatGPT account of a mass shooting suspect from Tumbler Ridge, British Columbia, in 2025. Several OpenAI executives have since met with Canadian officials, and British Columbia Premier David Eby confirmed that CEO Sam Altman agreed to meet with him personally to discuss public safety.


Ann O’Leary, the company’s vice president of global policy, outlined the upcoming changes in a formal letter. She wrote that OpenAI will upgrade its software to prevent banned users from slipping back onto the platform. In the Tumbler Ridge case, OpenAI did ban the shooter’s first account after the user posted warnings about committing real-world violence, but the safety system failed to stop the shooter from simply creating a second account. OpenAI discovered the backup account only after police released the suspect’s name to the public, and only then did the company contact the authorities.

From now on, OpenAI will contact police the moment it spots an imminent and credible threat in any ChatGPT conversation, even if the user omits specific details such as the exact target, weapons, or timing of the planned attack. O’Leary said that if these rules had been in place in 2025, the company would have called the police immediately. To streamline communication, OpenAI will also designate a dedicated contact person for Canadian law enforcement.

The Canadian government views OpenAI’s earlier failure to report the shooter as a massive oversight, and leaders have threatened to pass strict laws regulating AI chatbots if tech companies cannot prove they have strong safeguards to protect everyday people. It is not yet clear whether OpenAI plans to apply the same safety updates in the United States or the rest of the world.
