Philippe Dufresne, Canada’s Privacy Commissioner, has determined that OpenAI did not follow Canadian federal and provincial privacy laws when training its AI models. After an investigation, Dufresne and his counterparts in Alberta, Quebec, and British Columbia stated that OpenAI’s methods for collecting data and getting consent broke several laws, including Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA). This law controls how businesses collect and use personal information in their regular operations.
The commissioners involved in the investigation found several privacy problems with OpenAI’s approach. They noted that the company “gathered huge amounts of personal information without proper safeguards to stop that information from being used to train its models.” They also found that OpenAI failed to get permission to collect and use that personal information in the first place.
While ChatGPT warns that conversations with the AI might be used for training, OpenAI also bought or “scraped” third-party data that includes personal details people probably don’t even know were collected. Another issue the commissioners found was that ChatGPT users have no way to see, correct, or delete that data, according to a summary of the findings. They also pointed to OpenAI’s limited efforts to acknowledge when some of ChatGPT’s answers are wrong.
Canada’s Privacy Commissioner said that OpenAI was cooperative and responsive throughout the investigation. The company has already promised to make several changes to ChatGPT to follow Canadian privacy laws, and it has stopped using older models that broke Canadian privacy rules.
The Commissioner also said that OpenAI now uses “a tool to find and hide personal information (like names or phone numbers) in publicly available internet data and licensed datasets used to train its models.” The company has also agreed to add a new message to the logged-out version of ChatGPT within three months. This message will explain that chats can be used for training and that sensitive information should not be shared. Within six months, OpenAI will also:
- Make its data export tools easier to understand and use, and better explain how users can challenge the accuracy of information ChatGPT provides.
- Confirm to the Privacy Commissioners that it has put strong protections in place for datasets that are no longer used, so they cannot be drawn on for active development.
- Test protective measures for the minor relatives of public figures who are not public figures themselves, ensuring the models refuse requests to share their names or dates of birth.
Canada’s investigation into OpenAI’s privacy policies began in 2023. More recently, the company has faced scrutiny from regulators due to its connection to a mass shooting in Tumbler Ridge in February 2026. OpenAI reportedly flagged the alleged shooter’s account in 2025 for containing warnings of real-world violence but failed to report those concerns to Canadian law enforcement. After the shooting, regulators demanded the company change its approach to safety. OpenAI eventually agreed to work more closely with Canadian law enforcement and health agencies in the future.










