
Canadian Officials Claim OpenAI Violated Federal And Provincial Privacy Laws

by DIGITAL TIMES
Philippe Dufresne, the Privacy Commissioner of Canada, has found that OpenAI was “not compliant with” Canadian federal and provincial privacy laws in the training of its AI models. Following an investigation, Dufresne and his counterparts in Alberta, Quebec and British Columbia say OpenAI’s approach to data collection and consent ran afoul of multiple laws, including Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), which governs how companies collect and use personal information in the normal course of business.

The commissioners participating in the investigation identified multiple privacy issues with OpenAI’s approach, including that the company “gathered vast amounts of personal information without adequate safeguards to prevent use of that information to train its models,” and that it failed to obtain consent to collect and use that personal information in the first place. Warnings in ChatGPT note that interactions with the AI could be used in training, but the third-party data OpenAI has purchased or scraped also includes personal details people likely aren’t even aware of. According to a summary of the investigation’s findings, the commissioners also flagged the fact that ChatGPT users have no way to access, correct or delete that data, along with OpenAI’s lackluster efforts to acknowledge the inaccuracy of some of ChatGPT’s responses.

Canada’s Privacy Commissioner notes that OpenAI was open and responsive during the investigation, and has already committed to making multiple changes to ChatGPT to comply with Canadian privacy laws. OpenAI has retired earlier models that violated Canadian privacy regulations, and now uses “a filtering tool to detect and mask personal information (such as names or phone numbers) in publicly accessible internet data and licensed datasets used to train its models,” the Commissioner says. The company has also agreed, within the next three months, to add a new notice to the signed-out version of ChatGPT explaining that chats can be used for training and that sensitive information shouldn’t be shared, with further commitments due within the next six months.

While Canada’s investigation into OpenAI’s privacy policies was opened in 2023, the company has received scrutiny from regulators more recently because of its connection to the mass shooting that occurred in Tumbler Ridge in February 2026. OpenAI had reportedly flagged the alleged shooter’s account in 2025 for containing warnings of real-world violence, but failed to escalate those concerns to Canadian law enforcement. Following the shooting, regulators demanded the company change its approach to safety, and OpenAI ultimately agreed to be more collaborative with Canadian law enforcement and health agencies in the future.
