OpenAI’s efforts to reduce the amount of factually false output from its ChatGPT chatbot are not enough to ensure full compliance with European Union data rules, the EU privacy watchdog’s task force said.
“While the measures taken to comply with the principle of transparency are useful to avoid misinterpretation of ChatGPT results, they are not sufficient to comply with the principle of data accuracy,” the working group said in a report published on its website on Friday.
The body uniting Europe’s national privacy watchdogs set up a working group on ChatGPT last year after national regulators led by Italian authorities raised concerns about the widely used artificial intelligence service.
OpenAI did not immediately respond to Reuters’ request for comment.
Investigations launched by national privacy supervisors in some member states are still ongoing, the report said, adding that it was therefore not yet possible to give a full description of the results. The findings should be understood as a ‘common denominator’ among national authorities.
Data accuracy is one of the guiding principles of the EU’s set of data protection rules.
“In fact, due to the probabilistic nature of the system, the current training approach leads to models that can also produce biased or contrived results,” the report said.
“Furthermore, the results provided by ChatGPT are likely to be believed by end users to be factually accurate, including information relating to individuals, regardless of their actual accuracy.”
© Thomson Reuters 2024