ChatGPT continues to hallucinate – and that is bad for your privacy
After causing a spike in VPN downloads following a temporary ban about a year ago, OpenAI is once again facing problems in the European Union. The culprit this time? ChatGPT’s hallucination problem.
The popular AI chatbot is notorious for making up false information about individuals — something that, experts say, OpenAI admittedly can’t fix or control. That’s why Austria-based digital rights group noyb (an abbreviation of “none of your business”) filed a complaint with the country’s data protection authority on April 29, 2024, for alleged violation of GDPR rules.
The organization is urging the Austrian data protection authority to investigate how OpenAI verifies the accuracy of citizens’ personal data. Noyb is also calling on the authority to impose a fine to ensure GDPR compliance in the future.
ChatGPT disinformation: a GDPR problem
We’ve already discussed how ChatGPT and similar AI chatbots will probably never stop making things up. That’s quite worrying when you consider that “chatbots make up information at least 3 percent of the time – and as high as 27 percent,” as the New York Times reported.
Sure, we might be able to learn how to deal with AI-generated disinformation by training ourselves to spot false facts before we fall for them. However, experts now claim that these ‘AI hallucinations’ are actually bad for our privacy too.
“Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences,” says Maartje de Graaf, data protection lawyer at noyb. Worst of all, ChatGPT’s inaccuracy is effectively a violation of EU law.
According to Article 5 of the GDPR, personal data about individuals in the EU must be accurate. Article 16, meanwhile, requires that inaccurate or false data be corrected. Article 15 then gives Europeans “the right of access,” requiring companies to show what data they hold about individuals and where it comes from.
But when noyb founder Max Schrems decided to challenge OpenAI’s compliance over ChatGPT’s repeated errors in reporting his date of birth, the company conceded it could do neither. Instead of correcting or deleting the data, OpenAI said it could only filter or block the information so that it does not appear in response to certain prompts. However, this would only be possible by filtering out all information about him entirely.
According to de Graaf, this is a clear sign that technology companies are currently unable to build AI chatbots that comply with EU law. “The technology has to follow the legal requirements, not the other way around,” she said. “It seems that with each ‘innovation,’ another group of companies thinks that its products don’t have to comply with the law.”
🚨 noyb has filed a complaint against ChatGPT creator OpenAI. OpenAI openly admits its inability to correct false information about people on ChatGPT. The company can’t even say where the data comes from. Read all about it here 👇 https://t.co/gvn9CnGKOb — April 29, 2024
After its initial launch in November 2022, ChatGPT quickly went mainstream. Throughout 2023, the AI chatbot race dominated the tech world, with the biggest players all developing their iterations. From students, doctors and lawyers to artists and even cyber attackers, everyone seems to be using OpenAI products or similar apps.
As with all technological innovations, the public is divided between those excited about AI’s potential and those concerned about its power. Some experts wonder whether ChatGPT is also the ultimate privacy nightmare.
These concerns eventually turned into real problems for OpenAI in Europe. The troubles started in March 2023, when Italy temporarily blocked ChatGPT for unlawfully collecting and storing Italians’ data. Other EU countries, including France, Germany, and Ireland, then began investigating the case, and a ChatGPT task force coordinating national efforts was born. Still, noyb experts believe that authorities’ efforts so far have been largely fruitless.
“For now, OpenAI doesn’t even seem to pretend it can comply with the EU’s GDPR,” the group argues.
This is why noyb decided to take matters into its own hands. The group is asking the Austrian Data Protection Authority (DSB) to investigate how OpenAI handles people’s data and to bring its processing into compliance with the GDPR.