AI-generated content should be labelled, EU commissioner says

Vera Jourova, deputy head of the European Commission, says precautions should be taken to counter disinformation.

Companies deploying AI tools that can generate disinformation, such as ChatGPT and Bard, should label such content as part of their efforts to combat fake news, European Commission Deputy Chief Vera Jourova said.

Unveiled late last year, ChatGPT, made by Microsoft-backed OpenAI, has become the fastest-growing consumer application in history, sparking a race among technology companies to bring generative AI products to market.

However, concerns are growing about potential misuse of the technology and the possibility that bad actors and even governments could use it to produce much more disinformation than before.

“Signatories that integrate generative AI into their services, such as Bing Chat for Microsoft, Bard for Google, must build in necessary safeguards so that these services cannot be used by malicious actors to generate disinformation,” Jourova told a news conference on Monday.

“Signatories who have services with the potential to spread AI-generated disinformation should in turn deploy technology to recognize such content and clearly label it to users,” she said.

Companies such as Google, Microsoft and Meta Platforms that have joined the EU Code of Practice on Disinformation must report next month on the AI protections they have put in place, Jourova said.

She warned Twitter, which withdrew from the Code of Practice last week, to expect more regulatory scrutiny.

“By leaving the code, Twitter has attracted a lot of attention, and its actions and compliance with EU law will be scrutinized vigorously and urgently,” Jourova said.