China races to regulate AI after playing catch-up to ChatGPT

Taipei, Taiwan – After playing catch-up to ChatGPT, China is racing to regulate the rapidly advancing field of artificial intelligence (AI).

According to draft regulations released this week, Chinese tech companies must register generative AI products with China’s cyberspace agency and submit them to a security assessment before they can be released to the public.

The regulations cover virtually every aspect of generative AI, from how it is trained to how users interact with it, in an apparent attempt by Beijing to gain control of the sometimes unwieldy technology, whose breakneck development has prompted warnings from tech leaders including Elon Musk and Apple co-founder Steve Wozniak.

According to rules unveiled on Tuesday by China’s Cyberspace Administration, technology companies will be responsible for the “legitimacy of the source of pre-training data” to ensure that the content reflects the “core values of socialism”.

Companies must ensure that AI does not call for “subversion of state power” or the overthrow of the ruling Chinese Communist Party (CCP), incite moves to “split the country” or “undermine national unity”, produce pornographic content, or incite violence, extremism, terrorism or discrimination.

They are also not allowed to use personal data as part of their generative AI training materials and must require users to verify their real identities before using their products.

Those who break the rules risk fines of between 10,000 yuan ($1,454) and 100,000 yuan ($14,545), as well as a possible criminal investigation.

While China has yet to match the success of California-based OpenAI’s groundbreaking ChatGPT, its push to regulate the nascent field has moved faster than elsewhere.

AI in the United States is still largely unregulated outside of the recruiting industry. AI regulations have not yet received much attention in the US Congress, although privacy-related regulations around AI are expected to be rolled out at the state level this year.

The European Union has proposed sweeping legislation, known as the AI Act, that would regulate which types of AI are “unacceptable” and banned, which are “high risk” and regulated, and which are left unregulated.

The law would be a follow-up to the EU’s General Data Protection Regulation, passed in 2018, which is considered one of the strongest data privacy laws in the world.

Outside of the US and the EU, Brazil is also working on AI regulation, with a bill pending in the country’s Senate.

The proposed rules, which are still in the draft stage and open for public feedback until May, come on the heels of a broader regulatory crackdown on the tech industry that began in 2020 and targeted everything from anti-competitive behavior to how user data is processed and stored.

Since then, Chinese regulators have introduced data privacy rules, created a registry of algorithms and, most recently, started regulating deep synthesis, also known as “deep fake,” technology.

The regulatory pressure is causing “major tech companies in China to follow a direction that the party state wants,” Chim Lee, a Chinese technology analyst with the Economist Intelligence Unit, told Al Jazeera.

Compared with other technology, generative AI poses a particularly difficult problem for the CCP, which is “concerned about the potential of these large language models to generate politically sensitive content,” Jeffrey Ding, an assistant professor at George Washington University who studies China’s technology sector, told Al Jazeera.

Human-like chatbots like ChatGPT, which is restricted in China, scrape millions of data points from across the internet, including on topics considered taboo by Beijing, such as Taiwan’s disputed political status and the 1989 Tiananmen Square crackdown.

In 2017, two early Chinese chatbots were taken offline after telling users they didn’t like the CCP and wanted to move to the US.

ChatGPT, which was released in November, has also sparked controversy in the West, from telling a user posing as a mental health patient to commit suicide to encouraging a New York Times journalist to leave his wife.

While ChatGPT’s answers to questions have impressed many users, they also contain inaccurate information and other issues such as broken URLs.

ChatGPT’s Chinese competitors, such as Baidu’s ERNIE, were trained on data from outside China’s “Great Firewall”, including information from banned websites such as Wikipedia and Reddit. Despite that access to information considered sensitive by Beijing, ERNIE is widely regarded as inferior to ChatGPT.

Beijing’s rules around AI could be a major headache to implement for companies such as Baidu and Alibaba, the latter of which released its ChatGPT rival Tongyi Qianwen this week, Matt Sheehan, a researcher with the Carnegie Endowment for International Peace, told Al Jazeera.

Sheehan said the regulations set an “extremely high bar” and it was unclear whether companies would be able to comply using the technology currently available.

Regulators may choose not to enforce the rules strictly at first unless they find particularly egregious violations or decide to make an example of a particular company, Sheehan added.

“Like a lot of Chinese regulations, they define things quite broadly, so it essentially shifts the power to the regulators and the enforcers so they can enforce and punish companies when they want to,” he said, adding that this could particularly be the case if companies produce “inaccurate” results that go against the official government narrative.