ChatGPT boss tells US legislators regulation ‘critical’ for AI

Sam Altman, the CEO of OpenAI, the company behind ChatGPT, has told lawmakers in the United States that government regulation of artificial intelligence is “critical” because of the potential risks it poses to humanity.

Altman used his appearance before a U.S. Senate subcommittee on Tuesday to urge Congress to impose new regulations on big tech, despite deep political divisions that have for years blocked legislation aimed at regulating the internet.

“If this technology goes wrong, it could go pretty wrong,” Altman, who has become the global face of AI, told the hearing.

“OpenAI is based on the belief that artificial intelligence has the potential to improve almost every aspect of our lives, but also that it carries serious risks,” he said. But given concerns about disinformation, job security and other dangers, “we believe that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”

Altman proposed the creation of a U.S. or global agency that would license the most powerful AI systems and have the authority to “revoke that license and ensure that safety standards are adhered to.”

Altman’s San Francisco-based startup attracted widespread public attention after it released ChatGPT late last year, a free chatbot tool that answers questions with convincingly human-like responses.

But initial concerns about students using ChatGPT to cheat on assignments have spread into wider worries about the ability of the latest generation of “generative AI” tools to mislead people, spread falsehoods, violate copyright protections and disrupt some jobs.

Lawmakers aired their deepest fears about AI developments, with a leading senator opening the Capitol Hill hearing with a computer-generated voice remarkably similar to his own, reading text written by the chatbot.

“If you listened at home, you might think that voice was mine and the words were mine, but in fact that voice wasn’t mine,” said Senator Richard Blumenthal.

Artificial intelligence technologies “are more than just research experiments. They are no longer sci-fi fantasies, they are real and present,” said Blumenthal, a Democrat.

“What if I had asked it, and what if it had endorsed, the surrender of Ukraine or [Russian President] Vladimir Putin’s leadership?”

Global action needed

Altman recognized the huge potential of AI tools, but suggested that the US government might consider a combination of licensing and testing requirements before releasing more powerful models.

He also advised labeling and increased global coordination in the regulation of the technology.

“I think the US should take the lead here and do things first, but to be effective we need something global,” he added.

Senator Josh Hawley, a Republican, said the technology has major implications for elections, jobs and national security and that the hearing marked “a critical first step to understanding what Congress should be doing.”

Blumenthal noted that Europe had already made significant progress on an AI bill, which is due to be voted on in the European Parliament next month.

A sprawling piece of legislation, the European Union measure could ban biometric surveillance, emotion recognition and certain policing AI systems.

Co-founded by Altman in 2015 with backing from tech billionaire Elon Musk, OpenAI has evolved from a nonprofit research lab with a safety-focused mission into a business. Its other popular AI products include the image generator DALL-E. Microsoft has invested billions of dollars in the startup and has integrated the technology into its own products, including its search engine Bing.

Altman also plans a global tour this month to national capitals and major cities on six continents to talk about AI with policymakers and the public.

On Capitol Hill, politicians also heard warnings that the technology was in its infancy.

“There are more genies yet to come from more bottles,” said New York University professor emeritus Gary Marcus, another panelist.

“We don’t have machines that can really improve themselves. We don’t have machines that have self-awareness, and we may never want to go there,” he said.
