White House tells tech CEOs they have ‘moral duty’ on AI

US President Joe Biden tells CEOs their work on AI has “huge potential and huge danger.”

Tech executives in the United States have been told at a meeting at the White House that they have a “moral” duty to ensure that artificial intelligence does not harm society.

The CEOs of Google, Microsoft, OpenAI and Anthropic attended the two-hour meeting on AI development and regulation at the invitation of US Vice President Kamala Harris on Thursday.

US President Joe Biden, who briefly attended the meeting, told the CEOs that the work they were doing had “huge potential and huge danger.”

“I know you understand that,” Biden said, according to a video later posted by the White House.

“And I hope you can teach us what you think is most needed to protect society and move forward.”

Harris said in a statement after the meeting that tech companies must “comply with existing laws to protect the American people” and “ensure the safety and security of their products.”

The meeting included “candid and constructive discussion” about the need for tech companies to be more transparent with the government about their AI technology, as well as the need to ensure the safety of such products and protect them from malicious attacks, the White House said.

Sam Altman, CEO of OpenAI, told reporters after the meeting that “surprisingly, we’re on the same page about what needs to be done.”

The meeting came as the Biden administration announced a $140 million investment in seven new AI research institutes, the creation of an independent committee to conduct public reviews of existing AI systems, and plans for guidelines for the use of AI by the federal government.

The staggering pace of progress in AI has sparked excitement in the tech world, as well as concerns about social harm and the possibility of the technology eventually slipping out of developers’ hands.

Despite being in its infancy, AI has already been embroiled in numerous controversies, from fake news and non-consensual pornography to the case of a Belgian man who reportedly committed suicide after being encouraged by an AI-powered chatbot.

In a Stanford University survey of 327 natural language processing experts last year, more than a third of researchers said they believed AI could lead to a “nuclear-level catastrophe.”

In March, Tesla CEO Elon Musk and Apple co-founder Steve Wozniak were among the 1,300 signatories of an open letter calling for a six-month pause in training AI systems, arguing that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable”.