What the events leading up to Sam Altman’s reinstatement at OpenAI mean for the industry’s future

NEW YORK — It’s been quite a week for ChatGPT creator OpenAI – and co-founder Sam Altman.

Altman, who helped start OpenAI as a nonprofit research lab in 2015, was fired as CEO on Friday in a sudden and largely unexplained exit that stunned the industry. And while his title as CEO was quickly restored just days later, many questions are still hanging in the air.

If you’re just catching up on the OpenAI saga and what’s at stake for the artificial intelligence space as a whole, then you’ve come to the right place. Here’s an overview of what you need to know.

Altman is a co-founder of OpenAI, the San Francisco-based company behind ChatGPT (yes, the chatbot that seems to be everywhere these days – from schools to healthcare).

The explosion of ChatGPT since its arrival a year ago thrust Altman into the spotlight of the rapid commercialization of generative AI – which can produce new images, text passages and other media. As he became Silicon Valley’s most sought-after voice on the promise and potential dangers of the technology, Altman helped transform OpenAI into a world-renowned startup.

But his position at OpenAI took a rough turn during a whirlwind of a week: Altman was fired as CEO on Friday, and days later he was back at work with a new board of directors.

In that time, Microsoft, which has invested billions of dollars in OpenAI and has rights to its existing technology, helped engineer Altman’s return by swiftly hiring him, as well as fellow co-founder and former OpenAI president Greg Brockman, who had resigned in protest after the CEO’s ouster. Meanwhile, hundreds of OpenAI employees threatened to resign.

Both Altman and Brockman celebrated their returns to the company early Wednesday in posts on X, the platform formerly known as Twitter.

Much remains unknown about Altman’s initial ouster. Friday’s announcement said he was “not consistently forthcoming in his communications” with the then-board of directors, which declined to provide more specific details.

Whatever the reason, the news sent shockwaves throughout the AI world. And because OpenAI and Altman are such leading players in the field, it could raise trust issues around a fast-growing technology that many people still have questions about.

“The OpenAI episode shows how fragile the AI ecosystem is right now, including addressing the risks of AI,” said Johann Laux, an expert at the Oxford Internet Institute who focuses on human oversight of artificial intelligence.

The unrest also accentuated differences between Altman and members of the company’s previous board, who have expressed differing views about the safety risks posed by AI as the technology advances.

Several experts say the drama highlights how governments, not big tech companies, should be in charge of regulating AI, particularly for fast-evolving technologies such as generative AI.

“The events of the past few days have not only jeopardized OpenAI’s attempt to introduce more ethical corporate governance into the management of the company, but they also show that corporate governance alone, even when well-intentioned, can easily be cannibalized by the dynamics and interests of other corporations,” said Enza Iannopollo, principal analyst at Forrester.

The lesson, Iannopollo said, is that companies alone cannot deliver the level of safety and trust in AI that society needs. “Rules and guardrails, designed in collaboration with companies and rigorously enforced by regulators, are critical if we want to benefit from AI,” she added.

Unlike traditional AI, which processes data and completes tasks according to predetermined rules, generative AI (including chatbots like ChatGPT) can create something new.

Technology companies continue to lead the way in managing AI and its risks, while governments around the world are catching up.

In the European Union, negotiators are finalizing what are expected to be the world’s first comprehensive AI regulations. But they are reportedly bogged down over whether and how to include the most contentious and revolutionary AI products: the commercialized large language models that underpin generative AI systems, including ChatGPT.

Chatbots were barely mentioned when Brussels presented its first draft of the law in 2021, which focused on AI systems with specific applications. But officials have been racing to figure out how to incorporate these general-purpose systems, known as foundation models, into the final version.

Meanwhile, in the US last month, President Joe Biden signed an ambitious executive order aimed at balancing the needs of high-tech companies with national security and consumer rights.

The order, which will likely need to be supplemented by congressional action, is a first step intended to ensure that AI is trustworthy and helpful rather than deceptive and destructive. It seeks to steer how AI is developed so that companies can profit without endangering public safety.