OpenAI forms safety committee as it starts training latest artificial intelligence model

OpenAI says it is setting up a new safety and security committee and has begun training a new artificial intelligence model to replace the GPT-4 system that underpins its ChatGPT chatbot.

The San Francisco startup said in a blog post Tuesday that the committee will advise the full board on “critical safety and security decisions” for its projects and operations.

The new committee arrives as debate swirls around AI safety at the company, which was thrust into the spotlight after a researcher, Jan Leike, resigned and criticized OpenAI for letting safety “take a backseat to shiny products.”

OpenAI said it “recently began training its next frontier model” and that its AI models are industry-leading in capability and safety, although it made no mention of the controversy. “We welcome robust debate at this important time,” the company said.

AI models are prediction systems trained on massive data sets to generate text, images, video and human-like conversation on demand. Frontier models are the most powerful and advanced AI systems.

Members of the safety committee include OpenAI CEO Sam Altman and board chair Bret Taylor, along with two other board members: Adam D’Angelo, the CEO of Quora, and Nicole Seligman, a former general counsel of Sony. OpenAI said four of the company’s technical and policy experts are also members.

The committee’s first task will be to evaluate and further develop OpenAI’s processes and safeguards and make recommendations to the board within 90 days. The company said it will then publicly release the recommendations it adopts “in a manner consistent with safety and security.”