The board of OpenAI has set up a safety and security committee to evaluate its operations as the company begins training its next artificial intelligence model.
The company announced this in a statement on Tuesday, noting that the committee’s first task is to evaluate and further develop its processes and safeguards over the next 90 days.
This comes amid global safety concerns as AI models become increasingly powerful at generating text and images.
The committee will be led by OpenAI board members Bret Taylor (chair), Adam D’Angelo, Nicole Seligman, and CEO Sam Altman. According to OpenAI, the committee will be responsible for making recommendations to the full board on critical safety and security decisions for the company’s projects and operations.
New model training
OpenAI’s rapid recent advances in AI have raised concerns about how it manages the technology’s potential dangers.
The company said it has begun training a new model, which may be more powerful than GPT-4 and GPT-4o, the models currently powering ChatGPT, hence the need for the committee to review its operations and make recommendations.
“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI.
“While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” the company stated.
It added that after 90 days, the safety and security committee will share its recommendations with the full board.
“Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security,” it said.
According to OpenAI, other members of the committee include its technical and policy experts, Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist).
The company said it will retain and consult other safety, security, and technical experts to support this work, including former cybersecurity officials Rob Joyce, who advises OpenAI on security, and John Carlin.
Backstory
The new committee was established shortly after OpenAI dissolved a team focused on ensuring the safety of possible future ultra-capable artificial intelligence systems. The team was disbanded after the departure of its two leaders, including OpenAI co-founder and chief scientist Ilya Sutskever.
Known as the superalignment team, it was formed less than a year ago under the leadership of Sutskever and Jan Leike, another OpenAI veteran, to focus on the long-term threats of superhuman AI.
After resigning, Leike wrote that his team had been “struggling” for computing resources within OpenAI.