OpenAI has effectively disbanded a team focused on ensuring the safety of possible future ultra-capable artificial intelligence systems, following the departure of the group’s two leaders, including OpenAI co-founder and chief scientist Ilya Sutskever.
Instead of maintaining the so-called superalignment team as a standalone entity, OpenAI is integrating the group more deeply into its research efforts to help the company achieve its safety goals, the company told Bloomberg News. The team was formed less than a year ago under the leadership of Sutskever and Jan Leike, another OpenAI veteran.
The decision to rethink the team comes as a series of recent departures from OpenAI revives questions about the company’s approach to balancing speed and safety in developing its AI products. Sutskever, a widely respected researcher, announced on Tuesday that he was leaving OpenAI after previously clashing with CEO Sam Altman over how quickly to develop artificial intelligence.
Leike revealed his departure soon after with a terse post on social media. “I resigned,” he said. For Leike, Sutskever’s exit was the last straw following a falling-out with the company, according to a person familiar with the situation, who asked not to be identified in order to discuss private conversations.
In a statement Friday, Leike said the superalignment team had been struggling for resources. “Over the past few months my team has been sailing against the wind,” Leike wrote on X. “At times we were struggling for compute, and it was getting harder and harder to get this crucial research done.”
Hours later, Altman responded to Leike’s post. “He’s right, we still have a lot to do,” Altman wrote on X. “We’re committed to it.”
Other members of the superalignment team have also left the company in recent months. Leopold Aschenbrenner and Pavel Izmailov were let go by OpenAI; The Information previously reported their departures. Izmailov had been moved off the team before his exit, according to a person familiar with the matter. Aschenbrenner and Izmailov did not respond to requests for comment.
John Schulman, a co-founder of the startup whose research has centered on large language models, will be the scientific lead for OpenAI’s alignment work going forward, the company said. Separately, OpenAI said in a blog post that it had named research director Jakub Pachocki to take over Sutskever’s role as chief scientist.
“I am very confident he will lead us to make rapid and safe progress toward our mission of ensuring that AGI benefits everyone,” Altman said Tuesday in a statement announcing Pachocki’s appointment. AGI, or artificial general intelligence, refers to AI that can perform as well as or better than humans on most tasks. AGI doesn’t exist yet, but creating it is part of the company’s mission.
OpenAI also has employees doing AI safety work on teams across the company, as well as individual teams focused on safety. One, the preparedness team, was launched last October and focuses on analyzing and trying to head off potential “catastrophic risks” of AI systems.
The superalignment team was meant to address the longest-range threats. OpenAI announced the team’s formation last July, saying it would focus on how to control and ensure the safety of future AI software that is smarter than humans, something the company has long stated as a technological goal. In the announcement, OpenAI said it would devote 20% of its computing power at the time to the team’s work.
In November, Sutskever was one of several OpenAI board members who moved to fire Altman, a decision that plunged the company into five days of turmoil. OpenAI president Greg Brockman quit in protest, investors revolted, and within days nearly all of the startup’s roughly 770 employees had signed a letter threatening to quit unless Altman was brought back. In a remarkable about-face, Sutskever also signed the letter and said he regretted his role in Altman’s ouster. Shortly thereafter, Altman was reinstated.
In the months since Altman’s ouster and return, Sutskever largely disappeared from public view, sparking speculation about his continuing role at the company. Sutskever had also stopped working out of OpenAI’s San Francisco office, according to a person familiar with the matter.
In a statement, Leike said his departure came after a series of disagreements with OpenAI over the company’s “core priorities,” which he felt were not focused enough on the safety measures related to creating AI that could be more capable than humans.
In a post earlier this week announcing his departure, Sutskever said he was “confident” that OpenAI would build AGI “that is both safe and beneficial” under its current leadership, including Altman.
© 2024 Bloomberg LP