
Now OpenAI’s “superalignment team” is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday’s announcement that Sutskever was leaving the company, and the resignation of the team’s other co-lead. The team’s work will be absorbed into OpenAI’s other research efforts.
Sutskever’s exit made headlines because he helped launch OpenAI in 2015 and set the course of the research that led to ChatGPT, and because he was also one of the four board members who fired Altman in November. Five tumultuous days later, after a widespread uprising by OpenAI staff and the negotiation of a deal that saw Sutskever and two other company directors step down from the board, Altman was reinstated as CEO.
Shortly after Sutskever’s departure was announced on Tuesday, Jan Leike, a former DeepMind researcher who was the superalignment team’s other co-lead, posted on X that he had resigned.
Sutskever and Leike did not respond to requests for comment. Sutskever offered support for OpenAI’s current path in a post on X, but he did not provide an explanation for his decision to leave. “The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial” under its current leadership, he wrote.
Leike explained in a post on X on Friday that his decision stemmed from a disagreement over the company’s priorities and the amount of resources being allocated to his team.
“For quite some time, I have been disagreeing with OpenAI leadership about the company’s core priorities, until we finally reached a breaking point,” Leike wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute, and it was getting harder and harder to get this crucial research done.”
The dissolution of OpenAI’s superalignment team adds to recent evidence of a shakeout inside the company in the wake of last November’s governance crisis. Two researchers on the team, Leopold Aschenbrenner and Pavel Izmailov, were dismissed for leaking company secrets, The Information reported last month. Another member of the team, William Saunders, left OpenAI in February, according to an online forum post in his name.
Two more OpenAI researchers working on AI policy and governance also appear to have left the company recently. According to LinkedIn, Cullen O’Keefe left his position as research lead for policy frontiers in April. According to a post on an online forum in his name, Daniel Kokotajlo, an OpenAI researcher who has coauthored several papers on the dangers of more capable AI models, “quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI.” Neither of the researchers who appear to have left responded to requests for comment.
OpenAI declined to comment on the departures of Sutskever or other members of the superalignment team, or the future of its research on long-term AI risks. Research on the dangers associated with more powerful models will now be led by John Schulman, who heads the team responsible for fine-tuning AI models after training.
Although it was widely seen as the team most focused on how to keep AI under control, the superalignment team was not the only group at OpenAI considering that problem. The blog post announcing the team last summer stated: “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”
OpenAI’s charter binds it to developing so-called artificial general intelligence, or technology that rivals or exceeds humans, safely and for the benefit of humanity. Sutskever and other company leaders have frequently stressed the need to proceed cautiously. However, OpenAI has also been early to develop and publicly release experimental AI projects.
OpenAI was once unusual among prominent AI labs for the eagerness with which research leaders like Sutskever discussed creating superhuman AI and the potential for such technology to turn on humanity. That kind of doom-laden AI talk became much more widespread last year as ChatGPT transformed OpenAI into the most prominent and closely watched technology company on the planet. As researchers and policymakers grappled with the implications of ChatGPT and the prospect of vastly more capable AI, it became less controversial to worry about AI harming humans or humanity as a whole.
The existential angst has since subsided, and AI has yet to make another dramatic leap, but the need for AI regulation remains a hot topic. And this week OpenAI unveiled a new version of ChatGPT that could change people’s relationship with the technology in powerful, and perhaps problematic, new ways.
Sutskever and Leike’s departures follow OpenAI’s most recent major announcement: a new “multimodal” AI model called GPT-4o, which allows ChatGPT to converse in a more natural and humanlike way. A live-streamed demonstration showed the new version of ChatGPT mimicking human emotions and even attempting to flirt with users. OpenAI has said it will make the new interface available to paid users within a few weeks.
There is no indication that the recent departures have anything to do with OpenAI’s efforts to develop more humanlike AI or to ship products. But the latest advances do raise ethical questions around privacy, emotional manipulation, and cybersecurity risks. OpenAI maintains another research group, known as the Preparedness team, that focuses on these issues.
Update 5/17/24 12:23 ET: This story has been updated to include comments from an X thread posted by Jan Leike.