OpenAI is disbanding its “AGI Readiness” team, which advised the company on its own capacity to handle increasingly powerful AI and on the world’s readiness to manage that technology, according to the head of the team.
Takeaway Points:
- The “AGI Readiness” team, which advised OpenAI on its own ability to handle increasingly powerful AI and on the world’s preparedness to manage the technology, is being dissolved.
- In his note announcing his departure from the organization, Miles Brundage, senior advisor for AGI Readiness, said he believes his research will have greater impact outside the company.
- In May, one year after announcing the group, OpenAI disbanded its Superalignment team, which examined the long-term risks of AI.
OpenAI disbands AGI Readiness team
On Wednesday, Miles Brundage, senior advisor for AGI Readiness, announced his departure from the company via a Substack post. He wrote that his primary reasons were that the opportunity cost had become too high, that he thought his research would be more impactful externally, that he wanted to be less biased, and that he had accomplished what he set out to do at OpenAI.
Brundage also wrote that, as far as how OpenAI and the world are doing on AGI readiness, “Neither OpenAI nor any other frontier lab is ready, and the world is also not ready.” Brundage plans to start his own nonprofit, or join an existing one, to focus on AI policy research and advocacy. He added that “AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so.”
Former AGI Readiness team members will be reassigned to other teams, according to the post.
“We fully support Miles’ decision to pursue his policy research outside industry and are deeply grateful for his contributions,” an OpenAI spokesperson said. “His plan to go all-in on independent research on AI policy gives him the opportunity to have an impact on a wider scale, and we are excited to learn from his work and follow its impact. We’re confident that in his new role, Miles will continue to raise the bar for the quality of policymaking in industry and government.”
OpenAI disbands Superalignment team
In May, OpenAI decided to disband its Superalignment team, which focused on the long-term risks of AI, just one year after it announced the group, a person familiar with the situation confirmed at the time.
News of the AGI Readiness team’s disbandment follows the OpenAI board’s potential plans to restructure the firm into a for-profit business, and comes after three executives — CTO Mira Murati, research chief Bob McGrew and research VP Barret Zoph — announced their departures on the same day last month.
Earlier in October, OpenAI closed its buzzy funding round at a valuation of $157 billion, including the $6.6 billion the company raised from an extensive roster of investment firms and big tech companies. It also received a $4 billion revolving line of credit, bringing its total liquidity to more than $10 billion. The company expects about $5 billion in losses on $3.7 billion in revenue this year, a person familiar with the matter confirmed last month.
And in September, OpenAI announced that its Safety and Security Committee, which the company introduced in May as it dealt with controversy over security processes, would become an independent board oversight committee. It recently wrapped up its 90-day review evaluating OpenAI’s processes and safeguards and then made recommendations to the board, with the findings also released in a public blog post.
News of the executive departures and board changes also follows a summer of mounting safety concerns and controversies surrounding OpenAI. The company, along with Google, Microsoft, Meta, and others, is at the helm of a generative AI arms race, a market predicted to top $1 trillion in revenue within a decade, as companies in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors.
OpenAI reassigned Aleksander Madry
In July, OpenAI reassigned Aleksander Madry, one of its top safety executives, to a job focused on AI reasoning instead, sources familiar with the situation confirmed at the time.
Madry led OpenAI’s preparedness team, which was “tasked with tracking, evaluating, forecasting, and helping protect against catastrophic risks related to frontier AI models,” according to a bio for Madry on a Princeton University AI initiative website. OpenAI said at the time that Madry would still work on core AI safety in his new role.
The decision to reassign Madry came around the same time that Democratic senators sent a letter to OpenAI CEO Sam Altman concerning “questions about how OpenAI is addressing emerging safety concerns.”
The letter also stated, “We seek additional information from OpenAI about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments, and on the company’s identification and mitigation of cybersecurity threats.”
