OpenAI Dissolves Team Focused on AI Risks

OpenAI has dissolved its team devoted to the long-term hazards of artificial intelligence just one year after the company launched the group, according to a CNBC report on Friday.

Takeaway Points:

  • OpenAI has dissolved its team that was devoted to studying the long-term dangers of AI.
  • News of the dissolution comes days after team leaders Ilya Sutskever and Jan Leike announced their exits from the Microsoft-backed startup.
  • In 2023, OpenAI said its Superalignment team aimed to achieve “scientific and technical breakthroughs to steer and control AI systems much smarter than us.”
  • OpenAI said at the time that it would dedicate 20% of its computing capacity to the project over four years.

OpenAI Dissolves Team

A person familiar with the situation, who requested anonymity, said that several team members are being reassigned to other teams within the organisation.

The team’s leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, had announced their resignations from the Microsoft-backed startup a few days earlier. OpenAI’s “safety culture and processes have taken a backseat to shiny products,” Leike wrote in a Friday post.

OpenAI’s Superalignment team, announced last year, aimed to achieve “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” OpenAI said at the time that it would devote 20% of its computing capacity to the project over the next four years.

Rather than responding to a request for comment, OpenAI pointed CNBC to a recent post on X by co-founder and CEO Sam Altman, in which he expressed his sadness about Leike’s departure and said the company still had work to do. In a statement posted on X on Saturday and co-authored with Altman, OpenAI co-founder Greg Brockman said the company had “raised awareness of the risks and opportunities of AGI so that the world can better prepare for it.”

Leike and Sutskever Exit from OpenAI

Leike and Sutskever announced their exits from the firm on the social media site X on Tuesday, a few hours apart. Then, on Friday, Leike shared more detail about his reasons for leaving.

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Leike wrote in his post that he thinks the organisation should devote far more of its resources to security, monitoring, preparedness, safety, and societal impact.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done.”

Leike added that OpenAI must become a “safety-first AGI company.”

“Building smarter-than-human machines is an inherently dangerous endeavour. OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products,” he wrote.

Leadership Crisis at OpenAI

The high-profile exits come months after Altman weathered a leadership crisis at OpenAI.

The OpenAI board dismissed Altman in November, saying in a statement that he had not been “consistently candid in his communications with the board.”

The situation seemed to grow more complicated by the day, with The Wall Street Journal and other media outlets reporting that Sutskever had focused on ensuring that artificial intelligence would not harm humans, while others, including Altman, were more eager to push ahead with delivering new technology.

Following Altman’s dismissal, nearly every OpenAI employee signed an open letter threatening to resign, and investors, including Microsoft, voiced their disapproval.

Within a week, Altman returned to the company, and the board members who had voted to remove him, Helen Toner, Tasha McCauley, and Ilya Sutskever, lost their seats. Sutskever remained employed at the time but no longer served as a board member. Adam D’Angelo, who had also voted to remove Altman, kept his seat on the board.

When Altman was asked about Sutskever’s status on a Zoom call with reporters in March, he said there were no updates to share. “I love Ilya … I hope we work together for the rest of our careers—my career, whatever,” Altman said. “Nothing to announce today.”

On Tuesday, Altman shared his thoughts on Sutskever’s departure.

“This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend,” Altman wrote on X. “His brilliance and vision are well known; his warmth and compassion are less well known but no less important.” Altman said research director Jakub Pachocki, who has been at OpenAI since 2017, would replace Sutskever as chief scientist.

OpenAI Intends to Increase Product Usage

In its latest effort to boost usage of its popular chatbot, OpenAI recently unveiled a new AI model, a desktop version of ChatGPT, and an overhauled user interface. Days later came news of Sutskever and Leike’s departures and the dissolution of the Superalignment team.

The new GPT-4o model is available to all users, including those who use ChatGPT for free, technology chief Mira Murati said during a livestreamed event on Monday. She added that GPT-4o is “far faster” than its predecessor and has improved text, video, and audio capabilities.

OpenAI stated that it eventually intends to enable video communication between users and ChatGPT. 

“This is the first time that we are really making a huge step forward when it comes to the ease of use,” Murati said.
