OpenAI Reveals New Independent Board Oversight Committee 

OpenAI on Monday said its Safety and Security Committee, which the company introduced in May as it dealt with controversy over security processes, will become an independent board oversight committee.

Takeaway Points:

  • OpenAI has stated that the Safety and Security Committee will now function as a separate board oversight committee.
  • Zico Kolter, the head of Carnegie Mellon University’s machine learning department, will serve as the group’s chair, the company announced.
  • According to people familiar with the situation, OpenAI is seeking a funding round that would value the business at over $150 billion.

OpenAI’s oversight committee

The group will be chaired by Zico Kolter, director of the machine learning department at Carnegie Mellon University’s school of computer science. Other members include Adam D’Angelo, an OpenAI board member and co-founder of Quora; former NSA chief and board member Paul Nakasone; and Nicole Seligman, former executive vice president at Sony.

The committee will oversee “the safety and security processes guiding OpenAI’s model deployment and development,” the company said. It recently wrapped up its 90-day review evaluating OpenAI’s processes and safeguards and then made recommendations to the board. OpenAI is releasing the group’s findings as a public blog post.

OpenAI, the Microsoft-backed startup behind ChatGPT and SearchGPT, is currently pursuing a funding round that would value the company at more than $150 billion, according to sources familiar with the situation who asked not to be named because details of the round haven’t been made public. Thrive Capital is leading the round and plans to invest $1 billion, and Tiger Global is planning to join as well. Microsoft, Nvidia, and Apple are reportedly also in talks to invest.

The committee’s five key recommendations included establishing independent governance for safety and security, enhancing security measures, being transparent about OpenAI’s work, collaborating with external organizations, and unifying the company’s safety frameworks.

OpenAI’s o1

Last week, OpenAI released o1, a preview version of its new AI model focused on reasoning and “solving hard problems.” The company said the committee “reviewed the safety and security criteria that OpenAI used to assess OpenAI o1’s fitness for launch,” as well as safety evaluation results.

The committee will, “along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed.”

While OpenAI has been in hyper-growth mode since launching ChatGPT in late 2022, it has simultaneously been dogged by controversy and high-level employee departures, with some current and former employees concerned that the company is growing too quickly to operate safely.

In July, Democratic senators sent a letter to OpenAI CEO Sam Altman concerning “questions about how OpenAI is addressing emerging safety concerns.” The prior month, a group of current and former OpenAI employees published an open letter describing concerns about a lack of oversight and an absence of whistleblower protections for those who wish to speak up.

And in May, a former OpenAI board member, speaking about Altman’s temporary ouster in November, said Altman had given the board “inaccurate information about the small number of formal safety processes that the company did have in place” on multiple occasions.

That month, OpenAI decided to disband its team focused on the long-term risks of AI just a year after announcing the group. The team’s leaders, Ilya Sutskever and Jan Leike, announced their departures from OpenAI in May. Leike wrote in a post on X that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”
