In November 2023, OpenAI appeared to be having its Apple 1985 moment. That is, the company’s figurehead, Sam Altman, was ousted in a manner reminiscent of Steve Jobs being fired from Apple 38 years earlier, after a similar falling-out with the board of directors.
Dr. Ilya Sutskever, then OpenAI’s chief scientist, was one of the board members who voted to force his fellow co-founder out of the company. He had decided that he could no longer trust Mr. Altman with the company’s mission: to create a system that can do what the human brain can.
It is safe to say that Dr. Sutskever’s decision was a fatal one. But not, as it turned out, for Sam Altman.
After an employee revolt, a media frenzy, and industry leaders seriously questioning this seemingly bizarre change of tide, Sam Altman was reinstated just days after he was booted out.
After Altman was reinstated he gained a board seat that hitherto had eluded him, and the company was firmly under his command.
Dr. Sutskever later said publicly that he regretted his decision, and he effectively resigned from the board.
On June 19th, 2024, Dr. Sutskever announced in an X post that he was starting a new company: Safe Superintelligence Inc (SSI Inc).
According to the company’s own X post (and its name), the mission of Sutskever’s new venture is likewise to create a machine that can do what the human brain can, but it places safety as its first priority.
According to Dr. Sutskever and his co-founders Daniel Gross and Daniel Levy:
‘We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.’
This way, the company will ‘scale in peace’.
AI hippies speaking of hyper-engineered ‘superintelligence’ and ‘peace’ in the same sentence may strike most as a cringeworthy Californian cliché. But hold your grimaces: this is important.
As the world began to process the release of ChatGPT back in 2022, lawmakers and citizens alike started to imagine that this new frontier of intelligence could plunge our societies into unmitigated hellscapes littered with images of 20-fingered politicians committing heinous crimes.
Back in December 2023, European lawmakers reached a provisional deal on landmark European Union rules governing the use of AI, covering governments’ use of AI in biometric surveillance and the regulation of AI products like ChatGPT.
Indeed, per reports, Apple is unlikely to launch its new ‘Apple Intelligence’ in the EU. The Wall Street Journal reported that many of Apple Intelligence’s key features would seemingly not be released in the EU due to the bloc’s Digital Markets Act (DMA).
Of course, it was Dr. Sutskever’s former company, OpenAI, whose ChatGPT won the contract with Apple to be the system integrated into Apple’s operating systems.
In short, the DMA enacted broad restrictions on big tech in the name of digital competition. According to The Wall Street Journal, one key requirement of the DMA was interoperability.
That is, software should be developed to function across multiple operating systems, and hardware designed so that consumers can move their data or switch providers far more easily.
So, what does this have to do with SSI Inc and Dr. Sutskever?
The US did take steps forward with their approach to AI security in October 2023, when President Biden signed an executive order on AI that required companies to report to the federal government if their technology could ‘aid countries or terrorists to make weapons of mass destruction’ and it sought to lessen the dangers of ‘deep fakes’.
However, since then there has been little noise in the US about AI security aimed at protecting consumers rather than politicians or governments.
Europe has spent the last 6 months creating an AI ecosystem that is designed to protect the consumer, and companies like SSI Inc. are poised to fill this niche as companies like OpenAI come under increasing criticism for, as Elon Musk put it, serving shareholders and not humanity.
AI expert Rotem Farkash recently said ‘we are definitely seeing a disaggregation in approaches to AI safety and the regulations placed on companies by Europe compared to the US’.
‘Europe is gaining a reputation for serving consumers before the big tech companies. For example,’ Farkash continued, ‘when they ruled that Apple should make iPhones USB-C based to prevent consumers from having to purchase multiple different cables for their devices.’
‘Companies like Dr. Sutskever’s Safe Superintelligence Inc. will absolutely look to leverage this niche and plug product gaps that companies like Apple and OpenAI have not been able to plug because of safety legislation and broader concerns.’
Jan Leike, a fellow departee from OpenAI, recently said that Dr. Sutskever’s differences with the AI giant’s leadership had ‘reached a breaking point’ as ‘safety culture and processes have taken a back seat to shiny products.’
Safe Superintelligence Inc. is not a hippie offshoot from OpenAI. It is a serious attempt at course-correction. We should take it very seriously. As companies like OpenAI face increasingly direct scrutiny from humanity, Dr. Ilya Sutskever will be able to say, ‘I told you so’.