
This Scientist Left OpenAI Last Year. His Startup Is Already Worth $30 Billion.

  • snitzoid
  • Mar 5
  • 4 min read

I've decided to take a computer class at Harper Community College (& start an AI think tank).



Safe Superintelligence has become one of the most valuable companies in tech thanks to the reputation of former OpenAI researcher Ilya Sutskever

By Berber Jin and Deepa Seetharaman, WSJ

March 4, 2025 5:30 am ET


Silicon Valley’s hottest investment isn’t a new app or hardware product. It’s one man.


AI researcher Ilya Sutskever is the primary reason venture capitalists are putting some $2 billion into his secretive company Safe Superintelligence, according to people familiar with the matter. The new funding round values SSI at $30 billion, making it one of the most valuable AI startups in the world.


Sutskever became one of the industry’s most revered AI researchers as chief scientist at OpenAI, where he helped develop the technology behind ChatGPT. He left OpenAI last year following a painful rupture with the company’s chief executive, Sam Altman.


SSI says it doesn’t plan to release any products until it develops superintelligence—an industry term for an AI that can outsmart experts in nearly every field. Competitors like Google, OpenAI and Anthropic are trying to develop similarly advanced systems, but are releasing consumer chatbots and business applications in the interim to generate revenue and demonstrate their progress.


Sutskever has told associates he isn’t developing advanced AI using the same methods he and colleagues used at OpenAI. He has said he has instead identified a “different mountain to climb” that is showing early signs of promise, according to people close to the company.


“Everyone is curious about exactly what he’s pushing and exactly what the insight is,” said James Cham, a partner at venture firm Bloomberg Beta, which hasn’t invested in SSI. “It’s super-high risk, and if it works out, maybe you have the potential to be part of someone who is changing the world.”


Top secret

Most AI startups work hard to get attention, hoping it will help attract employees and investors in a highly competitive space. SSI operates as secretly as it can out of offices in Silicon Valley and Tel Aviv.


Its bare-bones website contains little more than a 223-word mission statement. Its approximately 20 employees—a fraction of the 1,000-plus at OpenAI and Anthropic—are discouraged from mentioning SSI on their LinkedIn profiles, according to knowledgeable people.


Candidates who secure an in-person interview are instructed to leave their phone in a Faraday cage, a container that blocks cellular and Wi-Fi signals, before entering SSI’s offices, one of the knowledgeable people said.





Most of its staffers aren’t well known in Silicon Valley because the company is looking for promising technologists whom Sutskever can mentor, rather than experienced people likely to jump between employers and take what they have learned with them.


Still, Silicon Valley’s top investors, including Sequoia Capital and Andreessen Horowitz, have poured money into the company. The latest financing, which marks a sixfold increase from its $5 billion valuation in September, is being led by Greenoaks Capital.


SSI’s talks to raise funding at a $30 billion valuation, which doesn’t include the new cash it is collecting, were previously reported by Bloomberg.


Sutskever was born in the former Soviet Union, grew up in Israel and made his name as a graduate student in Canada, where he co-authored a paper on deep-learning AI algorithms, which use a process called scaling to become smarter by processing massive amounts of data.
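The "scaling" mentioned here refers to the empirical pattern, widely reported in AI research (for example, in OpenAI's 2020 scaling-laws work), that a model's error tends to fall smoothly, roughly as a power law, as training data and compute grow. A minimal toy sketch of that idea in Python follows; the constants are illustrative assumptions chosen for demonstration, not figures from OpenAI or SSI:

    # Toy illustration of a power-law scaling curve: predicted loss falls
    # smoothly as training data grows. All constants here are made-up
    # assumptions for demonstration, not real measurements.
    def predicted_loss(num_tokens: float, a: float = 400.0,
                       alpha: float = 0.3, floor: float = 1.7) -> float:
        """Hypothetical curve: loss = floor + a * num_tokens**(-alpha)."""
        return floor + a * num_tokens ** (-alpha)

    for tokens in (1e9, 1e10, 1e11, 1e12):
        print(f"{tokens:8.0e} tokens -> predicted loss {predicted_loss(tokens):.3f}")

More data yields a lower predicted loss, which is the rough intuition behind "becoming smarter by processing massive amounts of data"; the open question the article gestures at is whether Sutskever's "different mountain" departs from this recipe.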


He later joined Google, but left in 2015 to become one of the first employees at OpenAI. He was attracted to Altman and fellow co-founder Elon Musk’s vision of a nonprofit dedicated to developing artificial general intelligence—AI as capable as most people at most things—for the public good.


Colleagues jokingly described Sutskever as a prophet prone to musing about what a world with AGI might look like and how to prevent it from causing catastrophic disasters.


“Our goal is to make a mankind-loving AGI,” he said at the company’s 2022 holiday party.


Splitting with Sam Altman

After ChatGPT was released as a test in late 2022 and became a worldwide sensation, OpenAI became less of a pure research lab and more of a traditional company focused on products and revenue. Sutskever and members of his team felt that left them with fewer resources for studying advanced AI and curbing its potential risks.


His relationship with Altman, who became OpenAI’s CEO, deteriorated. In November 2023, Sutskever delivered a message to his boss that would change both their lives: OpenAI’s board was firing Altman after determining he hadn’t been consistently candid with them.


The move backfired when hundreds of employees threatened to quit and Microsoft offered to hire them along with the deposed CEO. Sutskever said he regretted “my participation in the board’s actions.”


Altman was rehired in under a week. Sutskever remained officially employed, but stopped working.


He resigned last May and started SSI with former OpenAI researcher Daniel Levy and investor Daniel Gross. By focusing entirely on creating a safe superintelligence, the new company hopes to avoid the tension between products and research at OpenAI as it works toward a goal even more ambitious than AGI.


The startup put together a small amount of seed funding to get started before raising $1 billion in September.


Rumors have since swirled about SSI’s business and technological strategy. In a rare public appearance at the NeurIPS AI conference in December, Sutskever discussed the kind of superintelligence he is trying to develop.


He told thousands of fellow researchers that when such systems emerge, they could be unpredictable and self-aware, and might even want rights for themselves.


“It’s not a bad end result if you have AIs and all they want is to coexist with us,” he said.


Write to Berber Jin at berber.jin@wsj.com and Deepa Seetharaman at deepa.seetharaman@wsj.com

 
 
 
