OpenAI Co-founder Ilya Sutskever’s New AI Venture SSI Raises $1 Billion to Ensure Safe Superintelligence

Key Points:
– SSI, co-founded by Ilya Sutskever, raises $1 billion, valuing the startup at $5 billion.
– The company focuses on developing safe AI that surpasses human capabilities.
– Top investors like Andreessen Horowitz and Sequoia Capital back the project.

Safe Superintelligence (SSI), the latest venture from OpenAI’s former chief scientist Ilya Sutskever, has made a significant splash in the AI world by securing $1 billion in funding just three months after its inception. With a valuation of $5 billion, SSI aims to develop artificial intelligence systems that are not only more powerful than current models but are also designed with safety and ethical considerations at the forefront.

SSI’s funding round saw participation from top-tier venture capital firms such as Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. The company’s focus on AI safety—a hotly debated topic in the industry—has attracted significant interest, especially as concerns grow about the potential for rogue AI systems to cause harm. Sutskever’s new venture promises to prioritize safe AI development, a move that aligns with the increasing regulatory scrutiny faced by AI companies worldwide.

The startup, which currently operates with a small team split between Palo Alto, California, and Tel Aviv, Israel, plans to use the newly acquired funds to build its computing power and recruit top-tier talent. This strategic approach underscores SSI’s commitment to creating a team of highly trusted and skilled researchers and engineers who share the company’s mission of developing safe AI.

Sutskever’s decision to leave OpenAI and start SSI was driven by his vision to tackle a different aspect of AI development, one that diverges from the path he was previously on. His role at OpenAI had been diminished by a series of internal conflicts, including the controversial removal and subsequent reinstatement of CEO Sam Altman, and he departed the company earlier this year before going on to form SSI.

Unlike OpenAI’s unconventional corporate structure, which was designed with AI safety in mind but also led to internal turmoil, SSI operates as a traditional for-profit company. This structure allows SSI to focus more on its mission without the complications that arise from a more complex corporate governance system.

SSI’s CEO Daniel Gross, along with Sutskever and Daniel Levy, a former OpenAI researcher, is steering the company toward becoming a leader in AI safety. The team is committed to building AI systems that not only advance the technology but also remain aligned with human values. This focus on ethics and safety is becoming increasingly important as AI systems continue to evolve and integrate into more aspects of everyday life.

SSI’s approach to AI development includes rigorous vetting of potential hires to ensure they align with the company’s values. Gross emphasized the importance of recruiting individuals with “good character” who are motivated by the work rather than the hype surrounding AI.

As the AI industry continues to grow, SSI’s emphasis on safety could set it apart from other AI startups. The company plans to partner with cloud providers and chip manufacturers to meet its computing needs, but it has yet to announce specific partnerships. Sutskever’s early advocacy for scaling AI models laid the groundwork for many of the advances seen today, and his new approach at SSI suggests a continuation of this innovative mindset—albeit with a different focus.

With $1 billion in funding and the backing of some of the most prominent venture capitalists, SSI is poised to make a significant impact in the AI industry. The company’s focus on safe superintelligence could pave the way for new advancements that are not only powerful but also ethically sound.