Former OpenAI Chief Scientist Launches Safe Superintelligence Inc.
June 21, 2024
Ilya Sutskever, a co-founder and former chief scientist at OpenAI, has launched a new company, Safe Superintelligence Inc. (SSI), just one month after departing OpenAI. Joining him in this venture are former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy. SSI aims to advance AI capabilities while keeping safety the top priority, reflecting what its founders see as a critical need to control and constrain potentially superintelligent AI systems.
Commitment to AI Safety and Innovation
Sutskever’s departure from OpenAI followed disagreements over the company’s approach to AI safety. At SSI, he continues to emphasise the importance of advancing AI safely. In a recent tweet, Sutskever described SSI’s mission as prioritising safety alongside AI development. The company plans to treat safety and capabilities as twin technical problems, requiring innovative engineering and scientific breakthroughs to advance AI without compromising on safety and security.
Strategic Vision and Growth Plans
Unlike OpenAI’s initial non-profit model, SSI is established as a for-profit entity from the start, reflecting the need for significant capital to drive its mission. With offices in Palo Alto and Tel Aviv, SSI is actively recruiting technical talent. Co-founder Daniel Gross expressed confidence in the company’s ability to secure funding, underscoring the strong interest in AI and the team’s proven expertise. Sutskever’s vision for SSI involves scaling AI development safely and effectively, ensuring the company remains insulated from short-term commercial pressures.
(Visit TechCrunch for the full story)
*An AI tool was used to add an extra layer to the editing process for this story.