
AI safety

Ex-OpenAI chief scientist Ilya Sutskever launches SSI to focus on AI safety

The new company will develop AI safety and capabilities in tandem.

Co-founder and former chief scientist of OpenAI Ilya Sutskever and former OpenAI engineer Daniel Levy have joined forces with Daniel Gross, an investor and former partner at startup accelerator Y Combinator, to create Safe Superintelligence, Inc. (SSI). The new company’s goal and product are evident from its name.

SSI is a United States company with offices in Palo Alto and Tel Aviv. It will advance artificial intelligence (AI) by developing safety and capabilities in tandem, the trio of founders said in an online announcement on June 19.

Sutskever left OpenAI on May 14. He was involved in the firing of CEO Sam Altman and, after stepping down from the board when Altman returned, played an ambiguous role at the company. Daniel Levy was among the researchers who left OpenAI a few days after Sutskever.

Vitalik Buterin stresses AI risks amid OpenAI leadership upheaval

Vitalik Buterin calls superintelligent AI “risky” amid leadership changes at OpenAI, stressing the need for caution and decentralization in AI development.

Ethereum co-founder Vitalik Buterin has shared his take on “superintelligent” artificial intelligence, calling it “risky” in response to ongoing leadership changes at OpenAI.

On May 19, Cointelegraph reported that OpenAI’s former head of alignment, Jan Leike, resigned after saying he had reached a “breaking point” with management over the company’s core priorities.

Leike alleged that “safety culture and processes have taken a backseat to shiny products” at OpenAI, a criticism many tied to the company’s push toward artificial general intelligence (AGI).
