
Superintelligent AI

Vitalik Buterin stresses AI risks amid OpenAI leadership upheaval

Vitalik Buterin calls superintelligent AI “risky” amid leadership changes at OpenAI, stressing the need for caution and decentralization in AI development.

Ethereum co-founder Vitalik Buterin has shared his take on “superintelligent” artificial intelligence, calling it “risky” in response to ongoing leadership changes at OpenAI.

On May 19, Cointelegraph reported that OpenAI’s former head of alignment, Jan Leike, resigned after saying he had reached a “breaking point” with management on the company’s core priorities.

Leike alleged that “safety culture and processes have taken a backseat to shiny products” at OpenAI, with many observers pointing to developments around artificial general intelligence (AGI).


Vitalik Buterin: AI may surpass humans as the ‘apex species’

“Even Mars may not be safe” if superintelligent AI turns against humanity, warns Ethereum co-founder Vitalik Buterin.

Super-advanced artificial intelligence, left unchecked, has a “serious chance” of surpassing humans to become the next “apex species” of the planet, according to Ethereum co-founder Vitalik Buterin.

But whether that happens will boil down to how humans intervene in AI development, he said.

In a Nov. 27 blog post, Buterin, seen by some as a thought leader in the cryptocurrency space, argued AI is “fundamentally different” from other recent inventions — such as social media, contraception, airplanes, guns, the wheel, and the printing press — as AI can create a new type of “mind” that can turn against human interests, adding:

“AI is [...] a new type of mind that is rapidly gaining in intelligence, and it stands a serious chance of overtaking humans' mental faculties and becoming the new apex species on the planet.”

Buterin argued that unlike climate change, a man-made pandemic, or nuclear war, superintelligent AI could potentially end humanity and leave no survivors, particularly if it ends up viewing humans as a threat to its own survival. 

“One way in which AI gone wrong could make the world worse is (almost) the worst possible way: it could literally cause human extinction.”

“Even Mars may not be safe,” Buterin added.

Buterin cited an August 2022 survey of over 4,270 machine learning researchers, who estimated a 5-10% chance that AI could kill humanity.

While Buterin stressed that claims of this nature are “extreme,” he argued there are also ways for humans to prevail.

Brain interfaces and techno-optimism

Buterin suggested integrating brain-computer interfaces (BCI) to offer humans more control over powerful forms of AI-based computation and cognition.

A BCI is a communication pathway between the brain's electrical activity and an external device, such as a computer or robotic limb.

This would cut the two-way communication loop between humans and machines from seconds to milliseconds and, more importantly, ensure humans retain some degree of “meaningful agency” over the world, Buterin said.

A diagram depicting two possible feedback loops between humans and AI. Source: Vitalik.eth


Buterin suggested this route would be “safer” as humans could be involved in each decision made by the AI machine.

“We [can] reduce the incentive to offload high-level planning responsibility to the AI itself, and thereby reduce the chance that the AI does something totally unaligned with humanity's values on its own.”

The Ethereum co-founder also suggested “active human intention” to take AI in a direction that benefits humanity, as maximizing profit doesn’t always lead humans down the most desirable path.

Buterin concluded that “we, humans, are the brightest star” in the universe, as we’ve developed technology to expand upon human potential for thousands of years, and hopefully many more to come:

“Two billion years from now, if the Earth or any part of the universe still bears the beauty of Earthly life, it will be human artifices like space travel and geoengineering that will have made it happen.”

