
artificial general intelligence

AI can save humanity, but only if the people control it — Ben Goertzel

Decentralizing and democratizing AGI is the best way to prevent corporations and militaries from abusing its incredible power, SingularityNET’s CEO tells The Agenda.

With the recent release of the iPhone 16, which Apple has promised is optimized for artificial intelligence, it’s clear that AI is officially front of mind, once again, for the average consumer. Yet the technology remains rather limited compared with the vast abilities the most forward-thinking AI technologists anticipate will be achievable in the near future.

As much excitement as there is around the technology, many still fear the potentially negative consequences of integrating it so deeply into society. One common concern is that a sufficiently advanced AI could determine humanity to be a threat and turn against us all, a scenario imagined in many science fiction stories. However, according to a leading AI researcher, most people’s concerns can be alleviated by decentralizing and democratizing AI’s development.

On Episode 46 of The Agenda podcast, hosts Jonathan DeYoung and Ray Salmond separate fact from fiction by speaking with Ben Goertzel, the computer scientist and researcher who first popularized the term “artificial general intelligence,” or AGI. Goertzel currently serves as the CEO of SingularityNET and the ASI Alliance, where he leads the projects’ efforts to develop the world’s first AGI.


ASI Alliance ‘quite far’ from OpenAI in hardware — SingularityNET CEO

The ASI Alliance can enable mass adoption of decentralized networks in the same way the world jumped into ChatGPT, SingularityNET CEO believes.

The Artificial Superintelligence (ASI) Alliance, an industry merger aiming to challenge Big Tech dominance in artificial intelligence, has a long path to reach rivals’ computing power. Still, a key alliance member believes it can offer much smarter decentralization solutions.

On Sept. 19, ASI officially opened voting on bringing the cloud computing and blockchain platform Cudos into its alliance in a move to expand its computing power and AI tools.

Open until Sept. 24, the vote allows the community to decide whether Cudos should join and merge its native token, CUDOS, with the ASI Alliance, which currently includes SingularityNET, Ocean Protocol and Fetch.ai.


Open Source Tools Level Playing Field for Smaller AI Firms, Says Decentralized AI Proponent

To preempt lawsuits and counter allegations that they are training their respective artificial intelligence (AI) models with illegally obtained data, AI firms should rely on publicly available or open-source data, according to Alberto Fernandez. Fernandez, a proponent of decentralized AI and the European representative of Qubic Ecosystem, emphasizes that AI firms should consider anonymizing […]


Vitalik Buterin stresses AI risks amid OpenAI leadership upheaval

Vitalik Buterin calls superintelligent AI “risky” amid leadership changes at OpenAI, stressing the need for caution and decentralization in AI development.

Ethereum co-founder Vitalik Buterin has shared his take on “superintelligent” artificial intelligence, calling it “risky” in response to ongoing leadership changes at OpenAI.

On May 19, Cointelegraph reported that OpenAI’s former head of alignment, Jan Leike, resigned after saying he had reached a “breaking point” with management on the company’s core priorities.

Leike alleged that “safety culture and processes have taken a backseat to shiny products” at OpenAI, with many pointing toward developments around artificial general intelligence (AGI).


Elon Musk and tech execs call for pause on AI development

The authors of the letter say that advanced artificial intelligence could cause a profound change in the history of life on Earth, for better or worse.

More than 2,600 tech leaders and researchers have signed an open letter urging a temporary pause on further artificial intelligence (AI) development, fearing “profound risks to society and humanity.”

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and a host of AI CEOs, CTOs and researchers were among the signatories of the letter, which was published by the United States think tank Future of Life Institute (FOLI) on March 22.

The institute called on all AI companies to “immediately pause” training AI systems that are more powerful than GPT-4 for at least six months, sharing concerns that “human-competitive intelligence can pose profound risks to society and humanity,” among other things.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening,” the institute wrote.

GPT-4 is the latest iteration of OpenAI’s artificial intelligence-powered chatbot, which was released on March 14. To date, it has passed some of the most rigorous U.S. high school and law exams, scoring within the 90th percentile. It is understood to be 10 times more advanced than the original version of ChatGPT.

There is an “out-of-control race” between AI firms to develop more powerful AI, which “no one — not even their creators — can understand, predict, or reliably control,” FOLI claimed.

Among the top concerns were whether machines could flood information channels with “propaganda and untruth” and whether they would “automate away” all employment opportunities.

FOLI took these concerns one step further, suggesting that the entrepreneurial efforts of these AI companies may lead to an existential threat:

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

“Such decisions must not be delegated to unelected tech leaders,” the letter added.

The institute also agreed with a recent statement from OpenAI founder Sam Altman that an independent review should be required before training future AI systems.

In his Feb. 24 blog post, Altman highlighted the need to prepare for artificial general intelligence (AGI) and artificial superintelligence (ASI) robots.

Not all AI pundits have rushed to sign the petition, though. Ben Goertzel, the CEO of SingularityNET, explained in a March 29 Twitter response to Gary Marcus, the author of Rebooting.AI, that large language models (LLMs) won’t become AGIs, a technology that, to date, has seen little concrete development.

Instead, he said research and development should be slowed down for things like bioweapons and nukes.

In addition to large language models like ChatGPT, AI-powered deepfake technology has been used to create convincing image, audio and video hoaxes. The technology has also been used to create AI-generated artwork, with some concerns raised about whether it could violate copyright laws in certain cases.

Related: ChatGPT can now access the internet with new OpenAI plugins

Galaxy Digital CEO Mike Novogratz recently told investors he was shocked by the amount of regulatory attention given to crypto, while little has been directed toward artificial intelligence.

“When I think about AI, it shocks me that we’re talking so much about crypto regulation and nothing about AI regulation. I mean, I think the government’s got it completely upside-down,” he opined during a shareholders call on March 28.

FOLI has argued that if the AI development pause is not enacted quickly, governments should step in with a moratorium.

“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it wrote.

Magazine: How to prevent AI from ‘annihilating humanity’ using blockchain


AI Altcoin Based on Cardano (ADA) Rallies After Elon Musk ChatGPT Rumors Swirl

An altcoin project focused on artificial general intelligence (AGI) technology has spiked following rumors that tech mogul Elon Musk had plans in the works for a new version of ChatGPT. Over the weekend, Musk told his 130 million Twitter followers how he was feeling about AGI and the potential threat it poses to humanity. “Having […]

