
Elon Musk and tech execs call for pause on AI development

The authors of the letter say that advanced artificial intelligence could cause a profound change in the history of life on Earth, for better or worse.

More than 2,600 tech leaders and researchers have signed an open letter urging a temporary pause on further artificial intelligence (AI) development, fearing “profound risks to society and humanity.”

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and a host of AI CEOs, CTOs and researchers were among the signatories of the letter, which was published by the United States think tank Future of Life Institute (FOLI) on March 22.

The institute called on all AI companies to “immediately pause” training AI systems that are more powerful than GPT-4 for at least six months, sharing concerns that “human-competitive intelligence can pose profound risks to society and humanity,” among other things.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening,” the institute wrote.

GPT-4 is the latest iteration of OpenAI’s artificial intelligence-powered chatbot, released on March 14. To date, it has passed some of the most rigorous U.S. high school and law exams, scoring within the 90th percentile. It is understood to be 10 times more advanced than the original version of ChatGPT.

There is an “out-of-control race” between AI firms to develop more powerful AI, which “no one — not even their creators — can understand, predict, or reliably control,” FOLI claimed.

Among the top concerns were whether machines could flood information channels, potentially with “propaganda and untruth,” and whether they would “automate away” all employment opportunities.

FOLI took these concerns one step further, suggesting that the entrepreneurial efforts of these AI companies may lead to an existential threat:

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

“Such decisions must not be delegated to unelected tech leaders,” the letter added.

The institute also agreed with a recent statement from OpenAI founder Sam Altman that an independent review should be required before training future AI systems.

Altman in his Feb. 24 blog post highlighted the need to prepare for artificial general intelligence (AGI) and artificial superintelligence (ASI) robots.

Not all AI pundits have rushed to sign the petition, though. Ben Goertzel, the CEO of SingularityNET, explained in a March 29 Twitter response to Gary Marcus, the author of Rebooting.AI, that large language models (LLMs) won’t become AGIs, a field in which there have been few developments to date.

Instead, he said research and development should be slowed down for things like bioweapons and nukes.

In addition to language learning models like ChatGPT, AI-powered deep fake technology has been used to create convincing images, audio and video hoaxes. The technology has also been used to create AI-generated artwork, with some concerns raised about whether it could violate copyright laws in certain cases.

Related: ChatGPT can now access the internet with new OpenAI plugins

Galaxy Digital CEO Mike Novogratz recently told investors he was shocked by the amount of regulatory attention given to crypto, while little has been directed toward artificial intelligence.

“When I think about AI, it shocks me that we’re talking so much about crypto regulation and nothing about AI regulation. I mean, I think the government’s got it completely upside-down,” he opined during a shareholders call on March 28.

FOLI has argued that should the AI development pause not be enacted quickly, governments should step in and impose a moratorium.

“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it wrote.

Magazine: How to prevent AI from ‘annihilating humanity’ using blockchain


Ripple CTO shuts down ChatGPT’s XRP conspiracy theory

An AI chatbot alleged that Ripple can secretly control its blockchain through an undisclosed backdoor in the network's code, a claim ridiculed by the firm's CTO.

Ripple’s chief technology officer has responded to a conspiracy theory fabricated by Artificial Intelligence (AI) tool ChatGPT, which alleges the XRP Ledger (XRPL) is somehow being secretly controlled by Ripple.

According to a Dec. 3 Twitter thread by user Stefan Huber, when asked a series of questions regarding the decentralization of Ripple’s XRP Ledger, the ChatGPT bot suggested that while people could participate in the governance of the blockchain, Ripple has the “ultimate control” of XRPL.

Asked how this is possible without the consensus of participants, given the ledger's publicly available code, the AI alleged that Ripple may have “abilities that are not fully disclosed in the public source code.”

At one point, the AI said “the ultimate decision-making power” for XRPL “still lies with Ripple Labs” and the company could make changes “even if those changes do not have the support of the supermajority of the participants in the network.”

It also contrasted the XRPL with Bitcoin (BTC), saying the latter was “truly decentralized.”

However, Ripple CTO David Schwartz has called the bot’s logic into question, arguing that by the same logic, Ripple could secretly control the Bitcoin network, since neither claim can be verified from the code alone.

The bot also contradicted itself during the interaction, stating that the main reason for using “a distributed ledger like the [XRPL] is to enable secure and efficient transactions without the need for a central authority,” which is at odds with its claim that the XRPL is managed centrally.

Related: Ripple files final submission against SEC as landmark case nears end

ChatGPT is a chatbot tool built by AI research company OpenAI which is designed to interact “in a conversational way” and answer questions about almost anything a user asks. It can even complete some tasks such as creating and testing smart contracts.

The AI was trained on “vast amounts of data from the internet written by humans, including conversations,” according to OpenAI, which warned that because of this, some of the bot's responses can be “inaccurate, untruthful, and otherwise misleading at times.”

OpenAI CEO Sam Altman said upon its release on Nov. 30 that it’s “an early demo” and “very much a research release.” The tool has already seen over one million users, according to a Dec. 5 tweet by Altman.

Ethereum founder Vitalik Buterin also weighed in on the AI chatbot in a Dec. 4 tweet saying the idea that AI “will be free from human biases has probably died the hardest.”
