
‘ChatGPT-like personal AI’ can now be run locally, Musk warns ‘singularity is near’

The AI model “GPT4All” isn’t as powerful as OpenAI’s ChatGPT-4 but requires just 4GB of space and doesn’t need the internet, providing immunity to AI censorship.

It is now possible to install and run a “ChatGPT-like” personal artificial intelligence (AI) on a home computer even without an internet connection, and Elon Musk has warned AI development has brought us closer to a technological point of no return.

Brian Roemmele, the founder of the technology blog Multiplex, wrote a detailed guide on how to install the personal AI “GPT4All” on April 11, calling it a “first PC” moment for personal AI.

GPT4All, which was built by programmers from AI development firm Nomic AI, was reportedly developed in four days at a cost of just $1,300 and requires only 4GB of space.

Roemmele cautioned that it is not as powerful as OpenAI's ChatGPT-4, itself a huge improvement on its predecessor, ChatGPT-3.5, but said it remains a capable tool in its own right, noting:

“Are there limitations? Of course. It is not ChatGPT 4, and it will not handle some things correctly. However, it is one of the most powerful Personal AI systems ever released.”

Tesla and Twitter CEO Elon Musk has been a vocal critic of AI development and signed a letter published by the United States think tank Future of Life Institute on March 22 calling for all AI companies to “immediately pause” training powerful AI systems.

Related: Elon Musk reportedly buys thousands of GPUs for Twitter AI project

The letter warned that “human-competitive intelligence can pose profound risks to society and humanity,” a sentiment that Musk echoed in an April 12 tweet in which he joked “The Singularity is near.”

The singularity refers to a hypothetical point in time where technological growth becomes uncontrollable and irreversible, possibly helped by a self-improving synthetic intelligence.

While some view a technological singularity as a positive development, others believe it could be disastrous, leading to a dystopian future — similar to that depicted in the popular Terminator sci-fi franchise.

Roemmele has a different perspective on AI, arguing in his guide that it is more appropriately called IA, or “Intelligence Amplification,” and in response to the push for a pause suggested that people should “choose a side.”

Roemmele claimed that “AI is rapidly becoming the target of censorship, regulation and worse,” citing Italy blocking ChatGPT on March 31, and added that “this may be the last chance to own your own AI.”


‘Biased, deceptive’: Center for AI accuses ChatGPT creator of violating trade laws

The group believes GPT-4 violates Section 5 of the Federal Trade Commission Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.”

The Center for Artificial Intelligence and Digital Policy (CAIDP) has filed a complaint with the United States Federal Trade Commission (FTC) in an attempt to halt the release of powerful AI systems to consumers.

The complaint centers on OpenAI's recently released large language model, GPT-4, which CAIDP's March 30 filing describes as "biased, deceptive, and a risk to privacy and public safety."

CAIDP, an independent non-profit research organization, argued that the commercial release of GPT-4 violates Section 5 of the FTC Act, which prohibits "unfair or deceptive acts or practices in or affecting commerce."

To back its case, the AI ethics organization pointed to passages in the GPT-4 System Card, which state:

“We found that the model has the potential to reinforce and reproduce specific biases and worldviews, including harmful stereotypical and demeaning associations for certain marginalized groups.”

In the same document, it stated: “AI systems will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement.”

Complaint filed by the Center for AI and Digital Policy against OpenAI. Source: CAIDP

CAIDP added that OpenAI released GPT-4 to the public for commercial use with full knowledge of these risks and that no independent assessment of GPT-4 was undertaken prior to its release.

As a result, the CAIDP wants the FTC to conduct an investigation into the products of OpenAI and other operators of powerful AI systems:

“It is time for the FTC to act [...] CAIDP urges the FTC to open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.”

While ChatGPT was released in November 2022, the latest version, GPT-4, is considered to be 10 times more intelligent. Upon its release on March 14, a study found that GPT-4 was able to pass some of the most rigorous U.S. high school and law exams, scoring in the 90th percentile.

It can also detect smart contract vulnerabilities on Ethereum, among other things.

The complaint comes as Elon Musk, Apple’s Steve Wozniak, and a host of AI experts signed a petition to “pause” development on AI systems more powerful than GPT-4. 

CAIDP president Marc Rotenberg was among the more than 2,600 signatories of the petition, which was published by the Future of Life Institute on March 22.

Related: Here’s how ChatGPT-4 spends $100 in crypto trading

The authors argued that “Advanced AI could represent a profound change in the history of life on Earth,” for better or for worse.

The United Nations Educational, Scientific and Cultural Organization (UNESCO) has also called on states to implement the UN’s “Recommendation on the Ethics of AI” framework.

In other news, a former AI researcher at Google recently alleged that Google’s AI chatbot, “Bard,” had been trained using ChatGPT’s responses.

While the researcher has resigned over the incident, Google executives have denied the allegations put forth by their former colleague.

Magazine: How to prevent AI from ‘annihilating humanity’ using blockchain


Elon Musk and tech execs call for pause on AI development

The authors of the letter say that advanced artificial intelligence could cause a profound change in the history of life on Earth, for better or worse.

More than 2,600 tech leaders and researchers have signed an open letter urging a temporary pause on further artificial intelligence (AI) development, fearing “profound risks to society and humanity.”

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and a host of AI CEOs, CTOs and researchers were among the signatories of the letter, which was published by the United States think tank Future of Life Institute (FOLI) on March 22.

The institute called on all AI companies to “immediately pause” training AI systems that are more powerful than GPT-4 for at least six months, sharing concerns that “human-competitive intelligence can pose profound risks to society and humanity,” among other things.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening,” the institute wrote.

GPT-4 is the latest iteration of OpenAI’s artificial intelligence-powered chatbot, which was released on March 14. To date, it has passed some of the most rigorous U.S. high school and law exams, scoring in the 90th percentile. It is understood to be 10 times more advanced than the original version of ChatGPT.

There is an “out-of-control race” between AI firms to develop more powerful AI, which “no one — not even their creators — can understand, predict, or reliably control,” FOLI claimed.

Among the top concerns were whether machines could flood information channels, potentially with “propaganda and untruth,” and whether machines would “automate away” all employment opportunities.

FOLI took these concerns one step further, suggesting that the entrepreneurial efforts of these AI companies may lead to an existential threat:

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

“Such decisions must not be delegated to unelected tech leaders,” the letter added.

The institute also agreed with a recent statement from OpenAI co-founder Sam Altman that an independent review should be required before training future AI systems.

Altman, in his Feb. 24 blog post, highlighted the need to prepare for artificial general intelligence (AGI) and artificial superintelligence (ASI).

Not all AI pundits have rushed to sign the petition, though. Ben Goertzel, the CEO of SingularityNET, argued in a March 29 Twitter response to Gary Marcus, the author of Rebooting.AI, that large language models (LLMs) won't become AGIs, of which there have been few developments to date.

Instead, he said research and development should be slowed down for things like bioweapons and nukes.

In addition to large language models like ChatGPT, AI-powered deepfake technology has been used to create convincing image, audio and video hoaxes. The technology has also been used to create AI-generated artwork, with some concerns raised about whether it could violate copyright laws in certain cases.

Related: ChatGPT can now access the internet with new OpenAI plugins

Galaxy Digital CEO Mike Novogratz recently told investors he was shocked by the amount of regulatory attention that has been given to crypto, while little has been directed toward artificial intelligence.

“When I think about AI, it shocks me that we’re talking so much about crypto regulation and nothing about AI regulation. I mean, I think the government’s got it completely upside-down,” he opined during a shareholders call on March 28.

FOLI has argued that if a pause on AI development cannot be enacted quickly, governments should step in and impose a moratorium.

“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it wrote.

