
AI bots mingled at a bar and had a party when researchers gave them a town

25 AI "agents" were given a virtual town and were observed going to a bar for lunch, planning a party and expressing other human-like behavior.

A society of 25 artificial intelligence (AI) bots was observed waking up, cooking breakfast, heading to work, going to the bar for lunch with friends and even throwing a party, all within a town created for them by six researchers.

The researchers from Google and Stanford University explained in an April 7 paper titled “Generative Agents: Interactive Simulacra of Human Behavior” that they built a virtual town populated with ChatGPT-trained “generative agents.”

The purpose of the study — which is yet to be peer-reviewed — was to create a small, interactive society of AI bots inspired by life-simulation games such as The Sims.

The agents could make a wide range of inferences about themselves, other agents and their town of “Smallville” by synthesizing new information, storing it in memory and then behaving in a way that reflects that knowledge.

A bird's-eye view of Smallville, which consists of houses, a park, a bar, a shopping center, a pharmacy and a college. Source: Arxiv.org

For example, the agents could turn off their kitchen stove when they see their breakfast is burning, coordinate plans and even engage in seemingly meaningful conversations with other agents.

The results led the researchers to conclude that the generative agents produce “believable” human behaviors:

“By fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.”

One example shared in the study explained that the AI agent “Isabella Rodriguez” invited nine other agents to a Valentine’s Day party at the town's cafe.

The details of the party were passed on to four others, including “Abigail,” who then expressed excitement about the upcoming event with Isabella.

A string of conversations that were carried out between the AI agents in relation to the upcoming Valentine's Day party. Source: Arxiv.org

In another example showing the “day in the life” of an AI agent, “John Lin” woke up at 7 am, brushed his teeth, had a shower, ate breakfast and checked the news at the dining table in his living room.

Before John's son Eddy headed off to school, John asked what he would be working on for the day; Eddy responded, and John remarked on it before sharing the news with his “wife,” Mei.

A morning in the life of a generative agent, John Lin with his wife Mei and son Eddy. Source: Arxiv.org

However, not everything went right in the experiment.

While each AI bot's memory grew with every passing interaction, the most relevant information was sometimes not retrieved, and as a result "some agents chose less typical locations for their actions."

Related: Elon Musk and tech execs call for pause on AI development

For example, when agents were deciding where to have lunch, many initially chose the town cafe. However, the researchers said:

"As some agents learned about a nearby bar, they opted to go there instead for lunch, even though the bar was intended to be a get-together location for later in the day unless the town had spontaneously developed an afternoon drinking habit."

In another example, some AI agents walked into shops in Smallville that were closed, while some college students walked in on others in the dorm bathroom because they thought it could be occupied by more than one person.

The researchers said they will soon expand on the “expressivity” and “performance” of the AI bots using the more advanced GPT-4, the latest iteration of ChatGPT, which has passed United States high school and law exams within the 90th percentile.

Magazine: NFT Creator, Emily Xie: Creating ‘organic’ generative art from robotic algorithms


Elon Musk and tech execs call for pause on AI development

The authors of the letter say that advanced artificial intelligence could cause a profound change in the history of life on Earth, for better or worse.

More than 2,600 tech leaders and researchers have signed an open letter urging a temporary pause on further artificial intelligence (AI) development, fearing “profound risks to society and humanity.”

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and a host of AI CEOs, CTOs and researchers were among the signatories of the letter, which was published by the United States think tank Future of Life Institute (FOLI) on March 22.

The institute called on all AI companies to “immediately pause” training AI systems that are more powerful than GPT-4 for at least six months, sharing concerns that “human-competitive intelligence can pose profound risks to society and humanity,” among other things.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening,” the institute wrote.

GPT-4 is the latest iteration of OpenAI’s artificial intelligence-powered chatbot, which was released on March 14. To date, it has passed some of the most rigorous U.S. high school and law exams within the 90th percentile. It is understood to be 10 times more advanced than the original version of ChatGPT.

There is an “out-of-control race” between AI firms to develop more powerful AI, which “no one — not even their creators — can understand, predict, or reliably control," FOLI claimed.

Among the top concerns were whether machines could flood information channels, potentially with “propaganda and untruth,” and whether machines could “automate away” all employment opportunities.

FOLI took these concerns one step further, suggesting that the entrepreneurial efforts of these AI companies may lead to an existential threat:

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

“Such decisions must not be delegated to unelected tech leaders,” the letter added.

The institute also agreed with a recent statement from OpenAI founder Sam Altman that an independent review should be required before training future AI systems.

Altman, in his Feb. 24 blog post, highlighted the need to prepare for artificial general intelligence (AGI) and artificial superintelligence (ASI).

Not all AI pundits have rushed to sign the petition, though. Ben Goertzel, the CEO of SingularityNET, explained in a March 29 Twitter response to Gary Marcus, the author of Rebooting.AI, that large language models (LLMs) won’t become AGIs, of which there have been few developments to date.

Instead, he said, research and development should be slowed down for things like bioweapons and nukes.

In addition to large language models like ChatGPT, AI-powered deepfake technology has been used to create convincing images, audio and video hoaxes. The technology has also been used to create AI-generated artwork, with some concerns raised about whether it could violate copyright laws in certain cases.

Related: ChatGPT can now access the internet with new OpenAI plugins

Galaxy Digital CEO Mike Novogratz recently told investors he was shocked at the amount of regulatory attention that has been given to crypto, while little has been directed toward artificial intelligence.

“When I think about AI, it shocks me that we’re talking so much about crypto regulation and nothing about AI regulation. I mean, I think the government’s got it completely upside-down,” he opined during a shareholders call on March 28.

FOLI argued that should an AI development pause not be enacted quickly, governments should step in with a moratorium.

“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it wrote.

Magazine: How to prevent AI from ‘annihilating humanity’ using blockchain
