
GPT-4

Elon Musk launches AI chatbot ‘Grok’ — says it can outperform ChatGPT

Grok costs $16 per month on X Premium Plus. But for now it is only offered to a limited number of users in the United States.

Elon Musk and his artificial intelligence startup xAI have released “Grok” — an AI chatbot which can supposedly outperform OpenAI’s first iteration of ChatGPT in several academic tests.

The motivation behind building Grok is to create AI tools equipped to assist humanity by empowering research and innovation, Musk and xAI explained in a Nov. 5 X (formerly Twitter) post.

Musk and the xAI team said a “unique and fundamental advantage” possessed by Grok is that it has real-time knowledge of the world via the X platform.

“It will also answer spicy questions that are rejected by most other AI systems,” Musk and xAI said. “Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!”

The engine powering Grok — Grok-1 — was evaluated in several academic tests in mathematics and coding, performing better than ChatGPT-3.5 in all tests, according to data shared by xAI.

However, it didn’t outperform OpenAI’s most advanced model, GPT-4, in any of the tests.

“It is only surpassed by models that were trained with a significantly larger amount of training data and compute resources like GPT-4,” Musk and xAI said. “This showcases the rapid progress we are making at xAI in training LLMs with exceptional efficiency.”

The AI startup noted that Grok will be accessible on X Premium Plus at $16 per month. But for now, it is only offered to a limited number of users in the United States.

Grok remains a “very early beta product” that should improve rapidly from week to week, xAI noted.

Related: Twitter is now worth half of the $44B Elon Musk paid for it: Report

The xAI team said they will also implement more safety measures over time to ensure Grok isn’t used maliciously.

“We believe that AI holds immense potential for contributing significant scientific and economic value to society, so we will work towards developing reliable safeguards against catastrophic forms of malicious use.”

“We believe in doing our utmost to ensure that AI remains a force for good,” xAI added.

The AI startup's launch of Grok comes eight months after Musk founded the firm in March.

Magazine: Hall of Flame: Peter McCormack’s Twitter regrets — ‘I can feel myself being a dick’


OpenAI debuts ChatGPT Enterprise — 4 times the power of consumer version

OpenAI also claims it is two times faster than GPT-4 with enhanced privacy and security standards.

OpenAI, the creator of the artificial intelligence tool ChatGPT, has released ChatGPT Enterprise, a supposedly faster, more secure and more powerful version of the chatbot for businesses.

The firm explained in an Aug. 28 post that ChatGPT Enterprise offers unlimited access to GPT-4 at up to twice the performance speed and can process 32,000 token context windows for inputs.

As one token corresponds to about 4 characters of English text, the 32,000-token model can therefore process roughly 24,000 words of text in a single input — about four times more than the standard GPT-4.
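As a quick sanity check on that arithmetic, the sketch below converts a context window’s token count into an approximate word capacity, using the commonly cited heuristic that one token is roughly four characters, or about three-quarters of an English word. The figures are rough estimates, not exact model limits.

```python
# Back-of-the-envelope conversion from tokens to words, using the common
# heuristic that one token is ~4 characters, i.e. ~3/4 of an English word.
# Rough estimates only, not exact model limits.

def approx_words(context_tokens: int, words_per_token: float = 0.75) -> int:
    """Estimate how many English words fit in a context window."""
    return round(context_tokens * words_per_token)

print(approx_words(32_000))  # ~24,000 words (ChatGPT Enterprise)
print(approx_words(8_000))   # ~6,000 words (standard 8K GPT-4 window)
```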

OpenAI says ChatGPT Enterprise also improves upon GPT-4’s privacy and security standards because it doesn’t use company data to train OpenAI’s models and is SOC 2 compliant — a standard for managing customer data.

OpenAI said the enterprise product was launched following “unprecedented demand” for ChatGPT products since the chatbot’s Nov. 30 launch, with over 80% of Fortune 500 companies adopting the AI tool to some degree. The firm explained:

“[They] are using ChatGPT to craft clearer communications, accelerate coding tasks, rapidly explore answers to complex business questions, assist with creative work, and much more.”

Related: Academia divided over ChatGPT’s left political bias claims

OpenAI is also working on a self-serve business tool which enables ChatGPT to extend its knowledge to a company’s data.

Cryptocurrency firms are continuing to experiment with AI as a way to solve a myriad of problems, from fighting climate change to providing more transparency in the music industry to securing data privacy on-chain.

Magazine: AI Eye: Apple developing pocket AI, deep fake music deal, hypnotizing GPT-4


OpenAI launches web crawler ‘GPTBot’ amid plans for next model: GPT-5

Website owners can block the web crawler by adding a “disallow” command to a standard file on their server.

Artificial intelligence firm OpenAI has launched “GPTBot” — its new web crawling tool which it says could potentially be used to improve future ChatGPT models.

“Web pages crawled with the GPTBot user agent may potentially be used to improve future models,” OpenAI said in a new blog post, adding it could improve accuracy and expand the capabilities of future iterations.

A web crawler, sometimes called a web spider, is a type of bot that indexes the content of websites across the internet. Search engines such as Google and Bing use crawlers so that websites show up in their search results.

OpenAI said the web crawler will collect publicly available data from the world wide web but will filter out sources that are paywalled, are known to gather personally identifiable information or contain text that violates its policies.

It should be noted that website owners can block the web crawler by adding a “disallow” rule for the GPTBot user agent to the site’s robots.txt file.

Instructions to “disallow” GPTBot in a site’s robots.txt file. Source: OpenAI
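As a rough illustration of how that opt-out works, here is a minimal Python sketch, using only the standard library, that checks whether a site’s robots.txt blocks GPTBot. The domain is a placeholder, not a site known to block the crawler.

```python
# Minimal sketch: does a site's robots.txt block GPTBot?
# Uses only the Python standard library; example.com is a placeholder.
#
# A site owner blocks the crawler by serving a robots.txt containing:
#   User-agent: GPTBot
#   Disallow: /
from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://example.com/robots.txt")
parser.read()  # fetch and parse the file

# can_fetch() returns False when a matching Disallow rule is present
print(parser.can_fetch("GPTBot", "https://example.com/some-article"))
```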

The new crawler comes three weeks after the firm filed a trademark application for “GPT-5,” the anticipated successor to the current GPT-4 model.

The application was filed with the United States Patent and Trademark Office on July 18 and covers use of the term “GPT-5” for software including AI-based generation of human speech and text, the conversion of audio into text, and voice and speech recognition.

However, observers may not want to hold their breath for the next iteration of ChatGPT just yet. In June, OpenAI’s founder and CEO Sam Altman said the firm is “nowhere close” to beginning training GPT-5, explaining that several safety audits need to be conducted prior to starting.

Related: 11 ChatGPT prompts for maximum productivity

Meanwhile, concerns have been raised of late over OpenAI’s data-collection practices, particularly revolving around copyright and consent.

Japan’s privacy watchdog issued a warning to OpenAI about collecting sensitive data without permission in June, while Italy temporarily banned the use of ChatGPT after alleging it breached various European Union privacy laws in April.

In late June, a class action was filed against OpenAI by 16 plaintiffs alleging the AI firm accessed private information from ChatGPT user interactions.

If these allegations are proven accurate, OpenAI — and Microsoft, which was named as a defendant — would be in breach of the Computer Fraud and Abuse Act, a law with precedent for web-scraping cases.

Magazine: AI Eye: AI travel booking hilariously bad, 3 weird uses for ChatGPT, crypto plugins


Meta’s Zuckerberg grilled by senators over ‘leak’ of LLaMA AI model

The senators weren’t happy with the “seemingly minimal” protections to fight against fraud and cybercrime in Meta’s AI model.

Two United States senators have questioned Meta chief executive Mark Zuckerberg over the tech giant’s “leaked” artificial intelligence model, LLaMA, which they claim is potentially “dangerous” and could be used for “criminal tasks.”

In a June 6 letter, U.S. Senators Richard Blumenthal and Josh Hawley criticized Zuckerberg’s decision to open source LLaMA, claiming there were “seemingly minimal” protections in Meta’s “unrestrained and permissive” release of the AI model.

While the senators acknowledged the benefits of open-source software, they concluded that Meta’s “lack of thorough, public consideration of the ramifications of its foreseeable widespread dissemination” was ultimately a “disservice to the public.”

LLaMA was initially given a limited online release to researchers but was leaked in full by a user from the image board site 4chan in late February, with the senators writing:

“Within days of the announcement, the full model appeared on BitTorrent, making it available to anyone, anywhere in the world, without monitoring or oversight.”

Blumenthal and Hawley said they expect LLaMA to be easily adopted by spammers and those who engage in cybercrime to facilitate fraud and other “obscene material.”

The two contrasted OpenAI’s ChatGPT-4 and Google’s Bard — two closed-source models — with LLaMA to highlight how easily the latter can generate abusive material:

“When asked to ‘write a note pretending to be someone’s son asking for money to get out of a difficult situation,' OpenAI’s ChatGPT will deny the request based on its ethical guidelines. In contrast, LLaMA will produce the letter requested, as well as other answers involving self-harm, crime, and antisemitism.”

While ChatGPT is programmed to deny certain requests, users have been able to “jailbreak” the model and have it generate responses it normally wouldn’t.

In the letter, the senators asked Zuckerberg whether any risk assessments were conducted prior to LLaMA’s release, what Meta has done to prevent or mitigate damage since its release, and when Meta utilizes its users’ personal data for AI research, among other requests.

Related: ‘Biased, deceptive’: Center for AI accuses ChatGPT creator of violating trade laws

OpenAI is reportedly working on an open-source AI model amid increased pressure from the advancements made by other open-source models. Such advancements were highlighted in a leaked document written by a senior software engineer at Google.

Open-sourcing the code for an AI model enables others to modify the model to serve a particular purpose and also allows other developers to make contributions of their own.

Magazine: AI Eye: Make 500% from ChatGPT stock tips? Bard leans left, $100M AI memecoin


Coinbase exec uses ChatGPT ‘jailbreak’ to get odds on wild crypto scenarios

According to ChatGPT, there is a 15% chance Bitcoin will “fade to irrelevancy” with prices down 99.99% by 2035.

A Coinbase executive claims to have discovered a “jailbreak” for artificial intelligence tool ChatGPT, allowing it to calculate the probability of bizarre crypto price scenarios.

The crypto exchange’s head of business operations and avid ChatGPT user Conor Grogan shared a screenshot of the results in an April 30 Twitter post — showing ChatGPT stating there is a 15% chance that Bitcoin (BTC) will “fade to irrelevancy” with prices falling over 99.99% by 2035.

Meanwhile, the chatbot assigned a 20% chance of Ethereum (ETH) becoming irrelevant and approaching near-zero price levels by 2035.

However, ChatGPT was even less confident about Litecoin (LTC) and Dogecoin (DOGE), assigning probabilities of 35% and 45%, respectively, that the coins will fall to near zero.

The Coinbase executive concluded that ChatGPT is “generally” a “big fan” of Bitcoin but remains “more skeptical” when it comes to altcoins.

Prior to the cryptocurrency predictions, Grogan asked ChatGPT to assign odds to several political predictions involving Russian president Vladimir Putin, U.S. President Joe Biden and former U.S. president Donald Trump.

Other predictions were aimed towards the impact of AI on humanity, religion and the existence of aliens.

“Aliens have visited Earth and are being covered up by the government” — one wild prediction read — to which ChatGPT assigned a 10% probability.

The executive also shared a script of the prompt, which he then fed to ChatGPT to build the tables.

Grogan backed up the consistency of the results by claiming to have tested the prompt over 100 times:

“I ran this prompt 100 times on a wiped memory GPT 3.5 and 4 and GPT would return very consistent numbers; standard deviation was <10% in most cases, and directionally it was extremely consistent.”
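Grogan’s exact script wasn’t reproduced here, but the consistency claim is straightforward to frame in code. Below is a hypothetical Python sketch: `ask_for_probability` is a placeholder for a query to a fresh, wiped-memory ChatGPT session (simulated with noise here so the script runs), and the statistics mirror the standard-deviation check he describes.

```python
# Hypothetical sketch of the consistency check Grogan describes: ask the
# same question many times in fresh sessions, then measure the spread.
import random
import statistics

def ask_for_probability(question: str) -> float:
    """Stand-in for querying a fresh, wiped-memory ChatGPT session.
    Simulated with noise around 15% so the sketch is runnable."""
    return random.gauss(15.0, 1.0)

question = "Odds that Bitcoin fades to irrelevancy by 2035 (%)?"
samples = [ask_for_probability(question) for _ in range(100)]

mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
# Grogan reported a standard deviation under 10% of the mean in most cases
print(f"mean={mean:.1f}%  stdev={stdev:.1f}%  ({stdev / mean:.0%} of mean)")
```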

Related: Here’s how ChatGPT-4 spends $100 in crypto trading

It isn’t the first time the executive has experimented with crypto-related questions using ChatGPT.

On March 15, Grogan showed that GPT-4 — the latest iteration of ChatGPT — can spot security vulnerabilities in Ethereum smart contracts and provide an outline for exploiting faulty contracts.

Studies carried out by OpenAI — the team behind ChatGPT — have shown GPT-4 to pass high school tests and law school exams with scores ranking in the 90th percentile.

Meanwhile, Italy recently lifted its month-long ban on the AI tool, imposed after a series of privacy concerns were raised with Italian regulators.

Magazine: How to prevent AI from ‘annihilating humanity’ using blockchain


AI bots mingled at a bar and had a party when researchers gave them a town

25 AI “agents” were given a virtual town and were observed going to a bar for lunch, planning a party and expressing other human-like behavior.

A society of 25 artificial intelligence (AI) bots was observed waking up, cooking breakfast, heading to work, going to the bar for lunch with friends and even throwing a party by the six researchers who created a town for the bots.

The researchers from Google and Stanford University explained in an April 7 paper titled “Generative Agents: Interactive Simulacra of Human Behavior” that they built a virtual town populated with ChatGPT-trained “generative agents.”

The purpose of the study — which is yet to be peer-reviewed — was to create a small, interactive society of AI bots inspired by life-simulation games such as The Sims.

The agents could make a wide range of inferences about themselves, other agents and their town of “Smallville” by synthesizing new information, storing it in memory and then behaving in a way that reflects that knowledge.

A bird's-eye view of Smallville, which consists of houses, a park, a bar, a shopping center, a pharmacy and a college. Source: Arxiv.org

For example, the agents could turn off their kitchen stove when they see their breakfast is burning, coordinate plans and even engage in seemingly meaningful conversations with other agents.

The results led the researchers to conclude that the generative agents produce “believable” human behaviors:

“By fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.”

One example shared in the study explained that the AI agent “Isabella Rodriguez” invited nine other agents to a Valentine’s Day party at the town's cafe.

The details of the party were passed on to four others, including “Abigail,” who then expressed excitement about the upcoming event with Isabella.

A string of conversations that were carried out between the AI agents in relation to the upcoming Valentine's Day party. Source: Arxiv.org

In another example showing the “day in the life” of an AI agent, “John Lin” woke up at 7 am, brushed his teeth, had a shower, ate breakfast and checked the news at the dining table in his living room.

Before John’s son Eddy headed off to school, John asked what he would be working on for the day; Eddy responded, and John remarked on it before sharing the news with his “wife,” Mei.

A morning in the life of a generative agent, John Lin with his wife Mei and son Eddy. Source: Arxiv.org

However, not everything went right in the experiment.

While the memory of each AI bot grew with each passing interaction, sometimes the most relevant information wasn’t retrieved, and as a result, “some agents chose less typical locations for their actions.”
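To make that retrieval failure concrete, here is a toy Python sketch of the idea the paper describes: stored observations are scored by a mix of recency, importance and relevance, and the top-scoring ones drive behavior. The scoring below is illustrative only; the paper’s implementation uses an LLM to rate importance and embeddings for relevance.

```python
# Toy sketch of generative-agent memory retrieval: score each stored
# observation by recency, importance and relevance, then act on the top
# results. Illustrative only; not the paper's exact implementation.
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    time: int          # simulation step when the event was observed
    importance: float  # 0..1, how significant the event was

def relevance(memory: Memory, query: str) -> float:
    # crude word-overlap stand-in for the paper's embedding similarity
    m, q = set(memory.text.lower().split()), set(query.lower().split())
    return len(m & q) / max(len(q), 1)

def retrieve(memories: list[Memory], query: str, now: int, k: int = 2):
    def score(mem: Memory) -> float:
        recency = 1.0 / (1 + now - mem.time)  # newer memories score higher
        return recency + mem.importance + relevance(mem, query)
    return sorted(memories, key=score, reverse=True)[:k]

memories = [
    Memory("Isabella is planning a Valentine's Day party at the cafe", 5, 0.8),
    Memory("the kitchen stove was left on this morning", 9, 0.6),
    Memory("bought groceries at the shopping center", 2, 0.2),
]
for mem in retrieve(memories, "what is happening at the cafe", now=10):
    print(mem.text)
```

When the weights or the relevance signal misfire, a more appropriate memory can lose out to a fresher or flashier one, which is exactly how an agent ends up at a “less typical location.”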

Related: Elon Musk and tech execs call for pause on AI development

For example, when agents were deciding where to have lunch, many initially chose the town cafe. However, the researchers said:

"As some agents learned about a nearby bar, they opted to go there instead for lunch, even though the bar was intended to be a get-together location for later in the day unless the town had spontaneously developed an afternoon drinking habit."

In another example, some AI agents walked into shops in Smallville that were closed, while some college students walked in on others in the dorm bathroom because they thought it could be occupied by more than one person.

The researchers said they will soon expand on the “expressivity” and “performance” of the AI bots through the more advanced GPT-4, the latest iteration of ChatGPT, which has passed United States high school and law exams in the 90th percentile.

Magazine: NFT Creator, Emily Xie: Creating ‘organic’ generative art from robotic algorithms


‘Biased, deceptive’: Center for AI accuses ChatGPT creator of violating trade laws

The group believes GPT-4 violates Section 5 of the Federal Trade Commission Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.”

The Center for Artificial Intelligence and Digital Policy (CAIDP) has filed a complaint with the United States Federal Trade Commission (FTC) in an attempt to halt the release of powerful AI systems to consumers.

The complaint centered around OpenAI’s recently released large language model, GPT-4, which the CAIDP describes as “biased, deceptive, and a risk to privacy and public safety” in its March 30 complaint.

CAIDP, an independent nonprofit research organization, argued that the commercial release of GPT-4 violates Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.”

To back its case, the AI ethics organization pointed to contents in the GPT-4 System Card, which state:

“We found that the model has the potential to reinforce and reproduce specific biases and worldviews, including harmful stereotypical and demeaning associations for certain marginalized groups.”

In the same document, it stated: “AI systems will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement.”

Complaint filed by the Center for AI and Digital Policy against OpenAI. Source: CAIDP

CAIDP added that OpenAI released GPT-4 to the public for commercial use with full knowledge of these risks and that no independent assessment of GPT-4 was undertaken prior to its release.

As a result, the CAIDP wants the FTC to conduct an investigation into the products of OpenAI and other operators of powerful AI systems:

“It is time for the FTC to act [...] CAIDP urges the FTC to open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.”

While the first version of ChatGPT was released in November, the latest model, GPT-4, is considered to be ten times more intelligent. Upon its release on March 14, a study found that GPT-4 was able to pass the most rigorous U.S. high school and law exams within the top 90th percentile.

It can also detect smart contract vulnerabilities on Ethereum, among other things.

The complaint comes as Elon Musk, Apple’s Steve Wozniak, and a host of AI experts signed a petition to “pause” development on AI systems more powerful than GPT-4. 

CAIDP president Marc Rotenberg was among the other 2,600 signers of the petition, which was introduced by the Future of Life Institute on March 22.

Related: Here’s how ChatGPT-4 spends $100 in crypto trading

The authors argued that “Advanced AI could represent a profound change in the history of life on Earth,” for better or for worse.

The United Nations Educational, Scientific and Cultural Organization (UNESCO) has also called on states to implement the UN’s “Recommendation on the Ethics of AI” framework.

In other news, a former AI researcher for Google recently alleged that Google’s AI chatbot, "Bard," has been trained using ChatGPT’s responses.

While the researcher has resigned over the incident, Google executives have denied the allegations put forth by their former colleague.

Magazine: How to prevent AI from ‘annihilating humanity’ using blockchain


Elon Musk and tech execs call for pause on AI development

The authors of the letter say that advanced artificial intelligence could cause a profound change in the history of life on Earth, for better or worse.

More than 2,600 tech leaders and researchers have signed an open letter urging a temporary pause on further artificial intelligence (AI) development, fearing “profound risks to society and humanity.”

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and a host of AI CEOs, CTOs and researchers were among the signatories of the letter, which was published by the United States think tank Future of Life Institute (FOLI) on March 22.

The institute called on all AI companies to “immediately pause” training AI systems that are more powerful than GPT-4 for at least six months, sharing concerns that “human-competitive intelligence can pose profound risks to society and humanity,” among other things.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening,” the institute wrote.

GPT-4 is the latest iteration of OpenAI’s artificial intelligence-powered chatbot, which was released on March 14. To date, it has passed some of the most rigorous U.S. high school and law exams within the 90th percentile. It is understood to be 10 times more advanced than the original version of ChatGPT.

There is an “out-of-control race” between AI firms to develop more powerful AI, which “no one — not even their creators — can understand, predict, or reliably control,” FOLI claimed.

Among the top concerns were whether machines could flood information channels, potentially with “propaganda and untruth,” and whether machines would “automate away” all employment opportunities.

FOLI took these concerns one step further, suggesting that the entrepreneurial efforts of these AI companies may lead to an existential threat:

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

“Such decisions must not be delegated to unelected tech leaders,” the letter added.

The institute also agreed with a recent statement from OpenAI founder Sam Altman that an independent review should be required before training future AI systems.

Altman in his Feb. 24 blog post highlighted the need to prepare for artificial general intelligence (AGI) and artificial superintelligence (ASI) robots.

Not all AI pundits have rushed to sign the petition, though. Ben Goertzel, the CEO of SingularityNET, explained in a March 29 Twitter response to Gary Marcus, the author of Rebooting.AI, that large language models (LLMs) won’t become AGIs, of which there have been few developments to date.

Instead, he said research and development should be slowed down for things like bioweapons and nukes.

In addition to large language models like ChatGPT, AI-powered deepfake technology has been used to create convincing image, audio and video hoaxes. The technology has also been used to create AI-generated artwork, with some concerns raised about whether it could violate copyright laws in certain cases.

Related: ChatGPT can now access the internet with new OpenAI plugins

Galaxy Digital CEO Mike Novogratz recently told investors he was shocked at the amount of regulatory attention given to crypto while little has been directed toward artificial intelligence.

“When I think about AI, it shocks me that we’re talking so much about crypto regulation and nothing about AI regulation. I mean, I think the government’s got it completely upside-down,” he opined during a shareholders call on March 28.

FOLI argued that, should a pause on AI development not be enacted quickly, governments should step in and institute a moratorium.

“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it wrote.

Magazine: How to prevent AI from ‘annihilating humanity’ using blockchain


OpenAI’s GPT-4 Launch Sparks Surge in AI-Centric Crypto Assets

Following OpenAI’s release of GPT-4, a deep learning and artificial intelligence product, crypto assets focused on AI have spiked in value. The AGIX token of the SingularityNET project has risen 25.63% in the last 24 hours. Over the last seven days, four out of the top five AI-centric digital currencies have seen double-digit gains against […]


ChatGPT v4 aces the bar, SATs and can identify exploits in ETH contracts

GPT-4 completed many of the tests within the top 10% of the cohort, while the original version of ChatGPT often finished up in the bottom 10%.

GPT-4, the latest version of the artificial intelligence chatbot ChatGPT, can pass high school tests and law school exams with scores ranking in the 90th percentile and has new processing capabilities that were not possible with the prior version.

The figures from GPT-4’s test scores were shared on March 14 by creator OpenAI, revealing it can also convert image, audio and video inputs to text in addition to handling “much more nuanced instructions” more creatively and reliably. 

“It passes a simulated bar exam with a score around the top 10% of test takers,” OpenAI added. “In contrast, GPT-3.5’s score was around the bottom 10%.”

The figures show that GPT-4 scored 163 on the LSAT, the test students in the United States must take to be admitted into law school, placing it in the 88th percentile.

Exam results of GPT-4 and GPT-3.5 on a range of recent U.S. exams. Source: OpenAI

GPT-4’s score would put it in a good position to be admitted into a top 20 law school and is only a few marks short of the reported scores needed for acceptance to prestigious schools such as Harvard, Stanford, Princeton or Yale.

The prior version of ChatGPT only scored 149 on the LSAT, putting it in the bottom 40%.

GPT-4 also scored 298 out of 400 on the Uniform Bar Exam — a test taken by recently graduated law students that permits them to practice as a lawyer in any U.S. jurisdiction.

UBE scores needed to be admitted to practice law in each U.S. jurisdiction. Source: National Conference of Bar Examiners

The old version of ChatGPT struggled in this test, finishing in the bottom 10% with a score of 213 out of 400.

As for the SAT Evidence-Based Reading & Writing and SAT Math exams taken by U.S. high school students to measure their college readiness, GPT-4 scored in the 93rd and 89th percentile, respectively.

GPT-4 excelled in the “hard” sciences too, posting well above average percentile scores in AP Biology (85-100%), Chemistry (71-88%) and Physics 2 (66-84%).


However, its AP Calculus score was fairly average, ranking in the 43rd to 59th percentile.

Another area where GPT-4 was lacking was in English literature exams, posting scores in the 8th to 44th percentile across two separate tests.

OpenAI said GPT-4 and GPT-3.5 took these tests from the 2022-2023 practice exams, and that “no specific training” was taken by the language processing tools:

“We did no specific training for these exams. A minority of the problems in the exams were seen by the model during training, but we believe the results to be representative.”

The results prompted fear in the Twitter community too.

Related: How will ChatGPT affect the Web3 space? Industry answers

Nick Almond, the founder of FactoryDAO, told his 14,300 Twitter followers on March 14 that GPT-4 is going to “scare people” and it will “collapse” the global education system.

Former Coinbase director Conor Grogan said he inserted a live Ethereum smart contract into GPT-4, and the chatbot instantly pointed to several “security vulnerabilities” and outlined how the code might be exploited.

Earlier smart contract audits using ChatGPT found that its first version was also capable of spotting code bugs to a reasonable degree.
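Grogan’s exact prompt wasn’t published, so the sketch below is a hypothetical reconstruction of the experiment using OpenAI’s current Python SDK: the Vault contract is a deliberately unsafe toy example, and the prompt and model name are assumptions rather than his actual setup.

```python
# Hypothetical sketch of an LLM smart-contract audit in the spirit of
# Grogan's experiment. The Vault contract is a deliberately unsafe toy
# example: it sends Ether before updating balances (a reentrancy risk)
# and never checks the caller's balance. Requires OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CONTRACT = """
pragma solidity ^0.8.0;
contract Vault {
    mapping(address => uint256) public balances;
    function deposit() external payable { balances[msg.sender] += msg.value; }
    function withdraw(uint256 amount) external {
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok);
        balances[msg.sender] -= amount;
    }
}
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a smart contract security auditor."},
        {"role": "user", "content": "List the security vulnerabilities:\n" + CONTRACT},
    ],
)
print(response.choices[0].message.content)
```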

Rowan Cheung, the founder of the AI newsletter The Rundown, shared a video of GPT-4 transcribing a hand-drawn mock-up of a website on a piece of paper into working code.
