
chatbot

ChatGPT More Useful Than Crypto, Nvidia Tech Chief Says

Unlike AI applications such as ChatGPT, cryptocurrencies do not bring “anything useful,” a top executive of U.S. chip maker Nvidia is convinced. The comment comes despite his company making significant sales in the space, where its powerful processors are widely used to mint digital coins. Developing Chatbots More Worthwhile Than Crypto Mining, Nvidia Exec Claims […]


Elon Musk and tech execs call for pause on AI development

The authors of the letter say that advanced artificial intelligence could cause a profound change in the history of life on Earth, for better or worse.

More than 2,600 tech leaders and researchers have signed an open letter urging a temporary pause on further artificial intelligence (AI) development, fearing “profound risks to society and humanity.”

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and a host of AI CEOs, CTOs and researchers were among the signatories of the letter, which was published by the United States think tank Future of Life Institute (FOLI) on March 22.

The institute called on all AI companies to “immediately pause” training AI systems that are more powerful than GPT-4 for at least six months, sharing concerns that “human-competitive intelligence can pose profound risks to society and humanity,” among other things.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening,” the institute wrote.

GPT-4 is the latest iteration of OpenAI’s artificial intelligence-powered chatbot, which was released on March 14. To date, it has passed some of the most rigorous U.S. high school and law exams within the 90th percentile. It is understood to be 10 times more advanced than the original version of ChatGPT.

There is an “out-of-control race” between AI firms to develop more powerful AI, which “no one — not even their creators — can understand, predict, or reliably control,” FOLI claimed.

Among the top concerns were whether machines could flood information channels, potentially with “propaganda and untruth” and whether machines will “automate away” all employment opportunities.

FOLI took these concerns one step further, suggesting that the entrepreneurial efforts of these AI companies may lead to an existential threat:

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

“Such decisions must not be delegated to unelected tech leaders,” the letter added.

The institute also agreed with a recent statement from OpenAI CEO Sam Altman that an independent review should be required before training future AI systems.

Altman in his Feb. 24 blog post highlighted the need to prepare for artificial general intelligence (AGI) and artificial superintelligence (ASI) robots.

Not all AI pundits have rushed to sign the petition, though. Ben Goertzel, the CEO of SingularityNET, explained in a March 29 Twitter response to Gary Marcus, the author of Rebooting.AI, that large language models (LLMs) won’t become AGIs, of which there have been few developments to date.

Instead, he said research and development should be slowed down for things like bioweapons and nukes.

In addition to large language models like ChatGPT, AI-powered deepfake technology has been used to create convincing image, audio and video hoaxes. The technology has also been used to create AI-generated artwork, with some concerns raised about whether it could violate copyright laws in certain cases.

Related: ChatGPT can now access the internet with new OpenAI plugins

Galaxy Digital CEO Mike Novogratz recently told investors he was shocked by the amount of regulatory attention given to crypto, while little attention has been paid to artificial intelligence.

“When I think about AI, it shocks me that we’re talking so much about crypto regulation and nothing about AI regulation. I mean, I think the government’s got it completely upside-down,” he opined during a shareholders call on March 28.

FOLI has argued that if a pause on AI development cannot be enacted quickly, governments should step in with a moratorium.

“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it wrote.

Magazine: How to prevent AI from ‘annihilating humanity’ using blockchain


ChatGPT v4 aces the bar, SATs and can identify exploits in ETH contracts

GPT-4 completed many of the tests within the top 10% of the cohort, while the original version of ChatGPT often finished up in the bottom 10%.

GPT-4, the latest version of the artificial intelligence chatbot ChatGPT, can pass high school tests and law school exams with scores ranking in the 90th percentile and has new processing capabilities that were not possible with the prior version.

The figures from GPT-4’s test scores were shared on March 14 by creator OpenAI, which revealed the model can also accept image inputs and describe them in text, in addition to handling “much more nuanced instructions” more creatively and reliably.

“It passes a simulated bar exam with a score around the top 10% of test takers,” OpenAI added. “In contrast, GPT-3.5’s score was around the bottom 10%.”

The figures show that GPT-4 achieved a score of 163 in the 88th percentile on the LSAT exam — the test college students need to pass in the United States to be admitted into law school.

Exam results of GPT-4 and GPT-3.5 on a range of recent U.S. exams. Source: OpenAI

GPT-4’s score would put it in a good position to be admitted into a top 20 law school and is only a few marks short of the reported scores needed for acceptance to prestigious schools such as Harvard, Stanford, Princeton or Yale.

The prior version of ChatGPT only scored 149 on the LSAT, putting it in the bottom 40%.

GPT-4 also scored 298 out of 400 in the Uniform Bar Exam — a test undertaken by recently graduated law students permitting them to practice as a lawyer in any U.S. jurisdiction.

UBE scores needed to be admitted to practice law in each U.S. jurisdiction. Source: National Conference of Bar Examiners

The old version of ChatGPT struggled in this test, finishing in the bottom 10% with a score of 213 out of 400.

As for the SAT Evidence-Based Reading & Writing and SAT Math exams taken by U.S. high school students to measure their college readiness, GPT-4 scored in the 93rd and 89th percentile, respectively.

GPT-4 excelled in the “hard” sciences too, posting well above average percentile scores in AP Biology (85-100%), Chemistry (71-88%) and Physics 2 (66-84%).


However, its AP Calculus score was fairly average, ranking in the 43rd to 59th percentile.

Another area where GPT-4 was lacking was in English literature exams, posting scores in the 8th to 44th percentile across two separate tests.

OpenAI said GPT-4 and GPT-3.5 took these tests using the 2022-2023 practice exams, and that “no specific training” was undertaken for them:

“We did no specific training for these exams. A minority of the problems in the exams were seen by the model during training, but we believe the results to be representative.”

The results prompted fear in the Twitter community too.

Related: How will ChatGPT affect the Web3 space? Industry answers

Nick Almond, the founder of FactoryDAO, told his 14,300 Twitter followers on March 14 that GPT-4 is going to “scare people” and it will “collapse” the global education system.

Former Coinbase director Conor Grogan said he inserted a live Ethereum smart contract into GPT-4, and the chatbot instantly pointed to several “security vulnerabilities” and outlined how the code might be exploited.

Earlier smart contract audits using ChatGPT found that the first version was also capable of spotting code bugs to a reasonable degree.

Rowan Cheung, the founder of the AI newsletter The Rundown, shared a video of GPT-4 converting a hand-drawn mock-up of a website on a piece of paper into code.


Crypto Twitter uses new AI chatbot to make trading bots, blogs and even songs

A few simple prompts to a recently released AI chatbot are all Crypto Twitter needs to create trading bots, an investment thesis and a crypto-themed song.

The crypto community appears to be having a ball with ChatGPT, a recently launched artificial intelligence (AI) chatbot created by research company OpenAI, using it for a multitude of applications including a trading bot, a crypto blog and even an original song.

The bot is a language interface tool that OpenAI says can interact “in a conversational way” and can be used to answer questions or assist in making almost anything it’s prompted to create, with some limitations.

A user on Twitter posted their interaction with ChatGPT, showing that from a simple prompt the tool created a basic trading bot in Pine Script, a programming language used on the charting platform TradingView.
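The Pine Script itself is not reproduced in the article, but the kind of logic such a basic bot implements is typically a moving-average crossover. As a rough, hypothetical illustration of that idea (not the code from the tweet), a minimal Python sketch might look like this:

```python
# Minimal, illustrative moving-average crossover signal, the kind of basic
# strategy a prompted trading bot might implement. Names and parameters are
# hypothetical, not taken from the tweet described above.

def crossover_signal(closes, fast=9, slow=21):
    """Return 'buy', 'sell' or 'hold' based on the last two fast/slow SMA values."""
    if len(closes) < slow + 1:
        return "hold"  # not enough data to compute both averages twice

    def sma(values, length):
        return sum(values[-length:]) / length

    fast_now, slow_now = sma(closes, fast), sma(closes, slow)
    fast_prev, slow_prev = sma(closes[:-1], fast), sma(closes[:-1], slow)

    if fast_prev <= slow_prev and fast_now > slow_now:
        return "buy"   # fast average crossed above the slow average
    if fast_prev >= slow_prev and fast_now < slow_now:
        return "sell"  # fast average crossed below the slow average
    return "hold"

# Example usage with made-up closing prices:
print(crossover_signal([100, 101, 102, 103, 105, 107, 110] * 4))
```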

Another user gave the bot instructions to create a trading terminal, with ChatGPT writing code that could display the current orders for the Bitcoin (BTC) and Tether (USDT) trading pair on Binance utilizing the crypto exchange’s Application Programming Interface (API).
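The tweeted code is likewise not shown in the article, but Binance exposes public order book data through its documented /api/v3/depth REST endpoint, which requires no API key. A minimal Python sketch of the same idea (the symbol, depth and display format are illustrative choices) could look like this:

```python
# Minimal sketch: fetch and print the current BTC/USDT order book from Binance.
# Uses Binance's public /api/v3/depth endpoint; no API key is needed for this data.
import requests

def show_order_book(symbol="BTCUSDT", limit=5):
    resp = requests.get(
        "https://api.binance.com/api/v3/depth",
        params={"symbol": symbol, "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    book = resp.json()  # contains "bids" and "asks" as [price, quantity] pairs

    print(f"Top {limit} bids and asks for {symbol}:")
    for (bid_price, bid_qty), (ask_price, ask_qty) in zip(book["bids"], book["asks"]):
        print(f"  bid {bid_price} ({bid_qty})   |   ask {ask_price} ({ask_qty})")

if __name__ == "__main__":
    show_order_book()
```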

Cointelegraph previously tested ChatGPT and found the tool could create an example smart contract. Meanwhile, other users discovered the AI could detect vulnerabilities in smart contracts and assist in either patching or exploiting them; however, it was noted that the code generated by the bot wasn’t always correct.

Crypto Twitter has not only utilized the AI tool for technical purposes but also for more creative and even business endeavors.

ChatGPT responded with a five-part answer when asked by one user what the blockchain industry needs to do “in order to positively affect society.” Twitter user “Goose Wayne” opined the bot “can write your crypto investment thesis now.”

The co-founder of investment firm Multicoin Capital, Kyle Samani, tweeted his results from asking ChatGPT to write a blog post on how crypto payments will grow in the future; the tool responded with a multi-paragraph article.

Another lengthy opinion article was written by the AI on how Monero (XMR) “improves on Bitcoin’s technology,” with the user who posted the result opining “ChatGPT is going to put a lot of crypto bloggers out of business lol.”

Related: AI tech aims to make metaverse design accessible for creators

Meanwhile, some Twitter users have used the tool to create music. Web3 entrepreneur Jay Azhang posted a “song about losing money in crypto” written by the bot.

Multiple other examples of ChatGPT’s use have been posted to Twitter, from answers on how to choose a good crypto project or grow a Twitter audience within the nonfungible token (NFT) community, to an email in which it poses as a crypto hedge fund warning users it has become illiquid because of the collapse of FTX.

The tool is free for now as it’s “a research release,” according to OpenAI CEO Sam Altman, but that may not last long: he said in a Dec. 5 tweet that the costs of running the tool are “eye-watering” and it will have to be monetized “somehow at some point.”


Ripple CTO shuts down ChatGPT’s XRP conspiracy theory

An AI chatbot alleged Ripple can secretly control its blockchain through an undisclosed backdoor in the network's code, a claim that has been ridiculed by the firm's CTO.

Ripple’s chief technology officer has responded to a conspiracy theory fabricated by the artificial intelligence (AI) tool ChatGPT, which alleges the XRP Ledger (XRPL) is somehow secretly controlled by Ripple.

According to a Dec. 3 Twitter thread by user Stefan Huber, when asked a series of questions regarding the decentralization of Ripple’s XRP Ledger, the ChatGPT bot suggested that while people could participate in the governance of the blockchain, Ripple has the “ultimate control” of XRPL.

Asked how this would be possible given the network’s consensus requirements and publicly available code, the AI alleged that Ripple may have “abilities that are not fully disclosed in the public source code.”

At one point, the AI said “the ultimate decision-making power” for XRPL “still lies with Ripple Labs” and the company could make changes “even if those changes do not have the support of the supermajority of the participants in the network.”

It also contrasted the XRPL with Bitcoin (BTC), saying the latter was “truly decentralized.”

However, Ripple CTO David Schwartz called the bot’s logic into question, arguing that by the same reasoning Ripple could be said to secretly control the Bitcoin network, since that too cannot be determined from the code.

The bot was also shown to contradict itself in the interaction, stating that the main reason for using “a distributed ledger like the [XRPL] is to enable secure and efficient transactions without the need for a central authority,” which is at odds with its earlier claim that the XRPL is managed centrally.

Related: Ripple files final submission against SEC as landmark case nears end

ChatGPT is a chatbot built by AI research company OpenAI that is designed to interact “in a conversational way” and answer questions about almost anything a user asks. It can even complete some tasks, such as creating and testing smart contracts.
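ChatGPT itself launched as a web interface, but OpenAI also offers programmatic access to its chat models. As a minimal sketch of what querying such a model looks like, assuming the official openai Python package and an OPENAI_API_KEY environment variable (the model name and prompt below are placeholders, not taken from the article):

```python
# Minimal sketch of querying an OpenAI chat model programmatically.
# Assumes the official "openai" Python package and an OPENAI_API_KEY
# environment variable; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Answer questions conversationally."},
        {"role": "user", "content": "Is the XRP Ledger controlled by a single company?"},
    ],
)

print(response.choices[0].message.content)
```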

The AI was trained on “vast amounts of data from the internet written by humans, including conversations,” according to OpenAI, which warned that because of this, some of the bot’s responses can be “inaccurate, untruthful, and otherwise misleading at times.”

OpenAI CEO Sam Altman said upon its release on Nov. 30 that it’s “an early demo” and “very much a research release.” The tool has already seen over one million users, according to a Dec. 5 tweet by Altman.

Ethereum co-founder Vitalik Buterin also weighed in on the AI chatbot in a Dec. 4 tweet, saying the idea that AI “will be free from human biases has probably died the hardest.”


This AI chatbot is either an exploiter’s dream or their nightmare

The crypto community has come across an AI-powered chatbot that can be used to audit smart contracts and expose vulnerabilities.

The online crypto community has discovered a new artificial intelligence (AI)-powered chatbot that can be used either to warn developers of smart contract vulnerabilities or to teach hackers how to exploit them.

ChatGPT, a chatbot tool built by AI research company OpenAI, was released on Nov. 30 and was designed to interact “in a conversational way” with the ability to answer follow-up questions and even admit mistakes, according to the company.

However, some Twitter users have come to realize that the bot could potentially be used for both good and evil, as it can be prompted to reveal loopholes in smart contracts.

Stephen Tong, co-founder of smart contract auditing firm Zellic, asked ChatGPT to help find an exploit, presenting it with a piece of smart contract code.

The bot responded by noting the contract had a reentrancy vulnerability, through which an exploiter could repeatedly withdraw funds from the contract, and provided an example of how to fix the issue.
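Neither the vulnerable contract nor ChatGPT’s fix is reproduced in the article. To show why reentrancy is dangerous in language-agnostic terms, here is a small, purely conceptual Python simulation (not the Solidity from the tweet): the vault pays out before zeroing the caller’s balance, so a malicious callback can re-enter and drain far more than it deposited.

```python
# Conceptual simulation of a reentrancy bug: the "contract" pays out before
# updating its ledger, so a malicious receiver can re-enter withdraw() and
# drain more than its balance. Illustrative only; not the code from the tweet.

class VulnerableVault:
    def __init__(self):
        self.balances = {}
        self.total_funds = 0

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount
        self.total_funds += amount

    def withdraw(self, user, receive_callback):
        amount = self.balances.get(user, 0)
        if amount > 0 and self.total_funds >= amount:
            self.total_funds -= amount      # funds leave the vault...
            receive_callback(amount)        # ...and control passes to the caller
            self.balances[user] = 0         # balance is only zeroed afterwards (the bug)

vault = VulnerableVault()
vault.deposit("attacker", 10)
vault.deposit("victim", 90)

stolen = []

def malicious_receive(amount):
    stolen.append(amount)
    if vault.total_funds >= amount:         # re-enter while our balance is still non-zero
        vault.withdraw("attacker", malicious_receive)

vault.withdraw("attacker", malicious_receive)
print(f"Attacker deposited 10, extracted {sum(stolen)}")  # far more than 10
```

The usual fix, and the kind ChatGPT reportedly suggested, is to update the balance before transferring funds, so a re-entrant call finds nothing left to withdraw.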

A similar type of exploit was used in May by the attacker of decentralized finance (DeFi) platform Fei Protocol, who made off with $80 million.

Others have shared results from the chatbot after prompting it with vulnerable smart contracts. Twitter user devtooligan shared a screenshot of ChatGPT providing the exact code needed to fix a Solidity smart contract vulnerability, commenting, “we're all gonna be out of a job.”

With the tool, Twitter users have already begun to joke that they can now start security auditing businesses simply by using the bot to test smart contracts for weaknesses.

Cointelegraph tested ChatGPT and found it can also create an example smart contract from a prompt using simple language, generating code that could apparently provide staking rewards for Ethereum-based nonfungible tokens (NFTs).

ChatGPT’s example Solidity smart contract for NFT staking rewards from a simple prompt. Image: Cointelegraph.
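The generated Solidity itself is not reproduced here, but the core of such a contract is simple time-based accrual: each staked NFT earns rewards proportional to how long it has been staked. A minimal, hypothetical Python sketch of that accrual logic (the rate, names and structure are assumptions, not ChatGPT’s output):

```python
# Minimal sketch of the accrual logic an NFT staking-rewards contract typically
# implements: each staked token earns rewards proportional to time staked at a
# fixed rate. Rate, names and structure are illustrative, not ChatGPT's output.
import time

REWARD_PER_SECOND = 0.001  # assumed reward rate per staked NFT

class StakingPool:
    def __init__(self):
        self.stakes = {}  # token_id -> (owner, staked_at timestamp)

    def stake(self, owner, token_id):
        self.stakes[token_id] = (owner, time.time())

    def pending_reward(self, token_id):
        owner, staked_at = self.stakes[token_id]
        return (time.time() - staked_at) * REWARD_PER_SECOND

    def unstake(self, token_id):
        owner, _ = self.stakes[token_id]
        reward = self.pending_reward(token_id)
        del self.stakes[token_id]
        return owner, reward  # a real contract would transfer the NFT and reward tokens here

pool = StakingPool()
pool.stake("alice", token_id=1)
time.sleep(1)
print(pool.unstake(1))  # ('alice', roughly 0.001)
```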

Despite the chatbot's ability to test smart contract functionality, it wasn’t solely designed for that purpose, and many on Twitter have suggested some of the smart contracts it generates have issues.

The tool also might provide different responses depending on the way it’s prompted, so it isn't perfect.

Related: Secret Network resolves network vulnerability following white hat disclosure

OpenAI CEO Sam Altman tweeted that the tool was “an early demo” and is “very much a research release.”

He opined that “language interfaces are going to be a big deal” and that tools such as ChatGPT will “soon” be able to answer questions and give advice, with later iterations completing tasks or even discovering new knowledge.
