
Google’s Gemini demo is now getting accused of being ‘fake’

Onlookers praised Google's Gemini tech demo upon its release last week, but the tech firm admits parts of it were edited for "brevity."

A "hands-on" tech demo of Google’s new artificial intelligence model Gemini has gone from being the talk of the town to being accused by critics of being “basically entirely fake.”

The six-minute video, which has garnered 2.1 million views on YouTube since its release on Dec. 7, shows Gemini seamlessly interacting with a human operator in seemingly real time, analyzing a duck drawing, responding to hand gestures, and inventing a game called "Guess the Country" with just an image prompt of a world map.

However, Oriol Vinyals, a Google DeepMind executive, has since clarified that while the user prompts and outputs in the video are real, the footage was "shortened for brevity." In reality, Gemini's interactions were text-based, not voiced, and took much longer than represented in the video.


This AI chatbot is either an exploiter’s dream or their nightmare

The crypto community has come across an AI-powered chatbot that can be used to audit smart contracts and expose vulnerabilities.

The online crypto community has discovered a new artificial intelligence (AI)-powered chatbot that can either be used to warn developers of smart contract vulnerabilities or teach hackers how to exploit them.

ChatGPT, a chatbot tool built by AI research company OpenAI, was released on Nov. 30 and was designed to interact “in a conversational way” with the ability to answer follow-up questions and even admit mistakes, according to the company.

However, some Twitter users have come to realize that the bot could potentially be used for both good and evil, as it can be prompted to reveal loopholes in smart contracts.

Stephen Tong, co-founder of smart contract auditing firm Zellic, asked ChatGPT to help find an exploit, presenting a piece of smart contract code.

The bot responded by noting the contract had a reentrancy vulnerability where an exploiter could repeatedly withdraw the funds from the contract and provided an example of how to fix the issue.

A similar type of exploit was used in May by the attacker of the decentralized finance (DeFi) platform Fei Protocol, who made off with $80 million.
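The reentrancy pattern described above can be sketched in plain Python. This is a hypothetical toy model, not the contract code shown to ChatGPT: the vulnerable vault makes its external call before zeroing the caller's balance, so a malicious receiver can call back into `withdraw()` and be paid for the same balance repeatedly.

```python
# Hypothetical sketch (not from the article): a toy Python model of a
# reentrancy bug. The vulnerable vault pays out BEFORE clearing the
# caller's balance, so a re-entrant receiver is paid multiple times.

class VulnerableVault:
    def __init__(self, funds):
        self.balances = {}
        self.pool = funds  # funds belonging to other depositors

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.pool += amount

    def withdraw(self, who, on_receive):
        amount = self.balances.get(who, 0)
        if amount > 0 and self.pool >= amount:
            self.pool -= amount
            on_receive(amount)      # external call first (interaction)...
            self.balances[who] = 0  # ...balance cleared too late (effect)

class FixedVault(VulnerableVault):
    """Checks-effects-interactions: zero the balance before paying out."""
    def withdraw(self, who, on_receive):
        amount = self.balances.get(who, 0)
        if amount > 0 and self.pool >= amount:
            self.balances[who] = 0  # effect first: re-entrant calls see 0
            self.pool -= amount
            on_receive(amount)

class Attacker:
    """Receiver callback that re-enters withdraw() three extra times."""
    def __init__(self, vault):
        self.vault = vault
        self.stolen = 0
        self.depth = 0

    def receive(self, amount):
        self.stolen += amount
        if self.depth < 3:
            self.depth += 1
            self.vault.withdraw("attacker", self.receive)

vault = VulnerableVault(funds=100)
vault.deposit("attacker", 10)
atk = Attacker(vault)
vault.withdraw("attacker", atk.receive)
print(atk.stolen)  # 40: one legitimate payout of 10 plus three re-entrant ones
```

Against `FixedVault`, the same attacker collects only its own 10, because every re-entrant call sees an already-zeroed balance; this is the standard "checks-effects-interactions" fix.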

Others have shared results from the chatbot after prompting it with vulnerable smart contracts. Twitter user devtooligan shared a screenshot of ChatGPT providing the exact code needed to fix a Solidity smart contract vulnerability, commenting "we're all gonna be out of a job."

With the tool, Twitter users have already begun to jest that they can now start security auditing businesses simply by using the bot to test for weaknesses in smart contracts.

Cointelegraph tested ChatGPT and found it can also create an example smart contract from a prompt using simple language, generating code that could apparently provide staking rewards for Ethereum-based nonfungible tokens (NFTs).

ChatGPT’s example Solidity smart contract for NFT staking rewards from a simple prompt. Image: Cointelegraph.
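For a sense of what such a contract typically computes, the core of time-based staking rewards is linear accrual per staked token. The following is a hypothetical Python model of that accounting, not ChatGPT's actual Solidity output; the emission rate and method names are assumptions for illustration.

```python
# Hypothetical sketch (not ChatGPT's generated contract): the basic
# accounting of a time-based NFT staking-rewards scheme, modeled in Python.
# Each staked token accrues rewards linearly with the time it stays staked.

REWARD_PER_SECOND = 2  # assumed flat emission rate per staked NFT

class StakingPool:
    def __init__(self):
        self.stakes = {}  # token_id -> (owner, stake_timestamp)

    def stake(self, token_id, owner, now):
        self.stakes[token_id] = (owner, now)

    def pending_rewards(self, token_id, now):
        owner, since = self.stakes[token_id]
        return (now - since) * REWARD_PER_SECOND

    def unstake(self, token_id, now):
        reward = self.pending_rewards(token_id, now)
        del self.stakes[token_id]
        return reward

pool = StakingPool()
pool.stake(token_id=7, owner="alice", now=0)
print(pool.unstake(token_id=7, now=3600))  # 7200 reward tokens after one hour
```

A real Solidity contract would add ownership checks, token transfers, and overflow-safe timestamp math on top of this same accrual logic.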

Despite the chatbot's ability to test smart contract functionality, it wasn't solely designed for that purpose, and many on Twitter have suggested some of the smart contracts it generates have issues.

The tool also might provide different responses depending on the way it’s prompted, so it isn't perfect.

Related: Secret Network resolves network vulnerability following white hat disclosure

OpenAI CEO Sam Altman tweeted that the tool was “an early demo” and is “very much a research release.”

He opined that “language interfaces are going to be a big deal” and tools such as ChatGPT will “soon” have the ability to answer questions and give advice with later iterations completing tasks or even discovering new knowledge.
