Google’s Gemini, OpenAI’s ChatGPT go head-to-head in Cointelegraph test

Comparisons of Google’s Gemini and OpenAI’s ChatGPT continue to flood internet social spaces, so we decided to put them to the test with questions of our own.

On Dec. 6, Google launched its latest artificial intelligence (AI) model, Gemini, which it claimed is the most advanced model currently available on the market, surpassing even OpenAI’s popular GPT-4.

This bold claim was treated like a challenge by community sleuths across the internet, who swiftly moved to examine the methods and benchmarks used by Google to assert Gemini’s supposed superiority and poke fun at the company’s marketing of the product.

David Gull, CEO of AI-powered wellness startup Vital, told Cointelegraph that each model, be it GPT-4, Llama 2 or now Gemini, has its own set of strengths and challenges.

Is Google’s Gemini really smarter than OpenAI’s GPT-4? Community sleuths find out

After Google launched its new high-performance AI model Gemini and claimed it to be far superior to OpenAI’s GPT-4, users on social media began to challenge those claims.

Google launched its latest artificial intelligence (AI) model Gemini on Dec. 6, announcing it as the most advanced AI model currently available on the market, surpassing OpenAI’s GPT-4. 

Gemini is multimodal, which means it was built to understand and combine different types of information. It comes in three versions (Ultra, Pro, Nano) to serve different use cases, and one area in which it appears to beat GPT-4 is its ability to perform advanced math and specialized coding.

On its debut, Google released multiple benchmark tests that compared Gemini with GPT-4. The Gemini Ultra version achieved “state-of-the-art performance” in 30 out of 32 academic benchmarks that were used in large language model (LLM) development.

Google’s ‘GPT-4 killer’ Gemini is out, here’s how you can try it

Google has deployed its newest weapon in the AI arms race: an artificial intelligence model it claims is smarter and more powerful than OpenAI’s GPT-4.

Tech giant Google has officially rolled out Gemini, its latest artificial intelligence model that it claims has surpassed OpenAI's GPT-4.

On Dec. 6, Google CEO Sundar Pichai and Google DeepMind CEO and co-founder Demis Hassabis announced the launch of Gemini in a company blog post.

The AI model has been optimized for different sizes and use cases (Ultra, Pro, Nano) and built to be multimodal to understand and combine different types of information.

DeepMind exec: AI assesses climate issues, falls short of full solution

Google DeepMind Climate Action Lead Sims Witherspoon suggested a strategy dubbed the “Understand, Optimize, Accelerate” framework, outlining three steps for tackling climate change with AI.

Amid efforts by climate scientists and advocates to address environmental challenges, Google DeepMind Climate Action Lead Sims Witherspoon sees potential in artificial intelligence (AI), emphasizing the importance of framing the solution through thoughtful questioning.

At the Wired Impact Conference in London, Witherspoon said she sees climate change as a scientific and technological challenge, expressing optimism about addressing it through artificial intelligence. Earlier this year, Google merged its Brain and DeepMind AI teams under a single banner, Google DeepMind.

Witherspoon suggested a strategy dubbed the “Understand, Optimize, Accelerate” framework, outlining three steps for tackling climate change with AI, which involve engaging with those affected, assessing AI's applicability, and deploying a solution for impactful change.

Witherspoon speaking at the Wired Impact Conference in London. Source: YouTube

Examining the path to deployment, Witherspoon observed that certain options become less viable due to existing regulatory conditions, infrastructure constraints, or other limitations and dependencies such as restricted data availability or suitable partners.

Witherspoon stressed the importance of a collaborative approach, highlighting that while individual expertise is valuable, cooperation is crucial and necessitates the combined contributions of academics, regulatory bodies, corporations, non-governmental organizations (NGOs), and impacted communities.

Witherspoon said that, in collaboration with the U.K.’s national weather service, the Met Office, in 2021, Google DeepMind leveraged its comprehensive radar data to analyze rainfall in the U.K.


Google sues scammers over creation of fake Bard AI chatbot

Google has filed a lawsuit against scammers offering a malicious version of its AI chatbot Bard that tricks users into downloading and installing malware on their devices.

Google has filed a lawsuit against three scammers for creating fake advertisements for updates to Google’s artificial intelligence (AI) chatbot Bard, among other things, which, when downloaded, installs malware.

The lawsuit was filed on Nov. 13 and names the defendants as “DOES 1-3,” as they remain anonymous. Google says that the scammers have used its trademarks specifically relating to its AI products, such as “Google, Google AI, and Bard,” to “lure unsuspecting victims into downloading malware onto their computers.”

It gave an example of deceptive social media pages and trademarked content that make it look like a Google product, with invitations to download free versions of Bard and other AI products.

Screenshot of fake “Google AI” social media page used by scammers. Source: Court documents (Google)

Google said that unsuspecting users unknowingly download the malware by following the links, which are designed to access and exploit users’ social media login credentials and primarily target businesses and advertisers. 

The tech giant asked the court for damages, an award of attorneys’ fees, permanent injunctive relief for injuries inflicted by the defendants, all profits obtained by the scammers, a comprehensive restraining order and anything else the court deems “just and equitable.”

Related: OpenAI promises to fund legal costs for ChatGPT users sued over copyright

The lawsuit comes as AI services, including chatbot services, have seen a significant increase in users worldwide. According to recent data, Google’s Bard bot gets 49.7 million unique visitors each month. 

OpenAI’s popular AI chatbot service, ChatGPT, has more than 100 million monthly users with nearly 1.5 billion monthly visitors to its website.

This upsurge in popularity and accessibility of AI services has also brought many lawsuits against the companies developing the technology. OpenAI, Google and Meta — the parent company of Facebook and Instagram — have all been caught up in legal battles in the past year.

In July, Google was brought into a class-action lawsuit. Eight individuals who filed on behalf of “millions of class members,” such as internet users and copyright holders, said that Google had violated their privacy and property rights. The suit came after Google updated its privacy policy to allow data scraping for AI training purposes.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change

How Google’s AI legal protections can change art and copyright protections

Amid myriad legal accusations surrounding its AI services, Google stands its ground, vowing to protect its users.

Google has been facing a wave of litigation recently as the implications of generative artificial intelligence (AI) on copyright and privacy rights become clearer.

Amid the ever-intensifying debate, Google has not only defended its AI training practices but also pledged to shield users of its generative AI products from accusations of copyright violations.

However, Google’s protective umbrella only spans seven specified products with generative AI attributes and conspicuously leaves out Google’s Bard search tool. The move, although a solace to some, opens a Pandora’s box of questions around accountability, the protection of creative rights and the burgeoning field of AI.

Moreover, the initiative is being perceived not as a mere reactive measure from Google but as a meticulously crafted strategy to indemnify the blossoming AI landscape.

AI’s legal cloud 

The surge of generative AI over the last couple of years has rekindled the age-old flame of copyright debates with a modern twist. The bone of contention currently centers on whether the data used to train AI models and the output generated by them violate proprietary intellectual property (IP) belonging to private entities.

The accusations against Google center on exactly this question and, if proven, could not only cost the company a great deal of money but also set a precedent that could throttle the growth of generative AI as a whole.

Google’s legal strategy, meticulously designed to instill confidence among its clientele, stands on two primary pillars: indemnification of its training data and of its generated output. To elaborate, Google has committed to bearing legal responsibility should the data employed to devise its AI models face allegations of IP violations.

Not only that, but the tech giant is also looking to protect users against claims that the text, images or other content generated by its AI services infringes on anyone else’s personal data or IP, a safeguard encapsulating a wide array of its services, including Google Docs, Slides and Cloud Vertex AI.

Google has argued that the utilization of publicly available information for training AI systems is not tantamount to stealing, invasion of privacy or copyright infringement.

However, this assertion is under severe scrutiny as a slew of lawsuits accuse Google of misusing personal and copyrighted information to feed its AI models. One of the proposed class-action lawsuits even alleges that Google has built its entire AI prowess on the back of secretly purloined data from millions of internet users.

Therefore, the legal battle seems to be more than just a confrontation between Google and the aggrieved parties; it underlines a much larger ideological conundrum, namely: “Who truly owns the data on the internet? And to what extent can this data be used to train AI models, especially when these models churn out commercially lucrative outputs?”

An artist’s perspective

The dynamic between generative AI and protecting intellectual property rights is a landscape that seems to be evolving rapidly. 

Nonfungible token artist Amitra Sethi told Cointelegraph that Google’s recent announcement is a significant and welcome development, adding:

“Google’s policy, which extends legal protection to users who may face copyright infringement claims due to AI-generated content, reflects a growing awareness of the potential challenges posed by AI in the creative field.”

However, Sethi believes that it is important to have a nuanced understanding of this policy. While it acts as a shield against unintentional infringement, it might not cover all possible scenarios. In her view, the protective efficacy of the policy could hinge on the unique circumstances of each case. 

When an AI-generated piece loosely mirrors an artist’s original work, Sethi believes the policy might offer some recourse. But in instances of “intentional plagiarism through AI,” the legal scenario could get murkier. Therefore, she believes that it is up to the artists themselves to remain proactive in ensuring the full protection of their creative output.

Recent: Game review: Immutable’s Guild of Guardians offers mobile dungeon adventures

Sethi said that she recently copyrighted her unique art genre, “SoundBYTE,” so as to highlight the importance of artists taking active measures to secure their work. “By registering my copyright, I’ve established a clear legal claim to my creative expressions, making it easier to assert my rights if they are ever challenged,” she added.

In the wake of such developments, the global artist community seems to be coming together to raise awareness and advocate for clearer laws and regulations governing AI-generated content​​.

Tools like Glaze and Nightshade have also emerged to protect artists’ creations. Glaze applies minor changes to artwork that, while practically imperceptible to the human eye, feed incorrect or bad data to AI art generators. Similarly, Nightshade lets artists add invisible changes to the pixels within their pieces, thereby “poisoning the data” for AI scrapers.

Examples of how “poisoned” artworks can produce an incorrect image from an AI query. Source: MIT
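At its core, this kind of protection amounts to shifting an image’s pixel values by amounts too small for a viewer to notice but large enough to disturb what a scraper-trained model learns. The Python sketch below illustrates only that general idea with random noise; it is not Glaze’s or Nightshade’s actual algorithm, which computes targeted perturbations against a model’s feature space rather than sampling noise:

```python
import numpy as np

def perturb_image(img: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Add a bounded random perturbation to an 8-bit RGB image.

    Each channel value shifts by at most +/- epsilon intensity levels
    (out of 255), a change far too small for the human eye to notice.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=img.shape)
    # Clip so values stay in the valid 0-255 range after the shift.
    return np.clip(img.astype(np.float64) + noise, 0, 255).astype(np.uint8)

# Toy 4x4 gray "artwork": the protected copy differs from the original
# by no more than epsilon intensity levels per pixel.
art = np.full((4, 4, 3), 128, dtype=np.uint8)
protected = perturb_image(art)
max_diff = int(np.abs(protected.astype(int) - art.astype(int)).max())
```

Real tools replace the random noise with an optimization step that maximizes the error a downstream model makes while respecting the same imperceptibility bound.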

Industry-wide implications 

The existing narrative is not limited to Google and its product suite. Other tech majors like Microsoft and Adobe have also made overtures to protect their clients against similar copyright claims.

Microsoft, for instance, has put forth a robust defense strategy to shield users of its generative AI tool, Copilot. Since its launch, the company has staunchly defended the legality of Copilot’s training data and its generated information, asserting that the system merely serves as a means for developers to write new code in a more efficient fashion​.

Adobe has incorporated guidelines within its AI tools to ensure users are not unwittingly embroiled in copyright disputes and is also offering AI services bundled with legal assurances against any external infringements.

Magazine: Ethereum restaking: Blockchain innovation or dangerous house of cards?

The court cases that will inevitably arise over AI will undoubtedly shape not only legal frameworks but also the ethical foundations upon which future AI systems operate.

Tomi Fyrqvist, co-founder and chief financial officer for decentralized social app Phaver, told Cointelegraph that in the coming years, it would not be surprising to see more lawsuits of this nature coming to the fore:

“There is always going to be someone suing someone. Most likely, there will be a lot of lawsuits that are opportunistic, but some will be legit.”

Google Cloud teams up with MultiversX amid blockchain firm’s focus on metaverse

MultiversX announced an array of new features on its xPortal super-app with tools to build next-gen metaverse features on the same day of the Google Cloud partnership.

Google Cloud has teamed up with blockchain infrastructure firm MultiversX (formerly Elrond) to boost its Web3 presence. Google Cloud has integrated MultiversX into its platform, which will, in turn, help Web3 projects and users derive valuable insights from powerful data analytics and artificial intelligence tools within the Google Cloud ecosystem.

MultiversX claims that the partnership between the two firms has the potential to immediately streamline the execution of large-scale, data-first blockchain projects. This should help developers easily access data about addresses, transacted amounts and smart contract interactions, along with expanded on-chain analytics, the company said.

On the other hand, Google Cloud’s involvement in the MultiversX network will enable ecosystem builders to utilize the advanced tools and services available on the platform to bring high performance and scalability to the non-blockchain components of their decentralized applications (DApps). Daniel Rood, head of Web3 EMEA at Google Cloud, added:

“There are exciting opportunities to enable Web3 developers to build and scale faster and as we explore new verticals within the space, our partnership with MultiversX will allow us to expand our strategy and reach further and solidify our position as one of the main innovation drivers in the blockchain world.” 

MultiversX has forged multiple partnerships with mainstream brands in the past to push Web3 use cases into the traditional world. The first European institutional marketplace for digital assets, ICI D|SERVICES, as well as Audi’s in-car virtual reality platform, holoride, have both chosen MultiversX as their platform of choice.

The blockchain infrastructure firm, which is focused on metaverse scalability, also announced a set of new scalable features for its decentralized digital asset wallet, the xPortal super-app. The updated features will let users easily handle money in both fiat and cryptocurrency. xPortal users will have access to peer-to-peer fiat payments, as well as European IBANs, SEPA and SWIFT transfers, by the beginning of 2024.

The platform also announced the launch of the xWorlds Developer Kit, which offers an array of unique tools that creators can use to build the next generation of augmented reality experiences by leveraging xPortal as a wallet and distribution hub. The kit also includes highly realistic AI-powered 3D avatars.

Google requests dismissal of AI data scraping class-action suit

Google argued in its motion to dismiss the claims that using publicly available information shared on the internet is not “stealing,” as claimed.

Big Tech player Google is seeking to dismiss a proposed class-action lawsuit that claims it’s violating the privacy and property rights of millions of internet users by scraping data to train its artificial intelligence models. 

Google filed the motion on Oct. 17 in a California district court, saying it’s necessary to use public data to train its AI chatbots, such as Bard. It argued the claims are based on the false premise that it is “stealing” information that is publicly shared on the internet.

“Using publicly available information to learn is not stealing. Nor is it an invasion of privacy, conversion, negligence, unfair competition, or copyright infringement.”

Google said such a lawsuit would “take a sledgehammer not just to Google’s services but to the very idea of generative AI.”

The suit was opened against Google in July by eight individuals claiming to represent “millions of class members” such as internet users and copyright holders.

They claim their privacy and property rights were violated under a change to Google’s privacy policy, made a week before the suit was filed, that allows data scraping for AI training purposes.

Related: Google updates service policies to comply with EU regulations

Google argued the complaint concerns “irrelevant conduct by third parties and doomsday predictions about AI.” 

It said the complaint failed to address any core issues, particularly how the plaintiffs have been harmed by Google’s use of their information.

This case is one of many that have been brought against tech giants that are developing and training AI systems. On Sept. 20, Meta refuted claims of copyright infringement during the training of its AI.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change

Google to protect users in AI copyright accusations

Google explicitly stated that only seven products fall under this legal protection, excluding Google’s Bard search tool.

Google has announced its commitment to protect users of generative artificial intelligence (AI) systems within its Google Cloud and Workspace platforms in cases where they face allegations of intellectual property infringement. This move aligns Google with other companies, such as Microsoft, Adobe and more, which have also made similar assurances.

In a recent blog post, Google made it clear that customers utilizing products integrated with generative AI capabilities will receive legal protection. This announcement addresses mounting concerns regarding the potential copyright issues associated with generative AI.

Google explicitly outlined seven products that fall under this legal protection. The products are Duet AI in Workspace, encompassing text generation in Google Docs and Gmail, as well as image generation in Google Slides and Google Meet; Duet AI in Google Cloud; Vertex AI Search; Vertex AI Conversation; Vertex AI Text Embedding API; Visual Captioning on Vertex AI; and Codey APIs. It’s worth noting that this list did not include Google’s Bard search tool.

According to Google:

“If you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.”

Google has unveiled a distinctive approach to intellectual property indemnification, described as a pioneering two-pronged strategy. Under this initiative, Google extends its protection to encompass both the training data and the outcomes generated from its foundational models.

Screenshot of Google’s announcement. Source: Google

This signifies that if legal action is taken against someone due to the use of Google’s training data that involves copyrighted material, Google will assume responsibility for addressing this legal challenge.

The company clarified that the indemnity related to training data is not a novel form of protection. However, Google acknowledged that its customers expressed a desire for clear and explicit confirmation that this protection extends to scenarios where the training data incorporates copyrighted material.

Related: Google Assistant will soon incorporate Bard AI chat service

Google will additionally protect users if they face legal action due to the results they obtain while utilizing its foundation models. This includes scenarios where users generate content resembling published works. The company emphasized that this safeguard is contingent on users not intentionally generating or using content to infringe upon the rights of others.

Other companies have issued similar statements. Microsoft declared its commitment to assume legal responsibility for enterprise users of its Copilot products. Adobe, on the other hand, affirmed its dedication to safeguarding enterprise customers from copyright, privacy and publicity rights claims when using Firefly.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change

EU mulls more restrictive regulations for large AI models: Report

Negotiators in the EU are reportedly considering additional restrictions for large AI models, such as OpenAI’s GPT-4, as a component of the forthcoming AI Act.

Representatives in the European Union are reportedly negotiating a plan for additional regulations on the largest artificial intelligence (AI) systems, according to a report from Bloomberg. 

The European Commission, European Parliament and the various EU member states are said to be in discussions regarding the potential effects of large language models (LLMs), including Meta’s Llama 2 and OpenAI’s GPT-4, and possible additional restrictions to be imposed on them as a part of the forthcoming AI Act.

Bloomberg reports that sources close to the matter said the goal is not to overburden new startups with too many regulations while keeping larger models in check.

According to the sources, the agreement reached by negotiators on the topic is still in the preliminary stages.

The AI Act and the newly proposed regulations for LLMs would take an approach similar to that of the EU’s Digital Services Act (DSA).

The DSA, recently implemented by EU lawmakers, requires platforms and websites to meet standards for protecting user data and scanning for illegal activity. However, the web’s largest platforms are subject to stricter controls.

Companies in this category, such as Alphabet Inc. and Meta, had until Aug. 28 to update their service practices to comply with the new EU standards.

Related: UNESCO and Netherlands design AI supervision project for the EU

The EU’s AI Act is poised to be one of the first sets of mandatory AI rules put in place by a Western government. China has already enacted its own AI regulations, which came into effect in August 2023.

Under the EU’s AI regulations, companies developing and deploying AI systems would need to perform risk assessments and label AI-generated content, and would be banned outright from using biometric surveillance, among other requirements.

However, the legislation has not yet been enacted, and member states retain the ability to disagree with any of the proposals set forth by parliament.

In China, more than 70 new AI models have reportedly been released since its AI laws took effect.

Magazine: The Truth Behind Cuba’s Bitcoin Revolution: An on-the-ground report
