Solana Co-Founder Anatoly Yakovenko Thinks Crypto Has a Shot at Disrupting the App Store Duopoly

Anatoly Yakovenko, co-founder of Solana Labs, believes that crypto has a shot at disrupting Google’s and Apple’s app store duopoly. He criticized the state of software distribution on mobile devices, lamenting that the two stores collect over 30% of all earnings just to display a top 10 list of the most downloaded apps. […]

Analyst Says Bottom Is In for Altcoin Markets, Predicts Crypto Rally Following Weak Economic Data

Google slashes price of Gemini AI model, opens up to developers

Google parent company Alphabet said it is slashing prices for the pro version of its AI model Gemini and plans to make its tools more accessible so developers can create their own versions.

Alphabet, the parent company of Google, announced on Dec. 13 that it plans to slash the cost of a version of its most advanced artificial intelligence (AI) model Gemini and make it more accessible to developers. 

According to reports, the company said the price of the Gemini pro model has been cut by 25-50% from its June level.

Gemini was introduced in three variations on Dec. 6, with its most sophisticated version able to reason and understand information at a higher level than other Google technology, as well as process video and audio.

Open-source AI can outperform private models like ChatGPT – ARK Invest research

In 2023, Yi 34B, Falcon 180B and Mixtral 8x7B emerged as some of the top open-source AI models, showcasing performance comparable to that of market leaders.

While generative artificial intelligence (AI) models backed by centralized cloud infrastructure — such as ChatGPT — currently lead on overall performance, new research shows that open-source competitors are catching up.

The current market leaders in generative AI, such as Google and OpenAI, took a centralized approach to building their infrastructure — effectively limiting public access to key information, including the data sources used to train their models.

This could change, the research team at Cathie Wood’s ARK Invest claims, suggesting that open-source AI models could outperform their centralized counterparts by 2024.

Google To Loosen Restrictions on Crypto Ads, Will Allow Promotion of ‘Cryptocurrency Coin Trusts’

Google has announced that it is updating its policy on cryptocurrency-related advertisements worldwide. In January 2024, the search giant will revise its crypto rules to address advertising for what it calls Cryptocurrency Coin Trusts. According to the announcement, US advertisers will be able […]

The post Google To Loosen Restrictions on Crypto Ads, Will Allow Promotion of ‘Cryptocurrency Coin Trusts’ appeared first on The Daily Hodl.

Google’s Gemini, OpenAI’s ChatGPT go head-to-head in Cointelegraph test

Comparisons of Google’s Gemini and OpenAI’s ChatGPT continue to flood internet social spaces, so we decided to put them to the test with questions of our own.

On Dec. 6, Google launched its latest artificial intelligence (AI) model, Gemini, which it claimed is the most advanced model currently available on the market — even better than OpenAI’s popular GPT-4.

This bold claim was treated like a challenge by community sleuths across the internet, who swiftly moved to examine the methods and benchmarks used by Google to assert Gemini’s supposed superiority and poke fun at the company’s marketing of the product.

David Gull, CEO of AI-powered wellness startup Vital, told Cointelegraph that each model, be it GPT-4, Llama 2 or now Gemini, has its own set of strengths and challenges.

Is Google’s Gemini really smarter than OpenAI’s GPT-4? Community sleuths find out

After Google launched its new high-performance AI model Gemini and claimed it to be far superior to OpenAI’s GPT-4, users on social media began to challenge those claims.

Google launched its latest artificial intelligence (AI) model Gemini on Dec. 6, announcing it as the most advanced AI model currently available on the market, surpassing OpenAI’s GPT-4. 

Gemini is multimodal, which means it was built to understand and combine different types of information. It comes in three versions (Ultra, Pro, Nano) to serve different use cases, and one area in which it appears to beat GPT-4 is its ability to perform advanced math and specialized coding.

On its debut, Google released multiple benchmark tests that compared Gemini with GPT-4. The Gemini Ultra version achieved “state-of-the-art performance” in 30 out of 32 academic benchmarks that were used in large language model (LLM) development.

Google’s ‘GPT-4 killer’ Gemini is out, here’s how you can try it

Google has deployed its newest weapon in the AI arms race, a new artificial intelligence model that it claims is smarter and more powerful than OpenAI’s GPT-4.

Tech giant Google has officially rolled out Gemini, its latest artificial intelligence model that it claims has surpassed OpenAI's GPT-4.

On Dec. 6, Google CEO Sundar Pichai and Google DeepMind CEO and co-founder Demis Hassabis announced the launch of Gemini in a company blog post.

The AI model has been optimized for different sizes and use cases (Ultra, Pro, Nano) and built to be multimodal to understand and combine different types of information.

DeepMind exec: AI assesses climate issues, falls short of full solution

Google DeepMind Climate Action Lead Sims Witherspoon suggested a strategy dubbed the “Understand, Optimize, Accelerate” framework, outlining three steps for tackling climate change with AI.

Amid efforts by climate scientists and advocates to address environmental challenges, Google DeepMind Climate Action Lead Sims Witherspoon sees potential in artificial intelligence (AI), emphasizing the importance of framing the solution through thoughtful questioning.

At the Wired Impact Conference in London, Witherspoon said she sees climate change as a scientific and technological challenge, expressing optimism about addressing it through artificial intelligence. Earlier this year, Google merged its Brain and DeepMind AI teams under a single banner called Google DeepMind.

Witherspoon suggested a strategy dubbed the “Understand, Optimize, Accelerate” framework, outlining three steps for tackling climate change with AI, which involve engaging with those affected, assessing AI's applicability, and deploying a solution for impactful change.

DeepMind Climate Action Lead Sims Witherspoon at the Wired Impact Conference in London. Source: YouTube

Examining the path to deployment, Witherspoon observed that certain options become less viable due to existing regulatory conditions, infrastructure constraints, or other limitations and dependencies such as restricted data availability or suitable partners.

Witherspoon stressed the importance of a collaborative approach, highlighting that while individual expertise is valuable, cooperation is crucial and necessitates the combined contributions of academics, regulatory bodies, corporations, non-governmental organizations (NGOs), and impacted communities.

Witherspoon said that, in a 2021 collaboration with the Met Office, the U.K.’s national weather service, Google DeepMind leveraged the agency’s comprehensive radar data to analyze rainfall in the U.K.

Google sues scammers over creation of fake Bard AI chatbot

Google has filed a lawsuit against scammers offering a malicious version of its AI chatbot Bard that tricks users into downloading and installing malware on their devices.

Google has filed a lawsuit against three scammers for creating fake advertisements for updates to Google’s artificial intelligence (AI) chatbot Bard, among other things, which, when downloaded, installs malware.

The lawsuit was filed on Nov. 13 and names the defendants as “DOES 1-3,” as they remain anonymous. Google says that the scammers have used its trademarks specifically relating to its AI products, such as “Google, Google AI, and Bard,” to “lure unsuspecting victims into downloading malware onto their computers.”

It gave an example of deceptive social media pages and trademarked content that make them look like Google products, with invitations to download free versions of Bard and other AI products.

Screenshot of fake “Google AI” social media page used by scammers. Source: Court documents (Google)

Google said that unsuspecting users unknowingly download the malware by following the links, which are designed to access and exploit users’ social media login credentials and primarily target businesses and advertisers. 

The tech giant asked the court for damages, an award of attorneys’ fees, permanent injunctive relief for injuries inflicted by the defendants, all profits obtained by the scammers, a comprehensive restraining order and anything else the court deems “just and equitable.”

Related: OpenAI promises to fund legal costs for ChatGPT users sued over copyright

The lawsuit comes as AI services, including chatbot services, have seen a significant increase in users worldwide. According to recent data, Google’s Bard bot gets 49.7 million unique visitors each month. 

OpenAI’s popular AI chatbot service, ChatGPT, has more than 100 million monthly users with nearly 1.5 billion monthly visitors to its website.

This upsurge in popularity and accessibility of AI services has also brought many lawsuits against the companies developing the technology. OpenAI, Google and Meta — the parent company of Facebook and Instagram — have all been caught up in legal battles in the past year.

In July, Google was hit with a class-action lawsuit. Eight individuals, filing on behalf of “millions of class members,” such as internet users and copyright holders, said Google had violated their privacy and property rights. The suit came after Google updated its privacy policy to allow data scraping for AI training purposes.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change

How Google’s AI legal protections can change art and copyright protections

Amid myriad legal accusations surrounding its AI services, Google stands its ground, vowing to protect its users.

Google has been facing a wave of litigation recently as the implications of generative artificial intelligence (AI) on copyright and privacy rights become clearer.

Amid the ever-intensifying debate, Google has not only defended its AI training practices but also pledged to shield users of its generative AI products from accusations of copyright violations.

However, Google’s protective umbrella only spans seven specified products with generative AI attributes and conspicuously leaves out Google’s Bard search tool. The move, although a solace to some, opens a Pandora’s box of questions around accountability, the protection of creative rights and the burgeoning field of AI.

Moreover, the initiative is being perceived as more than a mere reactive measure from Google; rather, it is seen as a meticulously crafted strategy to indemnify the blossoming AI landscape.

AI’s legal cloud 

The surge of generative AI over the last couple of years has rekindled the age-old flame of copyright debates with a modern twist. The dispute currently centers on whether the data used to train AI models, and the output those models generate, violate proprietary intellectual property (IP) belonging to private entities.

In this regard, the accusations against Google allege exactly this and, if proven, could not only cost the company a lot of money but also set a precedent that could throttle the growth of generative AI as a whole.

Google’s legal strategy, meticulously designed to instill confidence among its clientele, stands on two primary pillars: the indemnification of its training data and of its generated output. To elaborate, Google has committed to bearing legal responsibility should the data employed to build its AI models face allegations of IP violations.

Not only that, but the tech giant is also looking to protect users against claims that the text, images or other content generated by its AI services infringes upon anyone else’s personal data — a commitment encapsulating a wide array of its services, including Google Docs, Slides and Cloud Vertex AI.

Google has argued that the utilization of publicly available information for training AI systems is not tantamount to stealing, invasion of privacy or copyright infringement.

However, this assertion is under severe scrutiny as a slew of lawsuits accuse Google of misusing personal and copyrighted information to feed its AI models. One of the proposed class-action lawsuits even alleges that Google has built its entire AI prowess on the back of secretly purloined data from millions of internet users.

Therefore, the legal battle seems to be more than just a confrontation between Google and the aggrieved parties; it underlines a much larger ideological conundrum, namely: “Who truly owns the data on the internet? And to what extent can this data be used to train AI models, especially when these models churn out commercially lucrative outputs?”

An artist’s perspective

The dynamic between generative AI and the protection of intellectual property rights seems to be evolving rapidly.

Nonfungible token artist Amitra Sethi told Cointelegraph that Google’s recent announcement is a significant and welcome development, adding:

“Google’s policy, which extends legal protection to users who may face copyright infringement claims due to AI-generated content, reflects a growing awareness of the potential challenges posed by AI in the creative field.”

However, Sethi believes that it is important to have a nuanced understanding of this policy. While it acts as a shield against unintentional infringement, it might not cover all possible scenarios. In her view, the protective efficacy of the policy could hinge on the unique circumstances of each case. 

When an AI-generated piece loosely mirrors an artist’s original work, Sethi believes the policy might offer some recourse. But in instances of “intentional plagiarism through AI,” the legal scenario could get murkier. Therefore, she believes that it is up to the artists themselves to remain proactive in ensuring the full protection of their creative output.

Recent: Game review: Immutable’s Guild of Guardians offers mobile dungeon adventures

Sethi said that she recently copyrighted her unique art genre, “SoundBYTE,” so as to highlight the importance of artists taking active measures to secure their work. “By registering my copyright, I’ve established a clear legal claim to my creative expressions, making it easier to assert my rights if they are ever challenged,” she added.

In the wake of such developments, the global artist community seems to be coming together to raise awareness and advocate for clearer laws and regulations governing AI-generated content.

Tools like Glaze and Nightshade have also appeared to protect artists’ creations. Glaze applies minor changes to artwork that, while practically imperceptible to the human eye, feeds incorrect or bad data to AI art generators. Similarly, Nightshade lets artists add invisible changes to the pixels within their pieces, thereby “poisoning the data” for AI scrapers.

Examples of how “poisoned” artworks can produce an incorrect image from an AI query. Source: MIT

Industry-wide implications 

The existing narrative is not limited to Google and its product suite. Other tech majors like Microsoft and Adobe have also made overtures to protect their clients against similar copyright claims.

Microsoft, for instance, has put forth a robust defense strategy to shield users of its generative AI tool, Copilot. Since its launch, the company has staunchly defended the legality of Copilot’s training data and its generated information, asserting that the system merely serves as a means for developers to write new code in a more efficient fashion.

Adobe has incorporated guidelines within its AI tools to ensure users are not unwittingly embroiled in copyright disputes and is also offering AI services bundled with legal assurances against any external infringements.

Magazine: Ethereum restaking: Blockchain innovation or dangerous house of cards?

The court cases that will inevitably arise over AI will shape not only legal frameworks but also the ethical foundations on which future AI systems operate.

Tomi Fyrqvist, co-founder and chief financial officer for decentralized social app Phaver, told Cointelegraph that in the coming years, it would not be surprising to see more lawsuits of this nature coming to the fore:

“There is always going to be someone suing someone. Most likely, there will be a lot of lawsuits that are opportunistic, but some will be legit.”
