Google Cloud teams up with MultiversX amid blockchain firm’s focus on metaverse

MultiversX announced an array of new features for its xPortal super-app, including tools to build next-gen metaverse experiences, on the same day as the Google Cloud partnership.

Google Cloud has teamed up with blockchain infrastructure firm MultiversX (formerly Elrond) to boost its Web3 presence. Google Cloud has integrated MultiversX into its platform, which will in turn help Web3 projects and users derive valuable insights from powerful data analytics and artificial intelligence tools within the Google Cloud ecosystem.

MultiversX claims the partnership has the potential to immediately streamline the execution of large-scale, data-first blockchain projects. This should give developers easy access to data about addresses, transacted amounts and smart contract interactions, along with richer on-chain analytics, the company said.
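
As a rough sketch of the kind of address-level insight such analytics tooling enables (the records, addresses and amounts below are invented for illustration; the real integration surfaces this data through Google Cloud's warehouse and AI tools rather than raw Python):

```python
from collections import defaultdict

# Hypothetical on-chain transfer records of the sort the integration exposes;
# the addresses and amounts here are made up.
transfers = [
    {"from": "erd1aaa", "to": "erd1bbb", "amount": 12.5},
    {"from": "erd1aaa", "to": "erd1ccc", "amount": 7.0},
    {"from": "erd1bbb", "to": "erd1aaa", "amount": 3.0},
]

def volume_by_sender(transfers):
    """Total amount sent per address: a basic 'transacted amounts' insight."""
    totals = defaultdict(float)
    for tx in transfers:
        totals[tx["from"]] += tx["amount"]
    return dict(totals)

volumes = volume_by_sender(transfers)  # e.g. erd1aaa has sent 19.5 in total
```

In practice a query like this would be expressed in SQL against the indexed chain data, but the aggregation logic is the same.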

On the other hand, Google Cloud's involvement in the MultiversX network will enable ecosystem builders to utilize the advanced tools and services available on the platform to bring high performance and scalability to their decentralized applications' (dApps) non-blockchain components. Daniel Rood, head of Web3 EMEA at Google Cloud, added:

“There are exciting opportunities to enable Web3 developers to build and scale faster and as we explore new verticals within the space, our partnership with MultiversX will allow us to expand our strategy and reach further and solidify our position as one of the main innovation drivers in the blockchain world.” 

MultiversX has forged multiple partnerships with mainstream brands in the past to push Web3 use cases in the traditional world. The first European institutional marketplace for digital assets, ICI D|SERVICES, as well as Audi's platform for in-car virtual reality, holoride, have both chosen MultiversX as their platform of choice.

The blockchain infrastructure firm, which is focused on metaverse scalability, also announced a set of new features for its decentralized digital asset wallet, the xPortal super-app. The updated features will let users handle money easily in both fiat and cryptocurrency. xPortal users will have access to peer-to-peer fiat payments, as well as European IBANs, SEPA and SWIFT transfers, by the beginning of 2024.

The platform also announced the launch of the xWorlds Developer Kit, which offers an array of unique tools that creators can use to build the next generation of augmented reality experiences by leveraging xPortal as a wallet and distribution hub. The kit also includes highly realistic AI-powered 3D avatars.

Binance to list ZKsync with token distribution program amid widespread criticism

Google requests dismissal of AI data scraping class-action suit

Google argued in its motion to dismiss the claims that using publicly available information shared on the internet is not “stealing,” as claimed.

Big Tech player Google is seeking to dismiss a proposed class-action lawsuit that claims it’s violating the privacy and property rights of millions of internet users by scraping data to train its artificial intelligence models. 

Google filed the motion on Oct. 17 in a California District Court, saying it's necessary to use public data to train its AI chatbots such as Bard. It argued the claims are based on the false premise that it is "stealing" information that is publicly shared on the internet.

“Using publicly available information to learn is not stealing. Nor is it an invasion of privacy, conversion, negligence, unfair competition, or copyright infringement.”

Google said such a lawsuit would “take a sledgehammer not just to Google’s services but to the very idea of generative AI.”

The suit was opened against Google in July by eight individuals claiming to represent “millions of class members” such as internet users and copyright holders.

They claim their privacy and property rights were violated under a Google privacy policy change a week before the suit was filed that allows data scraping for AI training purposes.

Related: Google updates service policies to comply with EU regulations

Google argued the complaint concerns “irrelevant conduct by third parties and doomsday predictions about AI.” 

It said the complaint failed to address any core issues, particularly how the plaintiffs have been harmed by Google’s use of their information.

This case is one of many that have been brought against tech giants that are developing and training AI systems. On Sept. 20, Meta refuted claims of copyright infringement during the training of its AI.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change

Google to protect users in AI copyright accusations

Google explicitly stated that only seven products fall under this legal protection, excluding its Bard chatbot.

Google has announced its commitment to protect users of generative artificial intelligence (AI) systems within its Google Cloud and Workspace platforms in cases where they face allegations of intellectual property infringement. This move aligns Google with companies such as Microsoft and Adobe, which have made similar assurances.

In a recent blog post, Google made it clear that customers utilizing products integrated with generative AI capabilities will receive legal protection. This announcement addresses mounting concerns regarding the potential copyright issues associated with generative AI.

Google explicitly outlined seven products that fall under this legal protection. The products are Duet AI in Workspace, encompassing text generation in Google Docs and Gmail, as well as image generation in Google Slides and Google Meet; Duet AI in Google Cloud; Vertex AI Search; Vertex AI Conversation; Vertex AI Text Embedding API; Visual Captioning on Vertex AI; and Codey APIs. It’s worth noting that this list did not include Google’s Bard chatbot.

According to Google:

“If you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.”

Google has unveiled a distinctive approach to intellectual property indemnification, described as a pioneering two-pronged strategy. Under this initiative, Google extends its protection to encompass both the training data and the outcomes generated from its foundational models.

Screenshot of Google’s announcement. Source: Google

This signifies that if legal action is taken against someone due to the use of Google’s training data that involves copyrighted material, Google will assume responsibility for addressing this legal challenge.

The company clarified that the indemnity related to training data is not a novel form of protection. However, Google acknowledged that its customers expressed a desire for clear and explicit confirmation that this protection extends to scenarios where the training data incorporates copyrighted material.

Related: Google Assistant will soon incorporate Bard AI chat service

Google will additionally protect users if they face legal action due to the results they obtain while utilizing its foundation models. This includes scenarios where users generate content resembling published works. The company emphasized that this safeguard is contingent on users not intentionally generating or using content to infringe upon the rights of others.

Other companies have issued similar statements. Microsoft declared its commitment to assume legal responsibility for enterprise users of its Copilot products. Adobe, on the other hand, affirmed its dedication to safeguarding enterprise customers from copyright, privacy and publicity rights claims when using Firefly.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change

EU mulls more restrictive regulations for large AI models: Report

Negotiators in the EU are reportedly considering additional restrictions for large AI models, such as OpenAI's GPT-4, as a component of the forthcoming AI Act.

Representatives in the European Union are reportedly negotiating a plan for additional regulations on the largest artificial intelligence (AI) systems, according to a report from Bloomberg. 

The European Commission, European Parliament and the various EU member states are said to be in discussions regarding the potential effects of large language models (LLMs), including Meta’s Llama 2 and OpenAI’s GPT-4, and possible additional restrictions to be imposed on them as a part of the forthcoming AI Act.

Bloomberg reports that sources close to the matter said the goal is not to overburden new startups with too many regulations while keeping larger models in check.

According to the sources, the agreement reached by negotiators on the topic is still in the preliminary stages.

The AI Act and the newly proposed regulations for LLMs would take an approach to the matter similar to the EU's Digital Services Act (DSA).

The DSA, recently implemented by EU lawmakers, requires platforms and websites to meet standards for protecting user data and scanning for illegal activities. However, the web's largest platforms are subject to stricter controls.

Companies in this category, such as Alphabet Inc. and Meta Platforms Inc., had until Aug. 28 to update their service practices to comply with the new EU standards.

Related: UNESCO and Netherlands design AI supervision project for the EU

The EU’s AI Act is poised to be one of the first sets of mandatory rules for AI put in place by a Western government. China has already enacted its own set of AI regulations, which came into effect in August 2023.

Under the EU’s AI regulations, companies developing and deploying AI systems would need to perform risk assessments and label AI-generated content, and would be banned from using biometric surveillance, among other requirements.

However, the legislation has not been enacted yet and member states still have the ability to disagree with any of the proposals set forth by parliament.

Since China's AI laws took effect, more than 70 new AI models have reportedly been released there.

Magazine: The Truth Behind Cuba’s Bitcoin Revolution: An on-the-ground report

AI a powerful tool for devs to change gaming, says former Google gaming head

Ryan Wyatt explores the possibilities AI opens up for gamers and game developers.

The world has embraced artificial intelligence (AI), hoping to see it transform complex and day-to-day processes alike. As generative AI models won millions of users, discussions around the technology's transformative potential in all walks of life became mainstream.

Today, AI is being tested across all business verticals as entrepreneurs challenge the status quo, streamlining and automating processes in varying industries. This drive also resurrects ecosystems that have lost their vigor over years of trial and error.

In the quest to find the true potential of this technology, humanity continues to infuse AI elements into existing systems in the hopes of outperforming current limitations.

The gaming ecosystem sees AI as a means to supersede incremental upgrades. From repurposing seasoned hardware to squeezing more price-performance out of the latest graphics processing units (GPUs), the industry sees AI's potential to redefine how gamers of the future will consume its products.

“AI will be one of the most important tools for game developers to improve their work output and production, and unlock rich and new experiences for gamers,” said Ryan Wyatt, the former global head of gaming partnerships at Google and former head of gaming at YouTube.

Wyatt’s exposure to gaming — on both professional and personal fronts — allowed him a special viewpoint at the intersection of a gamer’s wishful thinking and an entrepreneur’s reality check.

Wyatt garnered over two decades of gaming experience before entering crypto as the CEO of Polygon Labs, eventually retiring as the president to take up an advisory role for the blockchain company.

Speaking to Cointelegraph, Wyatt revealed how AI could transform the gaming ecosystem and what it could mean for the future of blockchain gaming.

Cointelegraph: What is the role of AI in the gaming ecosystem?

Ryan Wyatt: The term “AI in gaming” has been overused to the point of exhaustion. In my opinion, it is simply another powerful tool in the developer’s toolkit, which is already extensive and continues to grow. This expansion of toolsets — AI being one of them — will enable a variety of new gaming experiences that we have never seen before and allow game developers to do more. We often talk about AI as a replacement for the work being done in gaming, but I strongly disagree. I see it as a powerful tool that will allow game teams, both small and large, to do more than they ever could before, which may require human resources to be leveraged differently but will not minimize or diminish the importance of the many roles required to make a game. And in return, gamers will get to experience games that were never deemed possible before.

CT: Can AI potentially take up the heavy computational tasks that currently rely solely on GPUs? Do you think AI could allow us to repurpose legacy systems that contribute to e-waste, or is it just wishful thinking?

RW: This is a tough one. I do think it is wishful thinking to assume that AI can repurpose all these legacy systems and reduce e-waste. Based on the track record of how hardware has grown and advanced so much over the last two decades, there’s no indication to believe we’re moving in the right direction here, as we’ve continued to increase e-waste over the last 10 years. From a technology standpoint, we’re constantly evolving, and the necessity and demand to expand on hardware, specifically with the GPU, continues to increase significantly. I believe there will be a number of optimizations that AI can introduce to the problem: offloading more resources to the CPU, optimizing for legacy systems, etc., but I think it’s wishful thinking to assume we can reduce e-waste as we continue to push the limits of technology and hardware to create things that were never imaginable before. This seems like a problem that isn’t going to be meaningfully resolved over the next decade, and, in fact, I anticipate it to get worse before it gets better, with AI exacerbating the issue in a 5–10 year time horizon. 

CT: If AI could be used for graphics optimization, unlimited (free world) map rendering or a storyline that never ends, but you could choose only one, which one would you choose as a gamer, and why?

RW: This is a matter of personal preference, but I hope we see both. I believe that storylines and NPCs [non-player characters] could evolve greatly from where they are today. We have seen amazing and beautiful open worlds expand in parallel with computational and hardware improvements. While not unlimited, expanding worlds have played a meaningful role in games over the last decade.

Recent: FTX’s $3.4B crypto liquidation: What it means for crypto markets

To me, one area that needs to evolve is how we engage with NPCs in games. This has been rather archaic for quite some time and has largely relied on linear lines of pre-programmed communication and dialogue. This is already changing with companies like Inworld AI and the work they are doing; their tech helps a game developer craft unique and memorable AI NPCs with its fully integrated character engine.

Their engine goes beyond large language models (LLMs) by adding configurable safety, knowledge, memory and other controls in place. The characters then have distinct personalities and contextual awareness, which is insane to see from a gamer’s perspective.

We haven’t had these kinds of dialogue interactions inside of games before, so it’s hard to wrap your head around how it will change the industry because it’s just something that was once unfathomable. Once these developer tools are seamlessly integrated into proprietary engines of large AAA publishers, you’ll see a new era of immersive game experiences. I also believe you’ll see a huge burden lift on the game development cycle that will allow for expansive worlds by not just large studios with companies like Kaedim; you effectively reduce all of the hours lost in modeling by simply generating stunning 3D art with nothing more than an image. These are the types of tools that are going to advance and multiply game development and usher us into a new era of gaming.

The interesting thing is the collision of both of these topics over the next decade!

CT: What are your thoughts on blockchain gaming? How did you find it different from traditional/mainstream titles?

RW: Blockchain gaming is another tool in the toolbelt for game developers and gamers to change the way we interact with games. By storing assets and information on a blockchain, which is not owned by any intermediary, we can expand upon value exchange between game developers, users and gamers (peer-to-peer). This is done inefficiently today, and although some examples come close, such as CS:GO, it is still far from perfect.

The entire crypto space is going through a much-needed reset, washing away bad actors, and from the dust, you will see true, well-intended pioneers and innovators emerge. The unfortunate abuse of the financial aspects of crypto has made many game developers, especially in the West, apprehensive about incorporating blockchain technology into their gaming infrastructure stack, which I believe is temporary.

However, in the East, we are seeing top gaming developers (e.g., Square Enix and Nexon) fully commit to blockchain gaming due to the new game mechanics and relationships that can be created between gamers and developers. I fully expect the re-emergence of blockchain conversations being driven by the application layer in 2024 to 2025, which will do a better job of illustrating the power of launching games on blockchain infrastructure stacks, even if only certain aspects of games are built on them. The last three years of crypto conversation have been dominated by the infrastructure (blockchain) layer and the finance (decentralized finance, or DeFi) sector, and ironically, the abuse has come from bad actors on centralized platforms (such as FTX) that don’t even embrace the core values of decentralization.

CT: From a gamer’s perspective, what do you think AI can do to help the widespread adoption of blockchain gaming?

RW: I’m not sure if blockchain gaming will become widely adopted anytime soon; we’re still years out from this, and there are great companies that are pushing the envelope here, like Immutable, but I do think that as AI becomes materially indistinguishable from reality, there is value in blockchains holding accountability over the advancement of AI. This is because blockchains are transparent and immutable, meaning that they can be used to track and verify the provenance of AI-generated content. This is important because it will help to ensure that AI is used ethically and responsibly and that it does not create harmful or misleading content.

I am certain that we will see blockchains in the future host authentic and verifiable information in a world where things coming from AI become indistinguishable from reality. This is because blockchains provide a secure and tamper-proof way to store data, which is essential for ensuring the authenticity and reliability of AI-generated content.
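
The provenance idea Wyatt describes can be sketched in miniature. This is not a real blockchain integration; the `record_provenance` and `verify` functions and the ledger structure below are invented for illustration of a tamper-evident hash chain, where each entry commits to the hash of the one before it:

```python
import hashlib
import json

def record_provenance(ledger, content: bytes, creator: str):
    """Append a tamper-evident provenance entry; each entry commits to the previous one."""
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry = {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "prev_hash": prev_hash,
    }
    # Hash the canonical JSON form of the entry so any later change is detectable.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry

def verify(ledger):
    """Recompute every hash; any edit to content, creator or order breaks the chain."""
    prev = "0" * 64
    for e in ledger:
        body = {k: e[k] for k in ("content_hash", "creator", "prev_hash")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev_hash"] != prev or recomputed != e["entry_hash"]:
            return False
        prev = e["entry_hash"]
    return True

ledger = []
record_provenance(ledger, b"ai-generated track, version 1", "model-x")
record_provenance(ledger, b"ai-generated image, version 1", "model-y")
```

A real chain adds consensus and decentralized storage on top, but the tamper-evidence Wyatt points to comes from exactly this kind of hash linking.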

CT: Despite the involvement of the people behind mainstream titles, the blockchain gaming industry has not taken off, unlike other crypto sub-ecosystems. What could have been done differently?

RW: I think this is largely misguided due to timing expectations and the underwhelming first iteration of blockchain games. Game development cycles are so long, and the first batch of blockchain games were either rudimentary, rushed to market, had the wrong incentive mechanisms, were not highly produced or had other issues. There also have been blockchain infrastructure woes that have needed time to overcome, [such as] gas costs, difficult user journeys to navigate and other infrastructure challenges that are just now starting to be resolved by layer-1 and layer-2 protocols.

However, I’ve seen a lot of amazing blockchain games in development that will be released in 2024 to 2025. These games will truly explore the uniqueness that blockchain games have to offer. Games are such a monumental lift to create, and the ones that go deep with either small or large teams will ultimately need more time to show their work. There has been an outsized amount of capital deployed into blockchain games, in the several billions of dollars, and we’ve only seen a single-digit percentage of releases from that cohort of investment.

CT: What went wrong with blockchain gaming? Why don’t gamers buy into the idea of play-to-earn?

RW: Play-to-earn as a philosophy isn’t that crazy. Game developers are always looking to reward gamers for spending more time in their game because longer session times equate to more value, which is captured by the game developer. So, conceptually, this idea of putting time into a game and being rewarded for it isn’t a new game mechanic.

Play-to-earn in blockchain games tries to expand upon this concept of value exchange from developer to player.

Magazine: Blockchain detectives: Mt. Gox collapse saw birth of Chainalysis

However, the economies are really difficult to balance when you don’t have autonomy over every aspect of them, due to their decentralized nature. Ultimately, this has led either to outright abuse of the category or to well-intentioned attempts that failed, and it will take more tinkering to find the right token and economic strategy.

CT: Speaking from a different angle, what benefit could AI and blockchain bring to mainstream gaming? What could compel developers to adopt and infuse the tech into their existing gameplay?

RW: There is certainly a chicken-and-egg issue here. Game developers need to push the limits of what these technologies can do, learn from it, iterate on it and then showcase it to gamers to see if this is what they truly want. But at the end of the day, the large games continue to dominate viewership on YouTube and Twitch.

Steam’s top games, such as DotA and CS, have remained juggernauts, and breakout hits like Minecraft and Roblox are generational unicorns. Both of these games took over a decade to materialize into what we know them to be today. In order to achieve mass adoption, you will need to see these games permeated with the technology. I believe that both of these technologies — AI and blockchain — will have breakout moments from native app developers and indie game devs. However, for true mass adoption, larger players will inevitably need to incorporate the technology.

Disclaimer: Wyatt is an angel investor in many AI, gaming and blockchain companies, including Immutable and Kaedim, both of which are mentioned in his responses.

Google Cloud is now a validator on the Polygon network

According to Polygon, “the same infrastructure used to power YouTube and Gmail” will help secure its network.

Polygon Labs announced on Sept. 29 that Google Cloud has joined the Polygon PoS network as a validator.

Google Cloud joins more than 100 other validators verifying transactions on Polygon’s Ethereum scaling network.

Per a post from Polygon Labs on the X platform announcing the partnership:

“The same infrastructure used to power YouTube and Gmail is now helping to secure the fast, low-cost, Ethereum-for-all Polygon protocol.”

Validators on the Polygon network help secure the network by operating nodes, staking MATIC, and participating in proof-of-stake consensus mechanics.
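
The stake-weighted aspect of that consensus can be sketched as follows. The validator names and stake figures are invented, and Polygon's production implementation is far more involved; this only illustrates the core idea that a validator's chance of proposing a block is proportional to its stake:

```python
import random

# Hypothetical validator set: name -> staked MATIC (figures are illustrative only).
stakes = {"google-cloud": 500_000, "deutsche-telekom": 750_000, "anon-1": 250_000}

def pick_proposer(stakes, seed):
    """Pick a block proposer with probability proportional to stake."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    total = sum(stakes.values())
    ticket = rng.uniform(0, total)
    cumulative = 0
    for validator, stake in stakes.items():
        cumulative += stake
        if ticket <= cumulative:
            return validator
    return validator  # guard against floating-point edge cases

proposer = pick_proposer(stakes, seed=42)
```

Here a validator staking half the total would win roughly half the proposer slots over time, which is what makes acquiring a large stake the expensive path to attacking such a network.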

The Google Cloud Singapore account confirmed on X that Google Cloud was “now serving as a validator on the Polygon PoS network,” adding that it would be “contributing to the network's collective security, governance, and decentralization alongside 100+ other validators.”

Image source: Polygon Staking

While many of the validators are anonymous, as Cointelegraph recently reported, Google Cloud joins Germany's Deutsche Telekom, one of Europe’s largest telecommunications firms, on the Polygon network.

For its part, Google Cloud describes its relationship with Polygon Labs as “an ongoing strategic collaboration.” Alongside the announcement that it would be joining the network as a validator, Google Cloud APAC also released a YouTube video titled “Polygon Labs is solving for a Web3 future for all.”

Polygon Labs recently launched its “Polygon 2.0” initiative to update the Polygon network. As Cointelegraph reported, “Phase 0,” the current phase, features three Polygon Improvement Proposals (PIPs), PIPs 17-19.

PIP 17 involves the transition from MATIC to the new POL token, while PIPs 18 and 19 address supporting endeavors such as the technical description of POL and updates to the gas token. According to Polygon, these changes are slated to begin taking effect in Q4 2023.

Related: Google Cloud adds 11 blockchains to data warehouse ‘BigQuery’

Google and Microsoft-backed AI firm AlphaSense raises $150M at $2.5B valuation

AlphaSense’s client list now includes most of the S&P 500 and nearly every firm listed in the Dow 50.

AlphaSense, a B2B artificial intelligence (AI) platform specializing in business intelligence and search, announced the successful completion of a $150 million Series E funding round led by BOND and joined by Google parent company Alphabet’s investment arm, CapitalG, as well as Goldman Sachs and Viking Global.

The latest round saw the company’s valuation grow from $1.7 billion, its value upon raising $225 million during its Series D in June 2023, to $2.5 billion.

AlphaSense’s strong market position and continued growth owe much to the recent boom in the AI sector. While consumer-facing generative AI models such as OpenAI’s ChatGPT and Anthropic’s Claude are designed to serve general-purpose audiences, AlphaSense’s models combine strategic data points from both public and private analytics with a machine learning pipeline.

This allows AlphaSense’s “insights-as-a-service” platform to offer deep insights into business and finance analytics and provide actionable intelligence.

Related: ChatGPT can now browse the internet, no longer limited to info from 2021

In the crypto and blockchain world, platforms such as AlphaSense have the potential to go beyond the often dubious insights provided by generalized AI models such as ChatGPT. Where the latter has a penchant for hallucination, AlphaSense’s models parse specific datasets relevant to business intelligence and, essentially, curate insights into easily digestible articles complete with text and images.

Per a press release, AlphaSense CEO and founder Jack Kokko said the latest investment round would allow the company to stay at the forefront of the B2B generative AI sector:

“The additional capital allows us to invest strategically, so we can continue to lead the generative AI revolution in our market, and deliver on our mission of helping businesses find the right data and insights to support more confident and agile decision-making. We are building the future of market intelligence, and we are proud to continue revolutionizing search for enterprise customers.”

Meta refutes claims of copyright infringement in AI training

In a lawsuit brought by Sarah Silverman and other authors, Meta claims its AI system does not create copyright-infringing material.

Meta has refuted claims that its artificial intelligence (AI) model Llama was trained using copyrighted material from popular books.

In court on Sept. 18, Meta asked a San Francisco federal judge to dismiss claims made by author Sarah Silverman and a host of other authors who say it violated the copyrights of their books to train its AI system.

The Facebook and Instagram parent company called the use of the materials to train its systems “transformative” and “fair use.”

“Use of texts to train LLaMA to statistically model language and generate original expression is transformative by nature and quintessential fair use...”

It continued by pointing out a conclusion in another related court battle, “much like Google’s wholesale copying of books to create an internet search tool was found to be fair use in Authors Guild v. Google, Inc., 804 F.3d 202 (2d Cir. 2015).” 

Meta said the “core issue” of copyright fair use should be taken up “another day, on a more fulsome record.” The company said the plaintiffs couldn’t explain what “information” they’re referring to, nor could they point to specific outputs related to their material.

The authors’ attorneys said in a separate statement on Sept. 19 that they are “confident” their claims will be upheld and that the case will proceed through “discovery and trial.”

OpenAI also attempted to dismiss parts of the claims back in August on grounds similar to what Meta is currently proposing.

Related: What is fair use? US Supreme Court weighs in on AI’s copyright dilemma

The original lawsuit against Meta and OpenAI was opened in July and was one of many lawsuits popping up against Big Tech giants over copyright and data infringement with the rise of AI.

On Sept. 5, a pair of unnamed engineers opened a class-action lawsuit against OpenAI and Microsoft over the scraping methods allegedly used to obtain private data while training their respective AI models.

In July, Google was sued on similar grounds after it updated its privacy policy. The lawsuit accused the company of misusing large amounts of data, including copyrighted material, in its own AI training.

Magazine: AI Eye: Real uses for AI in crypto, Google’s GPT-4 rival, AI edge for bad employees

AI music sending traditional industry into ‘panic,’ says new AI music platform CEO

Can Ansay, the founder of AI streaming and marketplace platform Musixy.ai, says AI-generated music is revolutionary, bringing efficiency and lower costs to productions.

Artificial intelligence (AI) has been making waves in various industries across the globe. However, the conflict between its usefulness and its ability to infringe on intellectual property (IP) has seen a particular struggle in the creative industries. 

Major players in the music industry, from artists and record labels to institutions like the Grammys and YouTube, have all had to factor in AI in some form.

While traditional corners of the music industry grapple with the technology, new platforms are emerging that embrace it from the start. Musixy.ai launched on Sept. 14 to serve as a streaming platform, label and marketplace for music exclusively generated by AI.

Cointelegraph spoke with Can Ansay, the CEO and founder of Musixy.ai, to better understand how giving AI-generated music its own space could shape the future music industry.

Musixy.ai said that it aims to become the “Spotify for AI hit songs,” particularly those that have been banned from other platforms. Over the last year, Spotify and other major streaming platforms have become more vigilant after Universal Music Group sent out an email asking them to step up their policing of copyrighted AI tracks.

Ansay said “the establishment” or major labels are in panic mode again, “as it was back then with Napster, because they fear revenue losses due to a new disruptive technology.”

“Unlike back then, the AI revolution is not only perfectly legal, but even threatens the existence of record companies; music is not only produced much more efficiently but also cheaper.”

He said AI presents “talented producers” with the ability to produce and monetize a hit song with any famous voice in any language. Musixy.ai particularly emphasizes the creation of new and covered hit songs with AI-generated vocals of well-known artists.

Related: AI-generated music challenges “efficiency” and “cost” of traditional labels, music exec.

Musixy.ai also works with Ghostwriter, who produced a viral song featuring AI-generated vocal tracks of artists Drake and The Weeknd called “Heart on My Sleeve.”

The song was initially said to be eligible for a Grammy, though the Recording Academy's CEO later clarified that it had been taken down from commercial streaming platforms and had not received permission from the artists or their label to use their vocal likenesses, and therefore did not qualify for nomination.

Ansay said if Musixy.ai is recognized as a streaming platform by the Recording Academy:

“For the first time these amazing AI-assisted songs could rightfully win the Grammy recognition they deserve, produced with the help of AI.”

“This is especially true for those songs that unofficially use the vocals of famous singers with the help of AI that were arbitrarily banned from all other recognized streaming platforms,” he continued.

Ansay argues that, from a legal perspective, vocal likeness is not “protectable,” as protecting it would violate professional ethics and make it difficult for singers whose voices naturally resemble those of more famous artists to work.

Instead, he suggests that AI vocal tracks should be marked as "unofficial" to avoid confusion.

Recently, Google and Universal Music Group were reportedly in negotiations over a tool that would allow AI tracks to be created legally using artists’ likenesses.

When asked whether AI-generated music should compete on the same level as non-AI-generated music for awards and recognition, or have its own playing field, he said both directions could be viable.

“For that to happen, one must legitimately, legally, and arguably under the rules of the Grammys, distinguish what tasks AI is used for in music production and to what degree.”

Otherwise, he believes a new category should be created, such as "AI Song of the Year" or something similar. "Because according to the Grammys' mission statement on their website," he argued, "they also want to recognize excellence in 'science.'"

Magazine: Tokenizing music royalties as NFTs could help the next Taylor Swift


Google Cloud teams up with Web3 startup to make DeFi mainstream

Google Cloud told Cointelegraph that the partnership was done in light of increasing interest from clients exploring blockchain workloads on Google Cloud.

Google Cloud has joined forces with a Web3 startup to develop user-centric developer tools for decentralized finance (DeFi) to lower the barrier of entry into the decentralized world.

DeFi infrastructure provider Orderly Network has teamed up with Google Cloud to develop off-chain components of DeFi infrastructure aimed at tackling self-custody and transparency challenges. Orderly's offering will be available on Google Cloud Marketplace.

Google Cloud told Cointelegraph that the partnership was struck in light of increasing interest from clients exploring blockchain workloads on Google Cloud.

Rishi Ramchandani, Head of APAC Web3 go-to-market at Google Cloud Asia Pacific, told Cointelegraph that the surge in demand highlights the necessity for a tailored Web3 product suite. He added:

“Working with Orderly Network to build robust infrastructure will help address the gaps in DeFi adoption and growth, and ensure scalability in the continuously evolving space through the development of secure and user-centric enterprise developer tools.”

With blockchain technology at the center of the fintech revolution, many in the financial industry are exploring decentralized technologies, including JPMorgan, which has been actively testing various blockchain-based solutions, DeFi among them. Traditional banking systems showed interest in blockchain tech quite early, with one report from 2021 suggesting that 55% of the top 100 banks have some exposure to the decentralized tech.

Orderly hopes to distribute the DeFi load into on-chain and off-chain components to strike a balance between speed and sufficient decentralization. The firm claimed this distribution would streamline operations without compromising the inherent advantages of a decentralized system. Under this model, crucial interactions are carried out on-chain, while interactions that can be handled efficiently off-chain are processed away from the main blockchain.

Cointelegraph got in touch with Arjun Arora, COO at Orderly Network, to understand how the collaboration with Google will help make DeFi mainstream. Arora told Cointelegraph that to achieve mainstream adoption, blockchain technology must outperform current solutions, and that Orderly is building a trading Lego for seamless dApp integration across blockchains, with a focus on merging the best of decentralized exchanges (DEXs) and centralized exchanges (CEXs).

“Our collaboration with Google ensures our matching engine competes with centralized systems seen in traditional finance, but the rest of our infrastructure and liquidity network retains all the benefits of self-custody and transparency seen in decentralized finance.”

DeFi's biggest challenges are its barrier to entry and the security issues that have long plagued the ecosystem. With the likes of Google Cloud entering the DeFi infrastructure market with Orderly as a key partner, the collaboration aims to build a secure environment and tools to resolve these issues.
