Copyrights

DAO acquires rights to the image behind the Doge meme

Individuals behind the project said that by acquiring the license to the iconic Shiba Inu image, they hoped to “unify the whole Doge community.”

Own The Doge, the decentralized autonomous organization (DAO) associated with the Shiba Inu meme behind the Dogecoin (DOGE) token, announced that it had secured legal rights to the iconic image.

Speaking to Cointelegraph on April 17, Own The Doge project leaders John Monarch and ‘Tridog’ said they had negotiated a deal with Sato Atsuko, the owner of the 18-year-old Shiba Inu Kabosu that became popular for her confused expression while sitting on a couch. The group purchased exclusive licensing and rights for the Doge image, seemingly giving them control over merchandise and other uses for the meme in the crypto space.

“I think [the deal] unlocks a lot for corporations, where there’s confusion around copyright,” said Tridog.

Artists aim to thwart AI with data-poisoning software and legal action

With AI-generated content continuing to evolve, the advent of data-poisoning tools capable of shielding an artist’s works from AI could be a game changer.

As the use of artificial intelligence (AI) has permeated the creative media space — especially art and design — the definition of intellectual property (IP) seems to be evolving in real time as it becomes increasingly difficult to understand what constitutes plagiarism.

Over the past year, AI-driven art platforms have pushed the limits of IP rights by utilizing extensive data sets for training, often without the explicit permission of the artists who crafted the original works.

For instance, platforms like OpenAI’s DALL-E and Midjourney’s service offer subscription models, indirectly monetizing the copyrighted material that constitutes their training data sets.

Evidence mounts as new artists join Stability AI, Midjourney copyright lawsuit

The battle continues as artists amend a lawsuit, previously dismissed by the court, against major AI companies that have allegedly violated copyright law.

A copyright lawsuit filed against multiple companies developing artificial intelligence (AI) tools has been amended as artists and their legal teams alleged the misuse of their creative works. 

On Nov.

The new artists include H.

According to the amended class-action case, Stability AI, Midjourney and DeviantArt, along with a new defendant, Runway AI, have produced systems that create art in the style of the plaintiff artists when those artists’ names are used as prompts fed to the AI.

The plaintiffs claim that, as a result, users have generated art that is “indistinguishable” from their own.

"AI image products are primarily valued as copyright-laundering devices, promising customers the benefits of art without the costs of artists."

Related: Artists face a choice with AI: Adapt or become obsolete

In addition, the artists allege that Midjourney, one of the most popular generative AI tools for creating art with roughly 16.4 million users according to its website, has violated rights that fall under federal trademark law in the United States.

The claims point to Midjourney’s website promoting a list of over 4,700 artists’ names, including some of the plaintiffs’, for use as generative prompts.

Grimes collaboration with music platform makes 200+ AI songs available for creators

Grimes’ manager Daouda Leonard and music platform Slip.stream explain the importance of artists owning their data and controlling their rights to stay ahead in their approach to AI.

The rapid emergence of artificial intelligence (AI) in the public sphere has proven to be one of the biggest global developments of the year.

Major industries have been turned upside down with AI now on the scene. In the creative sectors, the music industry in particular, AI is often seen as a double-edged sword: a creativity kickstarter and a thief in the night of copyrights.

The popular musician and producer Grimes, however, has had a different approach to being an artist in the time of emerging AI. She was one of the first artists to be vocal about the technology after its explosion in popularity in late 2022 with the release of ChatGPT.

In April, Grimes famously said she would split 50% of the royalties with the creators generating AI music using her vocals. It was after this that Grimes announced her new platform elf.tech, an open-source software program solely dedicated to legally replicating her voice for music creation.

In September, Grimes was included as one of Time Magazine’s Top 100 People in AI. Recently, Grimes and her team partnered with music creation platform Slip.stream to make 200+ GrimesAI songs available for use by creators.

Cointelegraph spoke with Daouda Leonard, Grimes’ manager and CEO of CreateSafe, a research studio focused on musicians’ IP rights, along with the team behind Slip.stream, to understand how creatives can get ahead in their approach to AI.

Daouda pinpointed the current moment in the technological revolution as the “DARQ ages” (Distributed, Artificial, Reality and Quantum). “The only way through it is to use it,” he said. “I think all industry executives, artists, and companies need to be experimenting with new emergent technologies.”

“By rights holders allowing new technology platforms to train with their data, they can be proactive about striking lucrative deals for their artists and catalogs.”

With AI, creators can now utilize artists’ voices, for example, in their own creations, and AI companies are taking creative data to train their systems. Therefore, a strong content management system and royalties mechanism need to be priorities.

Grimes’ management said it’s also using another emerging technology, smart contracts, to make this happen and to manage metadata about “who did what, when and what they’re owed.”
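
As an illustration of the kind of record such a system would track (a minimal sketch, not the contracts or tooling CreateSafe actually uses, with hypothetical names and shares), the “who did what, when and what they’re owed” metadata and a royalty split might look roughly like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Contribution:
    contributor: str  # who
    role: str         # what they did
    share_bps: int    # what they're owed, in basis points (10_000 = 100%)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()  # when
    )


@dataclass
class TrackMetadata:
    title: str
    contributions: list[Contribution]

    def payouts(self, gross_royalties: float) -> dict[str, float]:
        """Split gross royalties according to each contributor's share."""
        total = sum(c.share_bps for c in self.contributions)
        assert total == 10_000, "shares must sum to 100%"
        return {
            c.contributor: gross_royalties * c.share_bps / 10_000
            for c in self.contributions
        }


# Hypothetical example mirroring the 50/50 split Grimes publicly offered:
track = TrackMetadata(
    title="GrimesAI demo track",
    contributions=[
        Contribution("Grimes (voice model rights)", "vocal likeness", 5_000),
        Contribution("independent creator", "composition and production", 5_000),
    ],
)
print(track.payouts(1_000.0))  # each party is owed 500.0
```

On-chain, a record like this would typically live in a smart contract so the split is enforced automatically when royalties arrive; the sketch above only models the metadata side of that design.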

Slip.stream, the platform that houses the available GrimesAI tracks, echoed the sentiment, saying:

“It’s up to forward-thinking artists, executives, and companies to dream up and experiment with its applications to better protect their clients and capture any upside… It takes guts to zig when others zag.”

If artists and their management fail to stay on top of such developments, it can quickly become too late.

Lawsuits against AI companies have been springing up throughout the year, be it the Author’s Guild launching a class-action lawsuit against OpenAI, the creator of ChatGPT, or Universal Music Group (UMG) suing Anthropic, both over creative copyright infringement.

Related: Universal Music and Google in talks over deal to combat AI deep fakes: Report

Leading by example, Grimes is showing the industry what is possible when artists both own their data and control the rights to it.

“Owning your masters and publishing is only good if you know what to do with it,” said Daouda. “I don’t know if there is a perfect artist to do such a move. Grimes felt that it was important to experiment and see what’s possible.”

“I think every artist who is open to taking risks and curious about how technology can be a benefit to their career is the perfect [artist] for doing this, so I’m sure there are a lot of them.”

Many industry insiders who have wrapped their heads around the possibilities AI presents to artists, and who are proactively looking for ways to reap the benefits without losing sovereignty, have touted the technology as a “creative amplifier” of sorts.

Slip.stream said that when artists are proactive with their rights and content, it shows that “AI is not about replacing humans with robots, but establishing new norms and structures for artistic collaboration that were unavailable to the masses before CreateSafe and Grimes.”

“To give anyone in the world the ability to collaborate with their favorite artist, opens up groundbreaking possibilities for creative output and fan engagement.”

Grimes herself posted a similar sentiment on X, formerly known as Twitter, a few days after the announcement of her collaboration with Slip.stream.

Daouda ended by saying that he believes what is happening with AI is even bigger than samples and collaborations.

“Generative AI or computational creativity makes it possible for people to go from idea to distribution in minutes, maybe even seconds,” he said.

“Whether that’s a good or bad thing is subjective, but what is objective is that now a lot of people can do it and it opens up modes of expression that ultimately could lead to a certain type of healing that many people can participate in. Music is healing and when we can participate in it that’s powerful.”

Magazine: BitCulture: Fine art on Solana, AI music, podcast + book reviews

Law professor says blockchain tech could ‘revolutionize’ copyright offices

According to the research, blockchain provides several game-changing benefits for intellectual property licensing and management.

A professor from the Texas A&M University School of Law recently published research exploring blockchain technology use cases in the world of copyright administration. According to their findings, blockchain has the potential to radically alter the way intellectual property is handled both “domestically and internationally.” 

Dr. Peter Yu, the Regents Professor of Law and Communication and Director of the Texas A&M University School of Law’s Center for Law and Intellectual Property, and the paper’s sole author, asserts that blockchain’s immutability makes it a prime candidate for integration with the intellectual property system.

Per the paper:

"On a blockchain, once a transaction has been recorded, it is virtually impossible to alter that record. Should the transaction be wrongly recorded, a new transaction will have to be hashed into the blockchain to provide correction. The immutability feature has therefore made blockchain technology very attractive for registering copyright, storing ownership and licensing records, or completing other similar tasks."

Dr. Yu goes on to explain that, in the copyright system specifically, a blockchain ledger can provide a method for determining the status of a particular record, such as whether a copyright has fallen into the public domain or the work has become orphaned.
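
As a minimal sketch of the append-only pattern the paper describes (hypothetical code, not any copyright office’s actual system), each registration entry can carry the hash of the previous one, and a wrongly recorded entry is corrected by appending a new record rather than editing the old one:

```python
import hashlib
import json
from datetime import datetime, timezone


def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a record's contents."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


class CopyrightLedger:
    """Append-only registry: corrections are new entries, never edits."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def register(self, work_id: str, owner: str, note: str = "registration") -> dict:
        entry = {
            "work_id": work_id,
            "owner": owner,
            "note": note,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self.entries[-1]["hash"] if self.entries else None,
        }
        entry["hash"] = record_hash(entry)  # hash covers all fields above
        self.entries.append(entry)
        return entry

    def history(self, work_id: str) -> list[dict]:
        """Traceability: the full lifecycle of one work's registration records."""
        return [e for e in self.entries if e["work_id"] == work_id]


ledger = CopyrightLedger()
ledger.register("work-001", "Alice")
# A wrongly recorded owner is fixed with a new, correcting entry:
ledger.register("work-001", "Alice Smith", note="correction of owner name")
print(len(ledger.history("work-001")))  # 2 -- the original record is preserved
```

Because every entry references the hash of its predecessor, tampering with an old record would break the chain, which is the immutability property the paper highlights; the history() helper likewise hints at the traceability benefit discussed below.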

Other benefits, according to the research, include traceability, transparency, and disintermediation.

Related: Bitcoin white paper turns 15 as Satoshi Nakamoto’s legacy lives on

Traceability is defined in the paper as the ability to trace the entire lifecycle of a registration on the copyright ledger from its inception. Making that information available to the public via a blockchain explorer or similar method would provide an additional layer of transparency not available through traditional server-based records systems.

The final benefit discussed in Dr. Yu’s paper, disintermediation, involves blockchain’s ability to operate independently of a governing body.

Per the paper, “without dependence on a trusted intermediary – such as a government, a bank, or a clearinghouse – the technology supports global cooperation even in the absence of the participation or support of governments or intergovernmental bodies."

Dr. Yu speculates that these benefits could lead to an artist- and business-led copyright system in which intellectual property is potentially registered and mediated independently of the state.

How Google’s AI legal protections can change art and copyright protections

Amid myriad legal accusations surrounding its AI services, Google stands its ground, vowing to protect its users.

Google has been facing a wave of litigation recently as the implications of generative artificial intelligence (AI) on copyright and privacy rights become clearer.

Amid the ever-intensifying debate, Google has not only defended its AI training practices but also pledged to shield users of its generative AI products from accusations of copyright violations.

However, Google’s protective umbrella only spans seven specified products with generative AI attributes and conspicuously leaves out Google’s Bard search tool. The move, although a solace to some, opens a Pandora’s box of questions around accountability, the protection of creative rights and the burgeoning field of AI.

Moreover, the initiative is being perceived not merely as a reactive measure from Google but as a meticulously crafted strategy to indemnify the blossoming AI landscape.

AI’s legal cloud 

The surge of generative AI over the last couple of years has rekindled the age-old flame of copyright debates with a modern twist. The contention currently centers on whether the data used to train AI models and the output generated by them violate proprietary intellectual property (IP) belonging to private entities.

The accusations against Google amount to exactly this and, if proven, could not only cost the company a great deal of money but also set a precedent that could throttle the growth of generative AI as a whole.

Google’s legal strategy, meticulously designed to instill confidence among its clientele, stands on two primary pillars: the indemnification of its training data and of its generated output. To elaborate, Google has committed to bearing legal responsibility should the data employed to devise its AI models face allegations of IP violations.

The tech giant is also looking to protect users against claims that the text, images or other content generated by its AI services infringes on someone else’s rights, a pledge that covers a wide array of its services, including Google Docs, Slides and Cloud Vertex AI.

Google has argued that the utilization of publicly available information for training AI systems is not tantamount to stealing, invasion of privacy or copyright infringement.

However, this assertion is under severe scrutiny as a slew of lawsuits accuse Google of misusing personal and copyrighted information to feed its AI models. One of the proposed class-action lawsuits even alleges that Google has built its entire AI prowess on the back of secretly purloined data from millions of internet users.

Therefore, the legal battle seems to be more than just a confrontation between Google and the aggrieved parties; it underlines a much larger ideological conundrum, namely: “Who truly owns the data on the internet? And to what extent can this data be used to train AI models, especially when these models churn out commercially lucrative outputs?”

An artist’s perspective

The dynamic between generative AI and the protection of intellectual property rights seems to be evolving rapidly.

Nonfungible token artist Amitra Sethi told Cointelegraph that Google’s recent announcement is a significant and welcome development, adding:

“Google’s policy, which extends legal protection to users who may face copyright infringement claims due to AI-generated content, reflects a growing awareness of the potential challenges posed by AI in the creative field.”

However, Sethi believes that it is important to have a nuanced understanding of this policy. While it acts as a shield against unintentional infringement, it might not cover all possible scenarios. In her view, the protective efficacy of the policy could hinge on the unique circumstances of each case. 

When an AI-generated piece loosely mirrors an artist’s original work, Sethi believes the policy might offer some recourse. But in instances of “intentional plagiarism through AI,” the legal scenario could get murkier. Therefore, she believes that it is up to the artists themselves to remain proactive in ensuring the full protection of their creative output.

Recent: Game review: Immutable’s Guild of Guardians offers mobile dungeon adventures

Sethi said that she recently copyrighted her unique art genre, “SoundBYTE,” so as to highlight the importance of artists taking active measures to secure their work. “By registering my copyright, I’ve established a clear legal claim to my creative expressions, making it easier to assert my rights if they are ever challenged,” she added.

In the wake of such developments, the global artist community seems to be coming together to raise awareness and advocate for clearer laws and regulations governing AI-generated content.

Tools such as Glaze and Nightshade have also emerged to protect artists’ creations. Glaze applies minor changes to artwork that, while practically imperceptible to the human eye, feed incorrect or bad data to AI art generators. Similarly, Nightshade lets artists add invisible changes to the pixels within their pieces, thereby “poisoning the data” for AI scrapers.

Examples of how “poisoned” artworks can produce an incorrect image from an AI query. Source: MIT
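
For a rough sense of the mechanism (an illustrative perturbation only, not the actual Glaze or Nightshade algorithms, which compute carefully targeted adversarial changes), a tiny pixel-level modification of the sort described could be sketched with NumPy and Pillow like this:

```python
import numpy as np
from PIL import Image


def perturb(image_path: str, out_path: str, epsilon: int = 2, seed: int = 0) -> None:
    """Add a tiny offset (at most +/- epsilon per channel) to every pixel.

    Real data-poisoning tools compute targeted perturbations so that scrapers
    learn wrong associations; the random noise here only illustrates how small
    the changes can be while the image still looks unchanged to a human viewer.
    """
    rng = np.random.default_rng(seed)
    pixels = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.int16)
    noise = rng.integers(-epsilon, epsilon + 1, size=pixels.shape, dtype=np.int16)
    poisoned = np.clip(pixels + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(out_path)


# perturb("artwork.png", "artwork_protected.png")  # hypothetical file names
```

Because the offsets are at most a couple of intensity levels per channel, the protected copy looks identical to the original to a human viewer, which is the property the Glaze and Nightshade teams exploit at a far more sophisticated level.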

Industry-wide implications 

The existing narrative is not limited to Google and its product suite. Other tech majors like Microsoft and Adobe have also made overtures to protect their clients against similar copyright claims.

Microsoft, for instance, has put forth a robust defense strategy to shield users of its generative AI tool, Copilot. Since its launch, the company has staunchly defended the legality of Copilot’s training data and its generated information, asserting that the system merely serves as a means for developers to write new code in a more efficient fashion.

Adobe has incorporated guidelines within its AI tools to ensure users are not unwittingly embroiled in copyright disputes and is also offering AI services bundled with legal assurances against any external infringements.

Magazine: Ethereum restaking: Blockchain innovation or dangerous house of cards?

The court cases that will inevitably arise around AI will undoubtedly shape not only legal frameworks but also the ethical foundations upon which future AI systems operate.

Tomi Fyrqvist, co-founder and chief financial officer for decentralized social app Phaver, told Cointelegraph that in the coming years, it would not be surprising to see more lawsuits of this nature coming to the fore:

“There is always going to be someone suing someone. Most likely, there will be a lot of lawsuits that are opportunistic, but some will be legit.”

Universal Music Group sues Anthropic AI over copyright infringement

Universal Music Group, Concord Publishing and ABKCO Music & Records alleged that Anthropic “unlawfully” copied and disseminated “vast amounts of copyrighted works” from the publishers.

Universal Music Group (UMG), Concord Publishing and ABKCO Music & Records have filed a lawsuit against artificial intelligence (AI) startup Anthropic, accusing it of committing copyright infringement in training its AI chatbot, Claude.

The lawsuit was filed on Oct. 18 and claims that Anthropic “unlawfully” copied and disseminated “vast amounts of copyrighted works - including the lyrics to myriad musical compositions” that are under the ownership or control of the publishers.

It called Anthropic’s use of the works “widespread and systematic infringement” and said the defendant cannot reproduce, distribute and display copyrighted works to build a business without the proper rights.

“This foundational rule of copyright law dates all the way back to the Statute of Anne in 1710, and it has been applied time and time again to numerous infringing technological developments in the centuries since. That principle does not fall away simply because a company adorns its infringement with the words ‘AI.’”

The lawsuit claims that Claude can generate identical or nearly identical copies of songs such as “What a Wonderful World,” “Gimme Shelter,” “American Pie,” “Sweet Home Alabama,” “Every Breath You Take” and at least 500 more.

Related: British MPs urge action on NFT copyright infringement, crypto fan tokens

In this case, the publishers provided examples of Claude being able to deliver an almost word-for-word replication of UMG’s song “I Will Survive” by Gloria Gaynor.

The plaintiffs have asked the court to order an end to the alleged infringement and to award monetary damages.

This case joins the many popping up against major AI developers on the grounds of copyright infringement. 

OpenAI, the developer of AI chatbot ChatGPT, has been sued for similar reasons by the Author’s Guild. Meta is currently facing a lawsuit by author Sarah Silverman and others for copyright issues. Google is involved in a lawsuit regarding its data scraping policy for AI training purposes.

As far as the music industry’s involvement is concerned, UMG has been vigilant about protecting its catalogue and the rights of its artists from AI-related copyright violations. On Oct. 18 it entered into a strategic partnership with BandLab Technologies focusing on ethical AI usage to protect artist and songwriter rights.

Over the summer, UMG and Google were reportedly in talks to create a tool that would allow for the creation of AI tracks using artists’ likenesses in a legal way.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change

Universal Music Group enters partnership to protect artists’ rights against AI violations

The new partnership between Universal Music Group and BandLab Technologies focuses on ethical AI usage to protect artist and songwriter rights.

Universal Music Group (UMG) announced a new partnership with social music creation platform BandLab Technologies on Oct. 18 to promote responsible practices with artificial intelligence (AI) in the industry. 

The partnership’s stated focus is on the “ethical use of AI,” with one of its main goals being to protect the rights of artists and songwriters.

Michael Nash, executive vice president and chief digital officer of UMG, added:

“This is more important than ever right now as AI assumes an increasingly prominent place in the evolution of music creation tools.”

Nash commented that along with protecting artists' rights, the two plan to create responsible approaches to using AI in creative processes to “champion human creativity and culture.”

A similar sentiment was expressed by the CEO of the Recording Academy, the institution behind the Grammy Awards, in an interview with Cointelegraph when he said AI could be an “amplifier” of human creativity.

Related: AI music sending traditional industry into ‘panic,’ says new AI music platform CEO

This is not the first time UMG has taken on AI-related issues. In August, UMG and Google were reportedly in talks over ways to combat AI deep fakes through the development of a new tool that would allow for the creation of AI tracks using artists’ likenesses in a legal way.

Shortly before UMG and Google began talking about tackling AI copyright issues, YouTube released its own set of principles for working with the music industry on AI tech.

YouTube said it had been in talks with major music industry players such as UMG over how to develop the principles. One of these was the introduction of its new “Music AI Incubator.”

The struggle over AI-related copyright infringement involving artists, musicians and creators has even reached the courts. In August 2023, a United States judge denied copyright protection for AI-generated art.

Magazine: BitCulture: Fine art on Solana, AI music, podcast + book reviews

Google to protect users in AI copyright accusations

Google explicitly stated that only seven products fall under this legal protection, excluding Google’s Bard search tool.

Google has announced its commitment to protect users of generative artificial intelligence (AI) systems within its Google Cloud and Workspace platforms in cases where they face allegations of intellectual property infringement. This move aligns Google with other companies, such as Microsoft, Adobe and more, which have also made similar assurances.

In a recent blog post, Google made it clear that customers utilizing products integrated with generative AI capabilities will receive legal protection. This announcement addresses mounting concerns regarding the potential copyright issues associated with generative AI.

Google explicitly outlined seven products that fall under this legal protection. The products are Duet AI in Workspace, encompassing text generation in Google Docs and Gmail, as well as image generation in Google Slides and Google Meet; Duet AI in Google Cloud; Vertex AI Search; Vertex AI Conversation; Vertex AI Text Embedding API; Visual Captioning on Vertex AI; and Codey APIs. It’s worth noting that this list did not include Google’s Bard search tool.

According to Google:

“If you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.”

Google has unveiled a distinctive approach to intellectual property indemnification, described as a pioneering two-pronged strategy. Under this initiative, Google extends its protection to encompass both the training data and the outcomes generated from its foundational models.

Screenshot of Google’s announcement. Source: Google

This means that if legal action is taken against a user because Google’s training data included copyrighted material, Google will assume responsibility for addressing that legal challenge.

The company clarified that the indemnity related to training data is not a novel form of protection. However, Google acknowledged that its customers expressed a desire for clear and explicit confirmation that this protection extends to scenarios where the training data incorporates copyrighted material.

Related: Google Assistant will soon incorporate Bard AI chat service

Google will additionally protect users if they face legal action due to the results they obtain while utilizing its foundation models. This includes scenarios where users generate content resembling published works. The company emphasized that this safeguard is contingent on users not intentionally generating or using content to infringe upon the rights of others.

Other companies have issued similar statements. Microsoft declared its commitment to assume legal responsibility for enterprise users of its Copilot products. Adobe, on the other hand, affirmed its dedication to safeguarding enterprise customers from copyright, privacy and publicity rights claims when using Firefly.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change

The Author’s Guild launches class-action lawsuit against OpenAI

The Author’s Guild opened a lawsuit against OpenAI, alleging misuse of copyrighted material in training of its AI models.

The Author’s Guild in the United States opened a class-action lawsuit against the Microsoft-backed OpenAI on Sept. 19 due to its alleged misuse of copyrighted material in the training of its artificial intelligence (AI) models.

According to court documents, the oldest and largest professional organization for writers in the U.S. is bringing its claims under the Copyright Act, seeking “redress” for what it calls “flagrant and harmful infringement” of registered copyrights in written works of fiction.

It goes on to argue that works were copied wholesale and without permission or “consideration” by feeding them into large language models (LLMs).

“These algorithms are at the heart of Defendants’ massive commercial enterprise. And at the heart of these algorithms is systematic theft on a mass scale.”

The Author’s Guild said it represents a class of professional fiction writers whose “works spring from their own minds and their creative literary expression.” It says, therefore, that since their livelihoods derive from these creative works, the LLMs “endanger” the ability of fiction writers to make a living.

It suggested that the AI models could have been trained on public domain works, or OpenAI could have paid a licensing fee to use the copyrighted works.

“What Defendants could not do was evade the Copyright Act altogether to power their lucrative commercial endeavor, taking whatever datasets of relatively recent books they could get their hands on without authorization.”

On Sept. 11, the Guild posted an article on X about how authors can protect their work from AI web crawlers. 

The Author’s Guild has a link to its advocacy work regarding AI technologies pinned to the top of its profile.

Related: Elon Musk, Mark Zuckerberg and Sam Altman talk AI regs in Washington

This filing from the Author’s Guild follows updates in a similar lawsuit against Meta and OpenAI and their respective AI models using copyrighted material in training.

Author Sarah Silverman and others filed that lawsuit in July; both companies have since asked judges to dismiss the claims.

In August, the U.S. Copyright Office issued a notice of inquiry on AI, seeking public comment on topics related to AI content production and how it should be handled by policymakers when AI content mimics that which is made by human creators.

Prior to the inquiry, U.S. District Judge Beryl Howell ruled that artwork created solely by AI is not eligible for copyright protection.

Magazine: NFT collapse and monster egos feature in new Murakami exhibition
