Copyrights

Meta rejects claims of copyright infringement in AI training

In a lawsuit brought by Sarah Silverman and other authors, Meta claims its AI system does not create copyright-infringing material.

Meta has rejected claims that its artificial intelligence (AI) model LLaMA was trained using copyrighted material from popular books.

In court on Sept. 18, Meta asked a San Francisco federal judge to dismiss claims made by author Sarah Silverman and a host of other authors who say it violated the copyrights of their books in order to train its AI system.

The Facebook and Instagram parent company called the use of the materials to train its systems “transformative” and a matter of “fair use.”

“Use of texts to train LLaMA to statistically model language and generate original expression is transformative by nature and quintessential fair use..."

It continued by citing the conclusion of a related court battle: “much like Google’s wholesale copying of books to create an internet search tool was found to be fair use in Authors Guild v. Google, Inc., 804 F.3d 202 (2d Cir. 2015).”

Meta said the “core issue” of copyright fair use should be taken up “another day, on a more fulsome record.” The company said the plaintiffs couldn’t provide explanations of the “information” they’re referring to, nor could they provide specific outputs related to their material.

The authors’ attorneys said in a separate statement on Sept. 19 that they are “confident” their claims will hold up and that they will continue to press them through “discovery and trial.”

OpenAI also attempted to dismiss parts of the claims back in August on grounds similar to those Meta is currently proposing.

Related: What is fair use? US Supreme Court weighs in on AI’s copyright dilemma

The original lawsuit against Meta and OpenAI was filed in July and was one of many lawsuits popping up against Big Tech giants over copyright and data infringement amid the rise of AI.

On Sept. 5, a pair of unnamed engineers filed a class-action lawsuit against OpenAI and Microsoft over their alleged scraping methods for obtaining private data while training their respective AI models.

In July, Google was sued on similar grounds after it updated its privacy policy. The lawsuit accused the company of misusing large amounts of data, including copyrighted material, in its own AI training.

Magazine: AI Eye: Real uses for AI in crypto, Google’s GPT-4 rival, AI edge for bad employees

Trezor to end privacy-enhancing coinjoin feature as Wasabi Wallet steps back

AI music sending traditional industry into ‘panic’, says new AI music platform CEO

Can Ansay, the founder of AI streaming and marketplace platform Musixy.ai, says AI-generated music is revolutionary, bringing efficiency and lowering production costs.

Artificial intelligence (AI) has been making waves in various industries across the globe. However, the conflict between its usefulness and its ability to infringe on intellectual property (IP) has seen a particular struggle in the creative industries. 

Major players in the music industry, from artists and record labels to institutions like the Grammys and YouTube, have all had to factor in AI in some form.

While traditional corners of the music industry grapple with the technology, new platforms are popping up that embrace it from the start. Musixy.ai launched on Sept. 14 to serve as a streaming platform, label and marketplace for music exclusively generated by AI.

Cointelegraph spoke with Can Ansay, the CEO and founder of Musixy.ai, to better understand how giving AI-generated music its own space could shape the future music industry.

Musixy.ai said that it aims to become the “Spotify for AI hit songs,” particularly those that have been banned from other platforms. Over the last year, Spotify and other major streaming platforms have become more vigilant after Universal Music Group sent out an email asking them to step up their policing of copyrighted AI tracks.

Ansay said “the establishment” or major labels are in panic mode again, “as it was back then with Napster, because they fear revenue losses due to a new disruptive technology.”

“Unlike back then, the AI revolution is not only perfectly legal, but even threatens the existence of record companies; music is not only produced much more efficiently but also cheaper.”

He said AI presents “talented producers” with the ability to produce and monetize a hit song with any famous voice in any language. Musixy.ai particularly emphasizes the creation of new and covered hit songs with AI-generated vocals of well-known artists.

Related: AI-generated music challenges “efficiency” and “cost” of traditional labels, music exec.

Musixy.ai also works with Ghostwriter, who produced a viral song with AI-generated vocal tracks of artists Drake and the Weeknd called “Heart on My Sleeve.”

The song was initially said to be eligible for a Grammy, though the Recording Academy CEO later clarified that it had been taken down from commercial streaming platforms and lacked permission from the artist and label to use the vocal likeness, and therefore does not qualify for nomination.

Ansay said if Musixy.ai is recognized as a streaming platform by the Recording Academy:

“For the first time these amazing AI-assisted songs could rightfully win the Grammy recognition they deserve, produced with the help of AI.”

“This is especially true for those songs that unofficially use the vocals of famous singers with the help of AI that were arbitrarily banned from all other recognized streaming platforms,” he continued.

Ansay argues that, from a legal perspective, vocal likeness is not “protectable,” since protecting it would violate professional ethics and make it difficult for singers whose voices happen to resemble those of more famous artists to work.

Instead, he suggests that AI vocal tracks should be marked as “unofficial” to avoid confusion.

Recently, Google and Universal Music Group were reportedly in negotiations over a tool that would allow AI tracks to be created using artists’ likenesses in a legal way.

When asked whether AI-generated music should compete on the same level as non-AI-generated music in terms of awards and recognition, or have its own playing field, he said both directions could be viable.

“For that to happen, one must legitimately, legally, and arguably under the rules of the Grammys, distinguish what tasks AI is used for in music production and to what degree.”

Otherwise, he believes a new category should be created, such as “AI Song of the Year” or something similar. “Because according to the Grammys’ mission statement on their website,” he argued, “they also want to recognize excellence in ‘science.’”

Magazine: Tokenizing music royalties as NFTs could help the next Taylor Swift

Grammy CEO clarifies AI Drake song ineligible for award over copyright issues

The Recording Academy executive clearly stated that the track is “not eligible,” citing that the vocals were neither legally obtained nor cleared by the label or artist.

The CEO of the Recording Academy, which hosts the yearly Grammy Music Awards, has cleared up misconceptions regarding the eligibility of an artificial intelligence (AI)-generated Drake song for an award nomination.

On Sept. 8, Harvey Mason Jr. took to Instagram and released a video clearly stating that the track is “not eligible for Grammy consideration” and wanted to be extra clear that:

“Even though it was written by a human creator, the vocals were not legally obtained, the vocals were not cleared by the label or the artist, and the song is not commercially available — because of that, it’s not eligible.”

He said the topic of AI is both “complicated” and “moving really quickly” while also commenting that he takes it “very seriously” and anticipates more evolution and changes in the industry.

While music with AI components can be eligible for Grammy nominations, the track must meet specific requirements, most importantly that the part up for nomination was created by a human. For example, for a track to win an award for vocal performance, it must have been performed by a human.

Mason Jr. reiterated this element in his most recent statement by saying:

“Please, do not be confused: the Academy is here to support and advocate and protect and represent human artists and human creators period.”

In a previous interview with Cointelegraph, he also stressed this aspect, saying “The role of the Academy is always to protect the creative and music communities.”

Related: Justin Bieber hit track becomes NFT for royalty sharing

In addition to the human element, the other aspect stressed by Mason Jr. is that in order to be eligible for an award, the track must be commercially available. This includes availability on major streaming platforms, such as Spotify and Apple Music. 

However, the track in question was removed from platforms due to its copyright violations and lack of approval from the artist and label.

Labels have been advocating for platforms to be vigilant in removing content that infringes on the intellectual property of artists. Back in April, Universal Music Group (UMG) asked streaming services, including Spotify, to remove AI-generated content.

Most recently, UMG and Google announced a collaboration to combat AI deep fakes. The two are in negotiations for licensing melodies and vocal tracks for use in AI-generated music.

Magazine: BitCulture: Fine art on Solana, AI music, podcast + book reviews

US Copyright Office issues notice of inquiry on artificial intelligence

The inquiry seeks information and comment on issues related to the content AI produces and how policy makers should treat AI that imitates or mimics human artists.

The United States Copyright Office issued an official request for comments and notice of inquiry on copyright and artificial intelligence (AI) in the Federal Register on Aug. 30. 

According to the filing, the Copyright Office is seeking “factual information and views” on copyright issues raised by recent advances in generative AI models such as OpenAI’s ChatGPT and Google’s Bard.

In a press release sent via email from the Library of Congress and viewed by Cointelegraph, the U.S. Copyright Office stated:

“These issues include the use of copyrighted works to train AI models, the appropriate levels of transparency and disclosure with respect to the use of copyrighted works, the legal status of AI-generated outputs, and the appropriate treatment of AI-generated outputs that mimic personal attributes of human artists.”

Those interested in commenting during the official inquiry period will have until Oct. 18 to do so.

The request comes during a tumultuous time for the AI industry with regard to regulation in the U.S. and around the world. While the EU and other territories have enacted policies to protect citizen privacy and limit how corporations can use, share, and sell data, there’s been little in the way of regulation concerning the use of copyrighted material to train or prompt AI systems.

Related: British MPs call on government to scrap AI exemptions that hurt artists

As Cointelegraph reported previously, the media industry is grappling with how to deal with the emergence of AI systems capable of imitating the work of creators and artists. The New York Times and other news agencies have taken steps to block web crawlers from AI companies seeking to train their models on their data.

Artists such as comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey have sued OpenAI for allegedly training AI models on copyrighted work without the consent of the owners or creators.

Beyond copyright issues, there are also concerns related to AI involving misalignment (the idea that the machines could have objectives that clash with the wellbeing of humanity) and the mass proliferation of misinformation.

The U.S. government has held a series of meetings with stakeholders in the AI community, with the next, a closed-door meeting between Senator Chuck Schumer and Tesla CEO Elon Musk, Alphabet CEO Sundar Pichai, OpenAI CEO Sam Altman, and Microsoft CEO Satya Nadella, slated for Sept. 13.

Universal Music and Google in talks over deal to combat AI deep fakes: Report

Universal Music and Google are reportedly in negotiations over a tool that would allow for the creation of AI tracks using artists’ likenesses in a legal way.

Universal Music Group — one of the world’s leading music companies — and Google are in negotiations to license melodies and vocal tracks of artists to be used in songs generated by artificial intelligence (AI), according to a report from the Financial Times. 

The talks have been confirmed by what the FT reports are “four people familiar with the matter.” The companies are reportedly aiming to create a partnership between the music industry and Big Tech in order to manage the rampant emergence of AI-generated deep fakes.

Mainstream AI usage has sparked concern among major music industry leaders due to the amount of “deep fakes” using musicians’ likenesses. Clips of AI-generated Drake and Kanye West began to go viral around April. Many have since been taken down.

Reportedly, the discussions between the two industry giants are still in the early stages, with no impending product launch or guidelines. However, the FT sources say the goal is to develop a tool for creating tracks legally with copyrights rightly attributed. 

The sources said that artists would have the right to opt in for their voices and music to be used. Another source claimed that Warner Music Group (WMG) has also been in conversation with Google regarding a similar product.

Cointelegraph reached out to WMG for further information but has not received a response.

Related: Music with AI elements can win a Grammy, Recording Academy CEO says

In April, Universal Music Group asked streaming services like Spotify to remove all AI-generated content due to copyright infringement.

A few weeks later, Spotify said it was ramping up policing of the platform and began actively taking down content in violation.

However, some artists are fully on board with their voices being used in AI-generated music. Grimes said she’s eager to be a “guinea pig” for this type of content and will split royalties 50/50 with the creators. 

She also created Elf Tech, alongside a team of developers, which is her own voice simulation program available for public use.

Google and Meta have recently launched their own tools, called MusicLM and AudioCraft, to create music and audio using generative AI.

Many in creative industries are worried about the implications of AI being used to create artistic and creative products. However, in an interview between Cointelegraph and the CEO of the Recording Academy, he said AI can be used as a “creative amplifier.”

Magazine: BitCulture: Fine art on Solana, AI music, podcast + book reviews

Sarah Silverman sues Meta and OpenAI for copyright violations

Author Sarah Silverman and two others opened a lawsuit against OpenAI and Meta for using copyrighted work without permission to train their AI systems.

The American comedian and author Sarah Silverman, along with two other authors, Richard Kadrey and Christopher Golden, has filed lawsuits against Meta Platforms and OpenAI, alleging copyright infringement by their respective LLaMA and ChatGPT systems.

Meta and OpenAI are alleged to have used the plaintiffs’ content for training their respective artificial intelligence (AI) systems without obtaining any prior permission.

According to the court documents against Meta, many of the plaintiffs’ books under copyright appear in the dataset that “Meta has admitted to using to train LLaMA.”

Similarly, in the case against OpenAI, the lawsuit alleges that when ChatGPT generates summaries of the plaintiffs’ work, it is an indication that the model was trained on copyrighted content.

“The summaries get some details wrong. This is expected since a large language model mixes together expressive material derived from many sources. Still, the rest of the summaries are accurate…”

In order to obtain this data, the suits claim, the companies retrieved the copyrighted material from what are known as “shadow libraries,” such as Bibliotik, Library Genesis, Z-Library and others.

Related: Japanese AI experts raise concern over bots trained on copyrighted material

These shadow libraries are websites that use torrent systems to make books “available in bulk,” says the lawsuit. Such sites are illegal, unlike open-source databases such as Gutenberg, which collects books whose copyrights have expired.

“These shadow libraries have long been of interest to the AI-training community because of the large quantity of copyrighted material they host.”

Along with complaints about copyright infringement of their own personal work, the authors filed the complaint on behalf of a class of copyright owners across the United States whose works were also allegedly infringed. 

Cointelegraph reached out to OpenAI and Meta for comment on the case, though neither responded prior to publication.

In May, writers across the U.S. who are part of the Writers Guild of America took to the streets in an authorized strike (the first in 15 years), which highlighted many issues faced in the industry, including the usage of AI.

Magazine: Super Mario: Crypto Thief, Sega blockchain game, AI games rights fight — Web3 Gamer

ChatGPT, Midjourney, other AI tools to make their way into EU legislation

The bill will classify the risk of AI tools and force developers of generative-AI applications to disclose the use of any copyrighted materials.

Controversies around artificial intelligence (AI) and its use of copyrighted material have been popping up around the web left and right after a major uptick in the use of the technology for content creation. 

Legislators in the European Union have responded to the growing usage of AI in a vote on April 27, which pushed forward a draft of a new bill designed to keep the technology and companies developing it in check.

Fine details of the bill will be finalized in the next round of deliberations among legislators and member states. As it currently stands, AI tools will soon be classified according to their risk level, ranging from minimal through limited and high to unacceptable.

According to the bill, the high-risk tools will not be completely banned, though they will be subjected to tougher transparency procedures. In particular, generative AI tools, including ChatGPT and Midjourney, will be obliged to disclose any use of copyrighted materials in AI training.

Svenja Hahn, a deputy of the European Parliament, described the current status of the bill as a middle ground between too much surveillance and over-regulation, one that protects citizens “as well as foster innovation and boost the economy.”

The bill is a part of the EU’s Artificial Intelligence Act and was proposed as draft rules nearly two years ago.

Related: Elon Musk threatens Microsoft with lawsuit, claims AI trained on Twitter data

In the same week, the European think tank Eurofi, composed of enterprises in the public and private sectors, released the latest edition of its magazine that included an entire section on AI and machine learning (ML) applications in finance in the EU. 

The section included five mini-essays on AI innovation and regulation within the EU, particularly for use in the financial industry, all of which touched on the upcoming Artificial Intelligence Act.

One author, Georgina Bulkeley, the director for EMEA financial services solutions at Google Cloud, said in reference to the legislation:

“AI is too important not to regulate. And, it’s too important not to regulate well.”

These developments come shortly after the EU’s data watchdog voiced concern over the potential troubles AI companies in the United States could run into if they are not in line with GDPR.

Magazine: Crypto regulation: Does SEC Chair Gary Gensler have the final say?

Musician Grimes willing to “split 50% royalties” on AI-generated music

The Canadian musician took to Twitter to voice her support of AI-generated music using her voice, saying she is willing to be a “guinea pig” for the new technology.

The swift rise of artificial intelligence (AI)-generated art has shaken creatives across various industries. While many have highlighted copyright infringement issues involving AI-generated art, not all artists are against the fusion of AI and their intellectual property.

According to a tweet from Canadian musician and producer Grimes, she will treat AI creators using her voice the same as other artists she collaborates with. Grimes wrote that she would want to “split 50% royalties on any successful AI generated song” that uses her voice.

Grimes mentioned that she has no label, and therefore, “no bindings” to any major entity in the music industry which could cause IP rights issues. The artist continued to say she finds it “cool to be fused with a machine” and that she is in favor of open-sourcing art, ultimately “killing copyright.”

She continued saying she is “curious” about what creators can do with the technology and is “interested in being a Guinea pig.”

In the initial tweet, Grimes posted an article on the recent outcry surrounding AI-generated tracks of Drake and the Weeknd that have been floating around the internet. On April 13, music industry giant Universal Music Group sent an email asking all major streaming services to block AI from accessing its catalogs for learning purposes.

The company said it won’t hesitate to do what is necessary to protect its rights and those of the artists it represents.

Related: Over half of Americans fear ‘major impact’ by AI on workers: Survey

In a separate statement from Grimes, she revealed that she is creating a voice simulation program along with a team of developers, which will be made publicly available.

However, AI-generated deep fakes utilizing the images and voices of individuals are already causing major headaches and ethical concerns.

Recently, a German tabloid used AI to generate a fake interview with former Formula One driver Michael Schumacher. Concerns are even circulating within the companies producing the technology, after reports revealed Google employees’ worries over its forthcoming AI chatbot.

Magazine: Crypto regulation: Does SEC Chair Gary Gensler have the final say?

Midjourney, other AI devs strike back in court, claiming their material is not similar to artists’ work

The response by the AI firms raises questions about how copyright law principles such as authorship, infringement and fair use will apply to content created or used by AI.

Midjourney, Stability AI and DeviantArt issued a response on April 18 to a group of artists who accused them of extensive copyright infringement. The artists claimed that these companies had used their work in generative artificial intelligence (AI) systems without proper authorization.

The companies filed their motions in a San Francisco federal court seeking the dismissal of the proposed class action lawsuit brought by the artists. They contended that the AI-generated images were dissimilar to the artists' work and that the lawsuit lacked specific information about the allegedly misused photos.

In January, Sarah Andersen, Kelly McKernan and Karla Ortiz filed a lawsuit against the companies, claiming that their rights had been violated. The artists alleged that their works were used without permission to train the systems and that the resulting AI-generated images, created in their styles, were also infringing.

In their filing on April 18, Stability AI, a deep learning, text-to-image model AI company, argued that the artists “fail to identify a single allegedly infringing output image, let alone one that is substantially similar to any of their copyrighted works.” Midjourney, an AI company that generates images from natural language descriptions, said that the lawsuit also does not “identify a single work by any plaintiff” that it “supposedly used as training data.”

DeviantArt, an online community for artists that offers a service enabling users to generate images using Stability AI's Stable Diffusion system, supported the same arguments as Stability AI. Additionally, it claimed that it was not responsible for any alleged wrongdoing by the AI companies.

There is a possibility of AI programs infringing copyright by generating outputs that resemble existing works. In accordance with US case law, copyright holders can establish that the outputs produced by an AI program infringe upon their copyright if the program had access to their works and the resulting outputs are deemed “substantially similar.”

Related: Elon Musk threatens Microsoft with lawsuit, claims AI trained on Twitter data

Recent innovations in AI are raising new questions about how copyright law principles such as authorship, infringement, and fair use will apply to content created or used by AI. Generative AI computer programs such as Stability AI’s Stable Diffusion program and Midjourney’s self-titled program are able to generate new images, texts and other content or outputs in response to a user’s textual prompts or inputs.

These generative AI programs are trained to generate such works partly by exposing them to large quantities of existing works such as writings, photos, paintings and other artworks.

Magazine: All rise for the robot judge: AI and blockchain could transform the courtroom

Trademarks filed for NFTs, metaverse and cryptocurrencies soar to new levels in 2022

Trademark applications filed for NFTs alone grew from a total of 2,142 filed for 2021 to 6,855 by the end of October 2022.

The number of companies filing trademarks for nonfungible tokens (NFTs), metaverse-related virtual goods and services, and cryptocurrencies has grown rapidly in 2022. 

According to data compiled by licensed trademark attorney Mike Kondoudis, the number of trademark applications filed for digital currencies, as well as their related goods and services, has reached 4,708 as of the end of October 2022 — surpassing the total number filed in 2021 (3,547).

The number of trademark applications filed for the metaverse and its related virtual goods and services also soared to 4,997 from the 1,890 filed in 2021. This seems to suggest a massive appetite for the metaverse and its related products, despite the setbacks the ecosystem has faced in becoming fully functional.

The desire for NFTs as a technology still appears to be on the rise, despite a recorded decline in NFT trading volume and sales. According to Kondoudis’ statistics, the total number of trademark applications for NFTs and their related products increased from 2,142 in 2021 to 6,855 as of October 2022.

Related: What remains in the NFT market now that the dust has settled?

Within the past month, a number of companies have filed fresh trademark applications to join the Web3 ecosystem. On Oct. 21, makeup and cosmetic giant Ulta filed a trademark application for plans to include NFTs and virtual makeup and salon services among its offerings.

Luxury watchmaker Rolex also filed a trademark application with plans to bring NFTs, NFT-backed media, NFT marketplaces and a cryptocurrency exchange to its empire.
