
What is fair use? US Supreme Court weighs in on AI’s copyright dilemma

Many firms with generative AI models are being sued for copyright infringement, and the Supreme Court may have just ruined their primary legal defense.

Generative artificial intelligence models such as OpenAI’s ChatGPT are trained by being fed giant amounts of data, but what happens when this data is copyrighted?

Well, the plaintiffs in a variety of lawsuits currently making their way through the courts claim that the process infringes upon their copyright protections.

For example, on Feb. 3, stock photo provider Getty Images sued artificial intelligence firm Stability AI, alleging that it copied over 12 million photos from Getty's collections as part of an effort to build a competing business. Getty notes in the filing:

“On the back of intellectual property owned by Getty Images and other copyright holders, Stability AI has created an image-generating model called Stable Diffusion that uses artificial intelligence to deliver computer-synthesized images in response to text prompts.”

While the European Commission and other regions are scrambling to develop regulations to keep up with the rapid development of AI, the question of whether training AI models using copyrighted works classifies as an infringement may be decided in court cases such as this one.

The question is a hot topic, and in a May 16 Senate Judiciary Committee hearing, United States Senator Marsha Blackburn grilled OpenAI CEO Sam Altman about the issue.

While Altman noted that “creators deserve control over how their creations are used,” he refrained from committing not to train ChatGPT to use copyrighted works without consent, instead suggesting that his firm was working with creators to ensure they are compensated in some way.

AI companies argue “transformative use”

AI companies generally argue that their models do not infringe on copyright laws because they transform the original work, therefore qualifying as fair use — at least under U.S. laws.

“Fair use” is a doctrine in the U.S. that allows for limited use of copyrighted data without the need to acquire permission from the copyright holder.

Some of the key factors considered when determining whether the use of copyrighted material classifies as fair use include the purpose of the use — particularly, whether it’s being used for commercial gain — and whether it threatens the livelihood of the original creator by competing with their works.

The Supreme Court’s Warhol opinion

On May 18, the Supreme Court of the United States, considering these factors, issued an opinion that may play a significant role in the future of generative AI.

The ruling in Andy Warhol Foundation for the Visual Arts v. Goldsmith found that famous artist Andy Warhol’s 1984 work “Orange Prince” infringed on the rights of rock photographer Lynn Goldsmith, as the work was intended to be used commercially and, therefore, could not be covered by the fair use exemption.

While the ruling doesn’t change copyright law, it does clarify how transformative use is defined. 

Mitch Glazier, chairman and CEO of the Recording Industry Association of America — a music advocacy organization — was thankful for the decision, noting that “claims of ‘transformative use’ cannot undermine the basic rights given to all creators under the Copyright Act.”

Given that many AI companies are selling access to their AI models after training them using creators’ works, the argument that they are transforming the original works and therefore qualify for the fair use exemption may have been rendered ineffective by the decision.

There is no clear consensus, however.

In a May 23 article, Jon Baumgarten — a former general counsel at the U.S. Copyright Office who participated in the formation of the Copyright Act — said the case shows that fair use depends on many factors, and argued that the current general counsel's blanket assertion that generative AI is fair use "is over-generalized, oversimplified and unduly conclusory."

A safer path?

The legal question marks surrounding generative AI models trained using copyrighted works have prompted some firms to heavily restrict the data going into their models.

For example, on May 23, software firm Adobe announced the launch of a generative AI model called Generative Fill, which allows Photoshop users to “create extraordinary imagery from a simple text prompt.”

An example of Generative Fill’s capabilities. Source: Adobe

While the product is similar to Stability AI’s Stable Diffusion, the AI model powering Generative Fill is trained using only stock photos from its own database, which — according to Adobe — helps ensure it “won’t generate content based on other people’s work, brands, or intellectual property.”

Related: Microsoft urges lawmakers, companies to ‘step up’ with AI guardrails

This may be the safer path from a legal perspective, but AI models are only as good as the data fed into them. ChatGPT and other popular AI tools would not be as accurate or useful as they are today if they had not been trained on vast amounts of data scraped from the web.

So, while creators might be emboldened by the recent Warhol decision — and there is no question that their works should be protected by copyright law — it is worth considering what its broader effect might be.

If generative AI models can only be trained using copyright-free data, what kind of effect will that have on innovation and productivity growth?

After all, productivity growth is considered by many to be the single most significant contributor to raising the standard of living for a country’s citizens, as highlighted in a famous quote from prominent economist Paul Krugman in his 1994 book The Age of Diminished Expectations:

“Productivity isn't everything, but in the long run it is almost everything. A country’s ability to improve its standard of living over time depends almost entirely on its ability to raise its output per worker.”


Blizzard and Adobe tap generative AI tools to be ‘co-pilot’ to humans

The two tech firms say the goal of introducing generative artificial intelligence tools isn't to replace humans but to help them.

Generative artificial intelligence tools are being rolled out by tech firms Adobe and Activision Blizzard, with each claiming the tools are there to assist humans in creating content, not to replace jobs.

On May 23, graphic software giant Adobe launched "Generative Fill," which allows users to "generate content from inside Photoshop with a text prompt."

The same day, The New York Times reported that Allen Adham, chief design officer at gaming firm Activision Blizzard, told employees in an email last month that the firm is exploring the use of an image-generating AI to assist in game design.

Adobe’s new tool is intended to be a “co-pilot” alongside humans rather than to replace graphic designers.

Andrew Guerrero, vice president of global insights at Blizzard, voiced a similar sentiment, saying that the goal for its AI tool — Blizzard Diffusion — “is to remove a repetitive and manual process and enable artists to spend more time on creativity.”

Meanwhile, Adobe’s Asia-Pacific director of digital media and strategy, Chandra Sinnathamby, told The Guardian on May 23 that its tool was “intended as a co-pilot to speed up the process rather than to replace graphic designers altogether.”

Sinnathamby said precautions had been implemented to avoid confusion over what was made by humans versus what was generated by AI. Artists who contributed stock photos are also paid when their work is used by the AI, he said.

Adobe and Blizzard are not the only technology companies excited by generative AI.

Related: AI financial tools: A smart way to manage money or a risky experiment?

On May 23, Nikesh Arora, CEO of cybersecurity firm Palo Alto Networks, appeared on Mad Money with Jim Cramer to tout the benefits of generative AI for cybersecurity.

He declared that its implementation will significantly increase efficiency and allow the company to "double in size within the next few years without having to proportionally scale employees."

The developments come as ChatGPT creator OpenAI warned that in 10 years, “AI systems will exceed expert skill level in most domains” and called for increased government oversight of AI development.

Many have aired concerns about potential job losses due to the advancement and adoption of AI, while others have claimed otherwise, saying the technology could create a similar number of new jobs to replace those lost.
