Digital Dibs: Rival Views of Generative AI Copyrights
Imagine a world where machines churn out masterpieces with minimal human intervention, leaving copyright laws doing the limbo. This isn’t the premise of a new science fiction novel. It’s the reality of today’s generative artificial intelligence (AI).
Copyrights have been a part of U.S. law nearly since its founding, with the first federal law passed in 1790 to allow an author to reap the benefits of her work for a set period of time. Over time, the law expanded, providing the author with the exclusive right to reproduce or distribute her creation.
But these laws extend protection only to human innovation. Generative AI platforms like OpenAI’s ChatGPT often require very little human input, shattering the legal framework by posing a simple question: Who authored the material?
Is it the developer of the AI model who arguably enabled the “creative” process? Is it the owner of the AI model? Or is it the person who prompted the AI tool to generate the content? Is there even an author?
We’ll explore how two countries are answering these questions in different ways.
China vs. the United States
Though generative AI is novel, we’ve seen the underlying issue before: legal uncertainty surrounding new technologies always creates risk, requiring new approaches that balance existing rights against innovation. Generative AI is only the latest example.
Going back a few decades, the Internet itself broke the norms and bounds of copyright law, requiring new legislation and judicial guidance on the protection of online content, legal responsibility for uploading and downloading material, ownership of user-generated content, and the liability of online intermediaries like hosts and internet service providers (ISPs).
So too with AI. The world will soon see legal developments that seek to consider the interests of all stakeholders, including the developers, owners, users—and the humans who produce the content needed to train these tools.
In fact, we already have two global superpowers—the United States and China—taking the first steps toward blending generative AI and existing copyright law.
In the Zarya of the Dawn case, the U.S. Copyright Office denied copyright protection to two images in the comic book Zarya of the Dawn. The Office had initially granted protection for the entire comic, but it later became aware that the artist, Kristina Kashtanova, had relied on a generative AI tool called Midjourney to create some or all of the images in the work. Because Kashtanova admitted she was not the sole author of the entire work, the Office concluded that the information in her application was “incorrect or, at a minimum, substantively incomplete” and initiated cancellation proceedings.
Ultimately, the Copyright Office cancelled the registration for the images, concluding that under United States law a work must have a human author. Obviously, Midjourney wasn’t human, but what about Kashtanova? Nope, reasoned the Copyright Office, because Kashtanova started the image generation process with a “field of visual ‘noise’” provided by Midjourney. Though she had “influence[d]” the generated images through her various prompts, she had not exercised the control necessary to be the “mastermind” behind them.
The end result? At least in the view of the Copyright Office, images created by AI with iterative human prompting can have no author and, therefore, no copyright protection.
But in a very similar case, China came to a different conclusion, holding that the human creator was the author and that his work was entitled to copyright protection. In Li v. Liu (case version in Chinese, unofficial version in English), Li used another generative AI model, Stable Diffusion, to create an image. Like Kashtanova, he prompted the program, continuing to change and modify the results. After a woman used the final image on her website without any attribution to Li, he sued.
Like the U.S. Copyright Office, the court in Li v. Liu determined that Stable Diffusion couldn’t be the author because it wasn’t human. Similarly, the designer of the program couldn’t be the author because that person merely created the tool used, not the image created.
Instead, the Chinese court determined that Li was the author and, therefore, the image was copyrightable. The court held that a work must be “original” and “an intellectual achievement” to be copyrightable. The court reasoned that Li’s work was original because it didn’t exist prior to his creation of it. Further, the work was an intellectual achievement since he prompted and modified the image until he was finished.
Though the two countries came to different conclusions about whether the work was copyrightable, both agreed that the author of the work must be human.
And we breathe a sigh of relief: The machines can’t take over (yet).
What Now?
So, authors must be human, and different countries approach intellectual property differently. Why does it matter?
First, considering how fast digital work travels, these differing approaches will likely prompt many lawsuits in the United States and abroad, as courts struggle to mesh competing laws. U.S. digital creators, lacking protection, will likely limit how much they invest in AI-related projects. Meanwhile, their counterparts in China will have a head start, quickly moving forward with innovation and enjoying the economic rewards of creative automation.
Though both countries have backed into a way to square generative AI with existing law, the question of authorship is far from answered. For example, would the result have been different if only humans had altered the images after AI produced them? What if the AI was so advanced that it was capable of the “intellectual achievement” required by the court in Li? These gray areas open the door to potentially conflicting decisions.
In any event, while regulation and the judicial system catch up to innovation—only for innovation to inevitably race ahead again—those developing and using AI should remain vigilant in managing their content to safeguard against intellectual property-related risks. Companies should objectively assess the copyright risks associated with generative AI, keeping abreast of legal and regulatory developments and ensuring that they align internal policies and training with emerging laws and guidance.
See also:
- What ChatGPT Means for Finance
- A Sound AI Policy Mitigates Risk and Eliminates Ambiguity
- ‘Existential Risks’: AI Anxiety Fueling Stream of Shareholder Proposals
Greg Moreman is the vice president of operations and compliance at Level Legal. Prior to this role, he was a consulting director for Level Legal for more than 11 years, with a relentless commitment to creating the most efficient solutions for clients. Moreman has more than a decade of experience developing compliance solutions for a diverse array of clients in multiple industries. He is a former prosecutor, and in private practice he coordinated discovery for numerous cases in a variety of practice areas.
This article first appeared in Cybersecurity Law & Strategy, an ALM publication for privacy and security professionals, chief information security officers, chief information officers, chief technology officers, corporate counsel, Internet and technology practitioners, and in-house counsel.