Music AI in the courts: Legal battles and strategic responses
On March 6, 2025, Water & Music hosted a webinar to explore where exactly we stand today on music, AI, and copyright.
In the past year, we've witnessed the AI landscape give rise to several landmark lawsuits and legal decisions, as well as an entirely new set of strategic questions that weren't even on our radar eighteen months ago.
To help navigate this complex terrain, we brought together two attorneys with deep expertise in intellectual property, copyright, and emerging tech, each representing different sides of the AI debate:
- Scott Sholder, Partner and Co-Chair of the Litigation Group at Cowan, DeBaets, Abrahams & Sheppard LLP. Sholder has been at the forefront of several cutting-edge copyright cases, including The Authors Guild’s ongoing class action lawsuit against OpenAI.
- Elizabeth Moody, Partner and Chair of the New Media Group at Granderson Des Rochers. Moody advises nearly a dozen generative voice and audio AI companies, including AudioShake, Sureel, Copyright Delta, and Fairly Trained, while also working with select rights holders on AI strategies.
Below are the key insights from our conversation — spanning copyright fundamentals, analysis of pivotal legal cases, and strategic implications for industry professionals working in this rapidly evolving space.
The foundation: What is copyrightable in an AI world?
Our discussion began with the fundamentals: What actually qualifies for copyright protection when AI is involved in the creative process?
The US Copyright Office's recent recommendations made three key points clear:
- No change needed to existing law: The current legal framework is sufficient to address generative AI. We can still draw a clear line between what is or isn't copyrightable.
- What is excluded: "Copyright does not extend to purely AI-generated material, or material where there is insufficient human control over the expressive elements." Importantly, based on currently available technology, “prompts alone do not provide sufficient control” to qualify for copyright.
- What is included: You can still use AI tools in the creative process and have your work qualify for copyright protection, as long as there's clear evidence of “original expression … created by a human author.”
Both speakers emphasized a critical takeaway for creators: Documentation is key.
"It's going to be very important to keep a record of where you used AI, and make sure that you're adding some creativity," noted Moody.
Sholder added that this is a departure from prior registration practices, in that determining copyrightability of work involving AI now requires "a stronger chain of title … a record of what you've done.”
What is the line between human and AI creation?
One of the most nuanced aspects of the current landscape is determining what constitutes human authorship versus AI generation.
Moody highlighted that while purely AI-generated content (simply inputting a prompt and using the raw output) isn't copyrightable, the reality of creative workflows is far more complex.
"In real-world scenarios, what one would subjectively call the 'good content' has really been created by artists and talented creators, who are using AI music services as a tool for inspiration," she explained. "The content that's really good is the content that has human-created elements.”
Sholder provided examples of where the Copyright Office has drawn the line in visual art applications:
- In 2023, the graphic novel Zarya of the Dawn had its MidJourney-generated images denied copyright protection, but the text and the act of compiling all the novel’s components (“selection, coordination, and arrangement of the Work’s written and visual elements”) were deemed protectable.
- Jason M. Allen’s AI-generated painting Théâtre d’Opéra Spatial, which won the Colorado State Fair’s art competition in 2022, was denied copyright registration the following year. Even though Allen claimed he did "hundreds of iterations and touch-ups," the iterative process alone wasn't sufficient.
For music specifically, some registrations have been granted on sound recordings with AI components, but only for the parts not generated by AI. "If the hook was generated with AI, but the lyrics were written by a person — or the composition was written by a person, but the sound recording was created by AI — then some parts are going to be copyrightable and other parts are not," Sholder explained.
Ultimately, the Copyright Office claims that “whether human contributions to AI-generated outputs are sufficient to constitute authorship must be analyzed on a case-by-case basis.”
Both of our speakers expressed concern about the practical implications of this approach. "I'm a little bit cynical about the ability for us in the future to tell the difference between AI-generated outputs and where there's a human contribution," said Moody, questioning whether the Copyright Office will be able to handle the volume of analysis required as AI becomes increasingly integrated into creative workflows.
Similarly, Sholder emphasized that while "the only bright line we have is that just raw output is not copyrightable," everything else falls into "shades of gray" — making documentation of the creative process increasingly critical for creators seeking copyright protection.
Three pivotal cases
We explored three major legal cases that are actively defining the boundaries of AI copyright in the US, and examined their implications for the music industry.
1. Thomson Reuters v. Ross Intelligence
This case represents the first major AI training ruling in the United States.
Ross Intelligence, a now-defunct legal research tool, used copyrighted headnotes (written legal summaries) from Thomson Reuters to train their legal search model. The court found this was not fair use and constituted copyright infringement.
Moody broke down the four factors that led to this conclusion:
- Character of use: Ross' use was commercial and not transformative — they essentially replicated headnotes and developed a competing product.
- Nature of the copyrighted work: While the headnotes were somewhat factual, they also involved editorial judgment and creativity in summarizing legal opinions.
- Amount used: Ross used a substantial portion of the headnotes, both in quantity and quality.
- Effect on the market: Ross' product directly competed with Reuters', potentially harming its market value.
Importantly, Ross is not a generative AI platform in the sense of synthesizing and creating new content itself. “Rather, when a user enters a legal question, Ross spits back relevant judicial opinions that have already been written," in the words of Judge Stephanos Bibas in his summary judgment opinion.
Hence, some have suggested that this fair use ruling might not apply to generative AI platforms. Sholder disagreed with this notion: "I don't think the distinction between generative and not generative is going to matter very much. It's either the use for AI training is transformative and commercial, or it's not."
For music specifically, Moody noted several important distinctions that could make music AI cases even stronger for rights holders:
- Music is purely expressive and creative, with no factual components like legal headnotes, and courts have historically given more protection to creative works.
- Many music AI models train on entire songs, not just summaries, potentially strengthening the "amount used" factor.
- Market competition could be even clearer with music AI, as generated outputs potentially compete directly with human artists.
2. Publishers v. Anthropic
This ongoing lawsuit between major music publishers (Concord, UMG Publishing, and ABKCO) and Anthropic centers on the use of copyrighted lyrics to train Claude, a large language model competing with ChatGPT.
While the case hasn't been fully resolved, Anthropic agreed to maintain guardrails that prevent the output of published lyrics. Sholder explained that guardrails are "adjustments … on the backend to prevent the model from putting out, in response to a prompt, verbatim replication of song lyrics that may have been ingested during the training process."
Moody added that there are also input-based guardrails that some music companies are putting in place around “training data selection — making sure you're obtaining explicit licenses or using public domain content." Key examples of this approach in practice include BMAT’s partnership with Voice-Swap and Fairly Trained’s certification program.
Limitations of the output-based guardrail approach
However, there’s a significant distinction between prompting an AI to directly reproduce copyrighted content, versus finding ways to generate similar content through creative prompting.
Sholder offered an example from image generation, where users learned to bypass direct restrictions on generating images of copyrighted characters (e.g. “Mario and Luigi”) by describing their visual characteristics instead of using their names (e.g. “two video game plumbers wearing red and green overalls”). The resulting outputs were still recognizably the copyrighted characters, despite the guardrails intended to prevent their reproduction.
References to name, likeness, or style in an AI prompt (e.g. “in the style of X”) are also not inherently protectable under the same laws as copyright. "Style is, in the copyright world, equated to an idea, and ideas are not copyrightable,” explained Sholder. “So just a ‘country song,’ a ‘metal song,’ a ‘rap song,’ a genre... is not copyrightable.”
However, he cautioned that the line between style and protected expression can be nuanced: "There's a fine line between where that vibe ends and where protected expression begins." While prompting an AI to create "in the style of Taylor Swift" isn't directly infringing her copyright, other legal issues could certainly arise if the output sounds too similar to specific songs by Swift, or if her name is used commercially without permission.
This limitation highlights the importance of clarifying whether a given case focuses on the inputs (training data and prompts) or the outputs (generated content). In the Anthropic case, while guardrails can prevent verbatim reproduction of copyrighted lyrics on the output, they don't resolve the underlying question of whether using lyrics for training input constitutes fair use.
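To make this limitation concrete, here is a minimal, hypothetical sketch of an output-based guardrail that blocks near-verbatim reproduction of known lyrics. The corpus, function names, and similarity threshold are all illustrative assumptions (real guardrail systems are far more sophisticated, and this is not how Anthropic's actually work) — but note how a loose paraphrase slips past the check, mirroring the "describe the plumbers" workaround Sholder described.

```python
from difflib import SequenceMatcher

# Hypothetical corpus of protected lyrics the model must not reproduce verbatim.
PROTECTED_LYRICS = [
    "we sang along to every word under the summer rain",
]

SIMILARITY_THRESHOLD = 0.85  # illustrative cutoff, not an industry standard


def passes_guardrail(generated_text: str) -> bool:
    """Return False if the output is near-verbatim to any protected lyric."""
    candidate = generated_text.lower().strip()
    for lyric in PROTECTED_LYRICS:
        # Character-level similarity; catches verbatim and near-verbatim copies
        # but is blind to paraphrases that convey the same expression.
        if SequenceMatcher(None, candidate, lyric).ratio() >= SIMILARITY_THRESHOLD:
            return False
    return True
```

A verbatim output like `"We sang along to every word under the summer rain"` is blocked, but a close paraphrase such as `"We sang every single word beneath the rain that summer"` sails through — which is precisely why output filters alone can't resolve the underlying training question.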
3. Labels v. Suno/Udio
All three major labels have sued two of the biggest music AI generation platforms, Suno and Udio, claiming they unlawfully copied recordings to train their AI models. This case is still in earlier stages compared to the others.
Sholder highlighted several notable aspects of these cases:
- They're the first sound recording lawsuits in an AI context, with interesting distinctions between pre-1972 and post-1972 recordings, which have different legal protections (federal copyright protection for sound recordings only began in 1972).
- Unlike some other cases, the labels’ complaints focus primarily on the input (training), rather than claiming substantially similar outputs. (That said, in the absence of explicit knowledge about training data, the labels’ argument does rely on inferential evidence — examining Suno and Udio outputs, and claiming that it would be impossible for those companies to achieve the quality and variety of output without training their models on vast amounts of copyrighted music.)
- The defendants' responses are unusually aggressive, containing "lengthy substantive narratives" that accuse the labels of monopolistic behavior and copyright misuse.
- The AI companies essentially admit they likely copied plaintiffs' works while training their models, yet still claim fair use protection.
Suno’s counter-complaint claims (bold emphasis added): “No one owns musical styles. Developing a tool to empower more people to create music, by analyzing what the building blocks of different styles consist of, is quintessential fair use under longstanding copyright doctrine."
In response, Sholder argued that this claim may be missing the specific point of the case. "I don't agree with the characterization that the record industry is trying to own styles,” he said. “That's just not what they're saying."
Strategic implications: What should rights holders and AI developers do now?
Given the evolving legal landscape, our speakers offered practical guidance for both sides of the AI equation.
For AI companies and developers
Moody outlined four key strategies for companies looking to develop AI tools responsibly:
- License your training data: "Making sure that either you're obtaining explicit licenses for the music that you're training on or the lyrics you're training on, or using public domain or open source licenses." (Stability’s documentation of training data for their Stable Audio 2.0 model is an exemplary case study.)
- Implement output filters: "Algorithms that help detect and modify the outputs that are too similar to the melodies or compositions that were trained on." (While this tech is still in development, Audible Magic has partnerships with the likes of Suno and Stability to prevent users from uploading copyrighted content as audio prompts for their generations.)
- Use transparent metadata: "Embed information about the AI's role in the creation process in the metadata... so the output will be clearly marked as having some AI involvement." (As an example, users of several AI tools can now upload their creations directly to SoundCloud, where they will be tagged automatically to show the tool used.)
- Focus on attribution: "I'm excited about seeing how [attribution technology] grows... it's a new technical way of providing an opportunity for remuneration and compensation to the artists whose music was trained on."
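As a concrete illustration of the transparent-metadata strategy, here is a minimal sketch that writes a JSON "sidecar" file recording the AI's role in a track's creation alongside the documented human contributions. The field names and schema are hypothetical — no single industry standard exists yet — but the idea matches the automatic tagging Moody described.

```python
import json
from pathlib import Path


def write_ai_metadata(
    audio_path: str, tool: str, role: str, human_elements: list[str]
) -> Path:
    """Write a JSON sidecar describing AI involvement next to the audio file.

    All field names here are illustrative, not a standard schema.
    """
    metadata = {
        "file": Path(audio_path).name,
        "ai_tool": tool,                    # e.g. the generation service used
        "ai_role": role,                    # e.g. "stem generation", "mastering"
        "human_elements": human_elements,   # documented human contributions
    }
    sidecar = Path(audio_path).with_suffix(".ai.json")
    sidecar.write_text(json.dumps(metadata, indent=2))
    return sidecar
```

A record like this doubles as the "chain of title" documentation both speakers recommended: it marks the output as AI-assisted for downstream platforms while preserving evidence of the human-authored elements that remain copyrightable.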
For rights holders
Sholder advised rights holders to thoroughly evaluate potential AI partners by asking key questions:
- Is their training data publicly available?
- What security measures do they have in place to protect your data?
- Can they sublicense your data or use it in other models?
- Have they made deals with anyone else?
- Are there guardrails for verbatim or substantially similar output?
- What happens if infringing output is discovered?
- Who owns the output resulting from using your data?
As Sholder emphasized: "It'll all come down to the contract.”
Moody similarly argued that regardless of how these legal cases resolve, partnership between AI developers and rights holders is becoming increasingly urgent, especially as accelerating trends make traditional licensing approaches less relevant or effective.
For instance, synthetic data — artificially created datasets that mimic the properties of real-world data without directly copying it — could potentially allow AI developers to train models without requiring licenses for original content. Open-source music generation models, like China’s YuE, are also being shared freely among consumers and developers, allowing broader access to AI capabilities without centralized control or licensing frameworks.
"If there isn't collaboration between the AI community and the rights holders soon, it'll become much more complicated,” said Moody.
Key takeaways for Water & Music members
- Document your creative process if you're using AI tools. The more human input and editing you can demonstrate, the stronger your copyright protection.
- Different elements of the same work may have different protection. AI-generated components likely won't be copyrightable, but your original contributions still can be.
- The Reuters v. Ross decision sets an important precedent that could strengthen rights holders' positions in music AI cases, especially given music's purely creative nature.
- Guardrails are becoming standard practice but have key limitations — both technical and practical — in preventing all forms of copyright infringement.
- Business partnerships are moving ahead of legal clarity — don't wait for final court decisions before developing your AI strategy.
Want to stay on top of these rapidly evolving music AI developments? Our complete Music AI Market Tracker includes detailed profiles of 200 music AI startups and tracking of 500+ AI news developments.
What's next? Keep an eye out for our upcoming deep dive on AI attribution technology, which both of our speakers highlighted as a critical development for the future of fair compensation in the AI era.
For any questions or to share your thoughts, please reach out to our inbox at members@waterandmusic.com.