A coalition of independent musicians has filed a proposed class action lawsuit against Google, claiming the company trained its Lyria 3 AI music model on 44 million clips and 280,000 hours of audio scraped from YouTube without permission. The suit was filed March 6-9, 2026, in the US District Court for the Northern District of Illinois.

What Happened

The plaintiffs, including Sam Kogon, Magnus Fiennes, Michael Mell, Attack the Sound, Stan and James Burjek, and Directrix, allege that Google used YouTube's vast music library to build a tool that now competes directly with the artists whose work made YouTube valuable in the first place. According to Billboard, the lawsuit frames Google's shift as going from "distributor to competitor" by extracting copyrighted audio from its own platform.

Google launched Lyria 3 on February 18, 2026, through the Gemini app. The model generates 30-second music tracks from text prompts. Music Business Worldwide reports that the plaintiffs' legal claims include copyright infringement, removal of copyright management information, circumvention of access controls, false endorsement under the Lanham Act, and violations of Illinois' Biometric Information Privacy Act (BIPA).

Google has maintained that Lyria 3 was trained on music it had "a right to use under our terms of service, partner agreements and applicable law." The plaintiffs dispute that characterization, arguing that YouTube's terms of service do not grant Google the right to train generative AI models on uploaded content.

Why It Matters for Creators

This lawsuit is significant for several reasons. First, the same group of indie artists has sued four AI music developers: Suno, Udio, Mureka, and now Google. As Digital Music News notes, this marks a sustained legal campaign targeting the entire AI music generation sector, not just startups.

Second, the Google case raises a distinct legal question. Unlike Suno or Udio, Google owns the platform where much of the training data lives. The lawsuit directly challenges whether a platform can repurpose user-uploaded content for AI training under existing terms of service. The outcome could reshape how every major tech platform handles creator content.

Third, the BIPA claims add a novel angle. If the court finds that artists' recorded voices qualify as voiceprints, a protected biometric identifier under Illinois law, it could open a new front in AI training litigation nationwide.

Meanwhile, major record labels continue their separate legal actions against Suno, signaling that both independent and major-label artists view AI music training as a fundamental threat to their livelihoods.

What to Do Next

Independent musicians and creators should review the terms of service for every platform where they upload content. Understanding what rights you grant upon upload is now critical as AI training disputes escalate.

Creators who believe their work was used to train Lyria 3 or other AI music tools should document their catalog and monitor the case for updates on class membership. The Northern District of Illinois docket will contain all filings as the case progresses.

This lawsuit will likely take months or years to resolve, but it is already shaping the legal boundaries of AI training. Whether you are an independent artist, a label executive, or a developer building AI tools, the outcome will set precedents that affect the entire music industry.


This story was covered by Creative AI News.