Three major record label settlements, a landmark UK High Court ruling, and new guidance from the US Copyright Office reshaped the rules for AI-generated content between late 2025 and early 2026. For the estimated 150 million people worldwide now using generative AI tools for creative work, these legal shifts determine what you can sell, what you can protect, and what could get you sued.
This research guide maps the current legal landscape as of April 2026, covering copyright registration, commercial licensing terms across major platforms, training data disputes, and practical steps creators should take now.
The Current Legal State
The past 12 months produced the first wave of resolutions in AI copyright disputes. Some questions now have clear answers. Others remain in active litigation that could take years to resolve.
What Has Been Settled
Music licensing deals are done. Warner Music Group settled its lawsuit with Suno in November 2025 and signed a licensing partnership. The deal gives artists and songwriters control over whether their names, likenesses, voices, and compositions are used in AI-generated music. Suno committed to launching new licensed models in 2026, with current unlicensed models being deprecated.
Warner also settled with Udio in November 2025, while Universal Music Group reached a separate settlement with Udio to develop a licensed AI music platform trained on UMG catalog content. These deals established a new baseline: major AI music platforms must license training data from rights holders.
The UK ruled on image training. In November 2025, the UK High Court delivered its judgment in Getty Images v Stability AI, rejecting Getty's core copyright infringement claims. The court found that AI model weights do not constitute a "copy" of training images under UK copyright law. Getty abandoned its primary copyright and database infringement claims before closing arguments. The ruling was narrow in scope (UK law, secondary infringement only), but it gave AI companies a significant legal precedent.
The Copyright Office clarified copyrightability. In January 2025, the US Copyright Office published Part 2 of its AI report, establishing that purely AI-generated outputs cannot receive copyright registration. The Office followed with Part 3 in May 2025, addressing training data and fair use. Together, these reports form the most comprehensive government guidance on AI copyright to date.
What Remains in Court
NYT v OpenAI is the biggest pending case. The consolidated In re OpenAI Copyright Infringement Litigation, which bundles 16 lawsuits from news organizations and authors, moved into discovery in early 2026. A federal judge ordered OpenAI to produce 20 million anonymized ChatGPT conversation logs after the company tried to limit disclosure. The case centers on whether large language models "memorize" copyrighted text, a question with massive implications for every generative AI company.
Getty v Stability continues in the US. Although Getty lost in the UK, the company is pursuing a separate US lawsuit against Stability AI, where copyright law differs significantly. The US case could produce the opposite result under American fair use doctrine.
Sony remains in litigation. Sony Music is still suing both Suno and Udio, having joined neither the Warner nor the UMG settlements. The outcome could determine whether unlicensed music training remains permissible even as other labels strike deals.
Can You Copyright AI-Generated Content?
The US Copyright Office's guidance, combined with several registration decisions since 2023, establishes a spectrum of copyrightability based on human involvement.
Pure AI Output: No Copyright
Content generated entirely by AI with minimal human input (typing a prompt and selecting from outputs) cannot be registered for copyright. The Copyright Office denied registration for AI-generated images in the Zarya of the Dawn case (2023) while granting copyright to the human-authored text and selection/arrangement of those images.
Human-Directed AI: Partial Copyright Possible
When a human makes substantive creative decisions throughout the generation process (selecting specific elements, iterating with detailed direction, and arranging outputs into a cohesive work), some elements may qualify for copyright. The key test is whether the human exercised "sufficient creative control" over the expressive elements, not just the concept.
AI-Assisted Editing: Likely Copyrightable
Using AI as one tool in a larger creative process (generating a rough draft, then substantially editing; using AI to remove backgrounds in photos you shot; applying AI audio cleanup to recordings you made) preserves copyright in the final work. The human modifications and creative decisions provide the authorship the Copyright Office requires.
The Practical Standard
The Copyright Office evaluates registrations on a case-by-case basis. Its consistent position: prompts alone are generally insufficient for authorship because the user does not control the specific expressive output. But selecting, arranging, and modifying AI outputs with creative judgment can meet the authorship threshold. The more human creative labor involved in shaping the final work, the stronger the copyright claim.
Commercial Use: What Is Actually Allowed
Copyright protection and commercial use rights are separate questions. Even if you cannot copyright an AI-generated image, you may still have a contractual right to sell it. Here is how major platforms handle commercial licensing as of April 2026.
Image Generation Tools
| Platform | Commercial Use | Key Terms |
|---|---|---|
| Midjourney | Yes (paid plans) | Full ownership of outputs. Companies with over $1M in annual gross revenue need the Pro ($60/mo) or Mega ($120/mo) plan. No attribution required. |
| DALL-E / ChatGPT | Yes (all users) | Users own outputs and may use them commercially. Rights to reprint, sell, and merchandise explicitly granted. |
| Adobe Firefly | Yes (paid plans) | Trained on commercially licensed content. IP indemnification for paid subscribers, a protection few consumer-tier tools offer. |
| Stable Diffusion | Varies by model | Open-source models have varying licenses. SDXL uses an open license allowing commercial use. Check each model's specific license. |
Adobe Firefly stands out for risk-averse commercial use because its training data is fully licensed and it provides IP indemnification, meaning Adobe will defend you if someone claims your Firefly output infringes their copyright.
Video Generation Tools
| Platform | Commercial Use | Key Terms |
|---|---|---|
| Runway | Yes (paid plans, Standard+) | Non-exclusive, royalty-free license for commercial use. Free/Standard/Pro plan outputs may be used for model training by default. |
| Kling | Yes (paid plans) | Commercial use on paid tiers. Terms vary by region. |
| Sora (OpenAI) | Yes | Same ownership terms as DALL-E. Users own and can commercialize outputs. |
For AI video generation, the critical consideration is not just commercial rights but whether the platform uses your outputs as training data. Runway uses Free, Standard, and Pro tier outputs for training by default; Enterprise customers get different terms.
Music Generation Tools
| Platform | Commercial Use | Key Terms |
|---|---|---|
| Suno | Yes (paid plans) | Pro/Premier users own outputs. New licensed models launching 2026; current models will be deprecated. |
| Udio | Licensed platform only | Pivoting to licensed fan platform after UMG/WMG deals. Commercial terms tied to label agreements. |
| ElevenLabs Music | Yes (paid plans) | Commercial use rights on paid tiers. Voice cloning requires explicit consent documentation. |
The music space changed the most in 2025. Udio's pivot to a licensed fan platform after its label settlements means the freewheeling era of unlicensed AI music generation is ending. Suno's commitment to deprecate unlicensed models signals the same direction.
Voice Cloning: Consent Is Mandatory
Voice cloning occupies a distinct legal category. Tennessee's ELVIS Act (2024) was the first state law to explicitly extend right-of-publicity protections to AI voice clones. As of 2026, at least 12 additional states have introduced or passed voice cloning consent legislation. Illinois' BIPA explicitly covers voiceprints. New York's Right of Publicity law now protects voices under its likeness clause.
The practical rule: never clone a voice without documented written consent specifying use, territory, and duration. This applies whether you are using ElevenLabs, Resemble AI, or any other voice synthesis tool.
The Training Data Question
The unresolved core of nearly every AI copyright dispute is whether using copyrighted works to train AI models constitutes fair use. Two competing frameworks have emerged.
The Fair Use Argument
AI companies argue that training is transformative use because the model learns patterns and concepts rather than storing copies. The UK High Court's Getty ruling partly supported this view, finding that model weights are not "copies." US courts have not yet ruled definitively, though the NYT v OpenAI case's focus on "regurgitation" (models reproducing near-verbatim text) could narrow or expand fair use depending on the findings.
The Licensing Argument
Rights holders argue that training on copyrighted works without permission is infringement, regardless of whether the output resembles the original. The music industry's success in forcing licensing deals with Suno and Udio demonstrates this argument's practical power even before courts rule.
Opt-Out Mechanisms
While the legal debate continues, technical opt-out mechanisms are becoming standard. The EU AI Act requires GPAI model providers to respect rights reservations expressed through machine-readable protocols like robots.txt. Starting August 2026, providers must publish detailed training data summaries and demonstrate compliance with opt-out requests, with potential fines reaching 3% of annual worldwide turnover.
In the US, opt-out remains voluntary. Major AI companies offer varying levels of opt-out tools (OpenAI's GPTBot disallow, Google's Google-Extended), but there is no legal requirement to honor them yet.
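For site owners who want to exercise these opt-outs, the mechanism is a plain robots.txt file. The sketch below uses the documented user-agent tokens mentioned above (GPTBot for OpenAI's training crawler, Google-Extended for Google's AI training); other providers use their own tokens, so check each crawler's documentation before relying on this list.

```text
# robots.txt — opt out of AI-training crawls without affecting search indexing

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# All other crawlers (including regular search bots) remain unrestricted
User-agent: *
Allow: /
```

Note that Google-Extended is a training-only token: blocking it does not remove pages from Google Search results.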
What Creators Should Do Now
Based on the current legal landscape, here is a practical checklist for creators using AI tools commercially.
1. Document Your Creative Process
If you plan to register copyright for works that involve AI tools, keep records of every human creative decision. Save your prompts, iterations, manual edits, and selection rationale. The Copyright Office evaluates "sufficient human authorship" case by case, and documentation of your creative process is the strongest evidence.
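One lightweight way to keep such records is an append-only log with one timestamped entry per creative decision. The helper below is a minimal sketch; the field names and JSON Lines format are illustrative choices, not anything the Copyright Office prescribes.

```python
import datetime
import json

def log_creative_step(log_path: str, step_type: str, detail: str) -> None:
    """Append one timestamped record per human creative decision.

    step_type is a free-form label, e.g. "prompt", "selection", "manual_edit".
    Records are written as JSON Lines so the log survives partial writes.
    """
    entry = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "type": step_type,
        "detail": detail,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

Calling it after each prompt, each selection among outputs, and each manual edit produces a chronological record you can attach to a registration application.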
2. Use Commercially Licensed Tools
Verify that your AI tool's terms of service explicitly grant commercial use rights for your subscription tier. Free tiers often have restrictions. Adobe Firefly's IP indemnification makes it the safest choice for high-value commercial projects. For music, use only tools with label licensing agreements in place.
3. Understand Each Tool's Data Practices
Read the training data clause in every tool's terms. Some platforms use your inputs and outputs to train future models by default. If you are generating proprietary creative work, this could mean your concepts enter the training pool. Enterprise tiers typically offer opt-out from data training.
4. Get Voice Consent in Writing
Before cloning any voice, obtain written consent specifying: the identity being cloned, permitted uses, geographic scope, duration, and compensation terms. This is legally required in a growing number of US states and practically essential everywhere.
5. Add Human Creative Value
The strongest legal position combines AI generation with substantial human editing, selection, and arrangement. Use AI to generate starting points, then apply your creative judgment to shape the final work. This approach both strengthens copyright claims and produces better results.
6. Keep Records of Tool Terms at Time of Creation
Platform terms of service change. Screenshot or save the relevant licensing terms when you create commercial work. If a platform later restricts commercial use, your records show what was permitted when you created the work.
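Saving the terms page can be automated. This is a rough sketch, not a substitute for a proper screenshot with visible date: the output directory name is an arbitrary choice, and you would pass in the actual terms-of-service URL for each platform you use.

```python
import datetime
import pathlib
import urllib.request

def timestamped_filename(url: str, stamp: str) -> str:
    # Derive a stable filename from the URL's last path segment plus a UTC stamp.
    name = url.rstrip("/").split("/")[-1] or "index"
    return f"{name}-{stamp}.html"

def archive_terms(url: str, out_dir: str = "tos_archive") -> pathlib.Path:
    # Fetch the page and save it under a timestamped name as dated evidence
    # of what the platform's terms said when you created the work.
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = pathlib.Path(out_dir) / timestamped_filename(url, stamp)
    path.parent.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url) as resp:
        path.write_bytes(resp.read())
    return path
```

Run it each time you start a commercial project, and again whenever a platform announces a terms update.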
What to Watch
Several developments in the next 6 to 12 months could significantly shift the landscape for creators.
NYT v OpenAI discovery phase. The 20 million ChatGPT logs now in discovery could reveal how often language models reproduce copyrighted text verbatim. If the data shows widespread memorization, fair use defenses weaken significantly across the industry.
Getty v Stability AI (US case). The American case applies different copyright law than the UK ruling. A finding that training constitutes infringement under US law would force fundamental changes to how image generation models are built.
EU AI Act enforcement (August 2026). The European Parliament is pushing for stronger creator protections, including mandatory itemized lists of copyrighted works used in training and a centralized European opt-out register maintained by EUIPO. Enforcement begins August 2026 with fines up to 3% of global revenue.
US federal legislation. Multiple bills addressing AI and copyright have been introduced in Congress, though none have passed as of April 2026. The most significant proposals would create a federal right of publicity, require training data disclosure, and establish licensing frameworks for AI-generated content.
Sony's ongoing litigation. Sony has not settled with either Suno or Udio. A court ruling in Sony's favor could establish legal precedent that voluntary licensing deals cannot, potentially making unlicensed training definitively illegal regardless of individual deal structures.
Methodology
This research article synthesizes information from primary legal sources including US Copyright Office publications, court filings and rulings, platform terms of service, and legislative text. Settlement details come from verified reporting by TechCrunch, Hollywood Reporter, and Rolling Stone. EU regulatory information comes from official European Parliament and European Commission publications. Platform licensing terms were verified against current terms of service pages as of April 2026. Voice cloning regulations were cross-referenced against state legislative databases and legal analysis from Duquesne University School of Law. All external links were verified as accessible at time of publication.
Frequently Asked Questions
Can I sell AI-generated art commercially?
Yes, if your platform's terms of service permit it. Midjourney (paid plans), DALL-E, Adobe Firefly, and most major image generators grant commercial use rights to paying subscribers. However, purely AI-generated work likely cannot be copyrighted, meaning others could legally reproduce your exact output. Adding substantial human editing strengthens both your legal protection and your commercial position.
Do I need to disclose that I used AI in my creative work?
In the US, there is no general legal requirement to disclose AI use in commercial creative work as of April 2026. However, the US Copyright Office requires disclosure when registering works that contain AI-generated material. The EU AI Act will require labeling of AI-generated content starting August 2026. Some platforms and marketplaces have their own disclosure policies. When in doubt, disclose.
What happens if an AI tool I used changes its terms of service after I created my work?
Generally, work created under previous terms of service retains the rights granted at the time of creation, though this can depend on the specific contract language. This is why documenting the terms at the time you create commercial work matters. Some platforms include clauses allowing retroactive changes, so read the full terms carefully before relying on any AI tool for revenue-critical projects.