Lawsuits to Licensing: AI Music Enters Its Next Phase
- Henry Marsden

- Jun 9, 2025
- 6 min read
There’s been a pretty seismic shift in the AI x Music conversation over the past few weeks.

The major record labels and two of the most prominent generative music platforms, Suno and Udio, have reportedly moved from trading lawsuits to negotiating licenses. These generative AI platforms were (and in some cases still are) facing litigation for using copyrighted music to train their models. Now they're sitting at the table with the very same rights-holding companies, working out deals that will both legitimise and monetise their models.
It's a stark change in tone and a clear sign of where things are heading. The question has finally, and inevitably, moved from whether AI-generated music will be licensed to how. But because these negotiations are happening behind closed doors, many of the structural dynamics that shaped the streaming economy, for better or worse, are playing out again in real time.
We’re watching the future of music being negotiated, but only a few players are in the room.
Who will be included in that future, and who won’t?
Déjà Vu?
When Spotify and the other streaming services first emerged, major labels struck licensing deals that exchanged legitimised access to their vast catalogs for two key things:
Advance payments or minimum guarantees, and
Equity stakes in the platforms themselves.
The latter meant labels benefitted not just from the performance of their artists, but from the platform’s overall growth. It was a prescient move. It also gave them leverage over platform policy and long-term business model decisions- leverage that many independents, songwriters, and artists wouldn’t share.
There was also much discussion about how the subsequent windfall from offloading these stakes was then shared out with artists themselves (noting Universal has still yet to liquidate its stake- perhaps maintaining its leverage over policy and, in particular, periodic licensing discussions?).
We're seeing early signs that history could be repeating itself. The majors' negotiations with Suno, Udio and others are likely to include not just flat fees or per-track royalties (more on this below), but also equity stakes in the companies themselves. Is this another forward-thinking manoeuvre- especially if these tools become widely adopted by users and creators alike?
Licensing your own Competition
Here's where things diverge from streaming: Suno and Udio aren't platforms where you play existing songs. They're platforms generating new content outright- often in the style of (and hence trained on) existing artists and catalogs. That generated content doesn't just sit beside the original- it competes with it, significantly changing the economics at play.
It also means a blanket license raises more questions than it answers. Would it be a flat fee to train, or a fee based on what is generated? How would the revenue be split if there's no performance or usage data, and- perhaps most importantly- no clear attribution back to the original works used to train the model? As with the rest of the music industry and our previous explorations of getting creators paid via metadata, attribution is the necessary bedrock for accurately and efficiently sharing and apportioning generated value.
If we can’t attribute, how do we compensate?
Attribution is the Holy Grail
Some argue that models like Udio and Suno should be able to attribute their outputs back to training data- offering a kind of digital fingerprint for every AI-generated song. The current, and brutal, reality is that this is difficult to achieve. Generative AI models are hugely complex and powerful probabilistic systems, applying advanced statistical analysis to prompts and training data to effectively guess at plausible responses.
This means in the current setup there is no reliable way to trace influence, or determine how much of a particular song or artist shaped the output. It’s another question whether this was always going to be the way- whether the tech establishment has skipped friction by not prioritising structures that facilitate this sort of traceability.
This has essentially always been the case with innovation- build the compelling product or service first (to validate both the idea and the commercial potential), and, if time allows, work through the embedded assumptions and hurdles considered less risky later. That's where we are now- the horse has already bolted, and we may be too late.
If attribution does become part of the licensing framework, the value of clean and structured metadata will skyrocket. Publishers and labels with accurate work-to-recording links, songwriter splits and identifiers will be better positioned to maximise any attributable revenue- to get paid. It’s much the same with the digitisation of consumption- it forced rights holders to digitise their catalog. This was especially true as policies matured to mandate the presence of certain metadata points- e.g. ISRCs for ingestion to streaming services (as arguably should also be the case now with ISWCs).
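The kind of record this implies can be sketched in a few lines. Everything below- the identifiers, the writer names, the percentages- is hypothetical, purely to illustrate how a clean work-to-recording link with songwriter splits would let an attributable payout be apportioned automatically:

```python
# Illustrative sketch only: all identifiers and split values are made up.
from dataclasses import dataclass

@dataclass
class WorkRecordingLink:
    isrc: str                 # identifies the recording
    iswc: str                 # identifies the underlying work
    splits: dict[str, float]  # songwriter -> share (must sum to 1.0)

def apportion(link: WorkRecordingLink, payout: float) -> dict[str, float]:
    """Split an attributable payout across songwriters by their shares."""
    total = sum(link.splits.values())
    if abs(total - 1.0) > 1e-9:
        raise ValueError(f"splits sum to {total}, expected 1.0")
    return {writer: round(payout * share, 2)
            for writer, share in link.splits.items()}

link = WorkRecordingLink(
    isrc="GBXXX2500001",      # hypothetical ISRC
    iswc="T-000.000.001-0",   # hypothetical ISWC
    splits={"Writer A": 0.5, "Writer B": 0.3, "Writer C": 0.2},
)
print(apportion(link, 100.0))
```

With the link and splits in place, apportionment is trivial arithmetic; without them, there is nothing to apportion against- which is exactly the gap described above.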
We're in danger of being caught in the gap (... again). Either ushering in a new wave of enforced infrastructure investment to play catchup, or creating yet another gulf between those with leverage and those without. The incentives, I hope, will align so the best-organised rights holders will benefit the most- clean data is king.
Will other stakeholders get a say?
If (or when) deals are struck, the question lingers: will artists be able to opt out of having their work used in AI training?
In all honesty, it may already be too late. Many models have already been trained on copyrighted music- hence the companies seeking forgiveness rather than permission. This was a precursor to the lawsuits being filed- the data is already ingested. In some cases the training also deliberately occurred in jurisdictions with looser copyright laws, creating cross-border enforcement headaches mirroring those in piracy and streaming.
Even if opt-outs become the norm going forward, will they only apply to future training? What about the existing datasets already built? Will models be retrained? Artists' choices are unlikely to apply retroactively, rendering them ineffective against models that never get re-trained.
Right now, the negotiations are being led by the majors. Once again, independents and creators aren’t at the table. This risks recreating the same imbalance we saw with streaming: big rights holders strike lucrative deals, while the long tail is left to operate under pre-negotiated terms they had no voice in shaping, and likely are left only picking up the scraps.
Without coordinated pressure from publishers, collection societies, or lawmakers, we may once again end up with a system designed around scale rather than fairness. The framework by which AI-generated content is monetised is being thrashed out now- and many stakeholders just aren't at the table (though, again, that's nothing particularly different from how history has previously played out).
Interestingly, there’s another, more ethical, approach. Rather than scraping music from the internet or betting on fair use defences, some platforms are choosing to build upon a foundation of consent. Last week in Cannes I was introduced to Delphos.ai. Their model only trains on music a user has explicitly given it, meaning every piece of output is inherently attributed and can be cleared for use, with the rights holders part of the loop from day one.
This kind of baked-in optionality changes everything:
Creators retain agency. You choose whether your music is included, and how it’s used.
Revenue can be shared. Attribution is clear, and monetisation can be cleanly split.
Output is legally usable. No legal grey zones, no takedowns, no risk (other than the big one we haven’t even touched upon here- is material fully generated without direct human input even subject to copyright at all?)
It’s slower to scale than scraping the web. But it’s more ethical, more artist-friendly, and likely more sustainable in the long run as lawmakers and licensing models catch up and the copyright debate plays out.
The Bottom Line
AI music is no longer hypothetical. It's here, and the economic frameworks are being built as we speak. But we have to ask tough questions:
Will all rights holders be compensated fairly? And how?
Will artists have any agency over how their work is utilised?
Will attribution and transparency be baked into the model, or bolted on too late?
If we don’t get this right, we risk falling into the same traps of the streaming economy- where a few benefit immensely and opaquely, and the rest are left behind.