AI Implementations of Today and Tomorrow
- Henry Marsden

- Sep 8, 2025
- 6 min read
‘Artificial Intelligence’ is no longer a theoretical technology hovering on the horizon; it is being embedded, at ever-increasing speed, into everyday workflows, powering products and increasingly shaping entire industries. For the music business, as for every other sector, the question is no longer if AI will matter, but how it is being applied in practice today, and where it is likely to drive competitive advantage tomorrow.

I spent last week at the fantastic MUSIC FRONTIERS in Berlin, a conference set firmly at the intersection of AI and the Music Business. A consistent theme across the two days was that AI is both “now” and “not yet”: it is everywhere, permeating workflows from the immediate day-to-day through to the strategies proposed for the next five years. Most businesses imply they are already heavily reliant on AI, but very few yet have a value proposition centred on core AI functionality.
Everywhere on this spectrum, though, it is clear that AI adoption is happening at multiple levels simultaneously. Each level has its own practicalities, its own opportunities, and its own risks.
Broadly speaking, we can think of current AI adoption in three distinct layers:
Day-to-day, ad-hoc usage - individuals using AI as an assistant or co-pilot.
Scaled implementation - organisations embedding third-party models into their processes.
Proprietary trained models - companies training their own systems on large datasets to create tailored AI for specific outcomes.
Each represents a different degree of sophistication, autonomy, and investment. Each also tells us something about where the music industry (amongst others) might be heading.
1. Everyday Copilot: AI in the Flow of Work
The most universal, and often underestimated, layer of AI adoption is the ad-hoc, everyday usage. AI today has become a co-pilot- a tool used by individuals to accelerate their “jobs to be done”.
In this mode, humans remain firmly in control. AI is not making strategic decisions or running processes autonomously; it is simply speeding up existing workflows, removing friction, and offering ideas. It acts as a hyper-efficient assistant that can be deployed at any moment, in any context.
I’m sure the common examples will already be reflected in your day-to-day:
Drafting and editing emails
Distilling long documents and multiple sources into digestible summaries
Acting as a quick tutor or reminder for spreadsheet formulas
Replacing Google Search
Brainstorming marketing copy or presentation ideas
Providing a creative sounding board for campaigns, strategies and proposals
This is the level of AI that affects everyone, whatever your niche and whatever your industry (not to mention how it touches our personal lives: meal planning, exercise regimes, present ideas). It’s about time savings and mental bandwidth. Tasks that used to take an hour can take ten minutes, or can be achieved with more context than Search alone was ever able to provide.
I haven’t yet seen it deployed as a top-down policy, but it’s clear staff at every level are now able to reallocate time. A royalties analyst can spend less time cleaning spreadsheets and more time actually analysing trends. A sync executive can cut down the hours spent on email back-and-forth and instead focus on creative placements, or simply digesting new music! Marketing teams in particular are using AI to expand the breadth of campaigns they test and execute without adding new headcount.
It’s not necessarily glamorous, but it is transformative. As with any efficiency tool in history, from the typewriter to email, adoption is uneven. What is already clear, though, is that those who embrace it are gaining a material productivity edge.
2. Scaled Implementation: AI at the Core of Processes
The second level of adoption sees organisations move from ad-hoc usage to embedding AI systematically into repeatable workflows. This is less about one-off productivity boosts and more about scaling efficiency or capability more widely, across teams and products.
A clear example is software development. Tools like Cursor have collapsed development cycles by integrating AI directly into the coding environment. Instead of Googling endlessly for debugging help, developers can stay in flow while AI suggests fixes, writes functions, and even runs commands for testing. This isn’t replacing developers, but it is supercharging them.
Note too that the pace of tool releases is accelerating: from 2022 (the launch of ChatGPT) through to today, increasingly sophisticated tooling has arrived at shorter and shorter intervals. A key takeaway, however, is that humans are still required ‘in the loop’: purely AI-generated code is rarely structured well enough to stand alone. Domain experience is crucial. AI is an accelerant, not a replacement.
But it isn’t just coding- other verticals are also integrating AI at scale:
Translation: Multilingual websites and product documentation now updated consistently and automatically.
Data aggregation: Summarising data points in a consistent and systematic manner.
Content moderation: Platforms scanning comments and reviews for sentiment analysis.
It should be noted that in 90% of cases these are simply ChatGPT API integrations. They are not deep value creators, but bolted-on (and often poorly thought-through) AI additions. Strava’s “Athlete Intelligence” is a good case study: AI takes raw fitness data and turns it into (typically useless) insights for users.
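To make the “thin API wrapper” point concrete, most of these integrations amount to a fixed prompt template around a chat-completions endpoint. Here is a minimal sketch of the sentiment-analysis case using only the standard library; the model name and prompt wording are illustrative assumptions, not any vendor’s recommended values:

```python
import json
import urllib.request

# Standard chat-completions endpoint (OpenAI-style REST API)
API_URL = "https://api.openai.com/v1/chat/completions"

def build_sentiment_request(comment: str, model: str = "gpt-4o-mini") -> dict:
    """Wrap a user comment in a fixed prompt template for sentiment analysis."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Classify the sentiment of the user's comment as "
                        "positive, negative, or neutral. Reply with one word."},
            {"role": "user", "content": comment},
        ],
    }

def classify_sentiment(comment: str, api_key: str) -> str:
    """Send the request. Requires a valid API key and network access."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_sentiment_request(comment)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"].strip()
```

The entire “AI feature” is the system prompt; everything else is plumbing, which is why these additions are so easy to bolt on and so hard to differentiate.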

In the music world, this second layer of adoption is an easy entry point for publishers, PROs, and platforms. Translation tools can make lyrics instantly available in dozens of languages. AI-driven dashboards can aggregate income streams across hundreds of territories into real-time insights (Absolute Anthology is leading the way here).
This is where AI stops being just an assistant and becomes an operational multiplier. It still requires human oversight, but the leverage created and the resources saved are significant.
3. Proprietary Models: Training AI on Your Own Data
The third layer is the most advanced and the most strategic: proprietary trained large language models (LLMs).
Instead of relying on off-the-shelf tools like ChatGPT or Claude, companies can train their own models, crucially on their own datasets exclusively. This can be done by fine-tuning open-source models (like Meta’s Llama), or by building entirely bespoke architectures.
Why does this matter? Because proprietary training allows companies to create AI systems that are uniquely optimised for their specific data, tasks, and objectives. They aren’t biased or diluted by irrelevant or outdated data.
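Much of the work here is unglamorous data preparation: serialising proprietary records into the prompt/completion pairs that supervised fine-tuning pipelines typically consume. A minimal sketch, using hypothetical field names for a publisher’s catalogue:

```python
import json

def catalogue_to_jsonl(works: list) -> str:
    """Serialise catalogue records into JSONL prompt/completion pairs
    for supervised fine-tuning. The field names ('title', 'writers')
    are illustrative, not a real schema."""
    lines = []
    for work in works:
        pair = {
            "prompt": f"Writers for the work '{work['title']}':",
            "completion": ", ".join(work["writers"]),
        }
        lines.append(json.dumps(pair))
    return "\n".join(lines)

catalogue = [
    {"title": "Midnight Train", "writers": ["A. Smith", "B. Jones"]},
]
print(catalogue_to_jsonl(catalogue))
```

The quality and exclusivity of this training data, far more than the model architecture, is what makes the resulting system defensible.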
In the music industry, this could mean:
Work-recording matching: Using proprietary models to reconcile compositions with recordings across millions of data points. This is one of publishing’s hardest problems, and one with direct revenue implications.
Royalty forecasting: Training a model on decades of income data to predict cashflows under different market or licensing scenarios.
Generative creativity: Models trained on cleared data can be used for co-writing, remixing, or custom sound design. This is already how generative-AI players such as Suno and Udio work under the hood.
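To make the work-recording matching problem concrete, here is a toy sketch using simple string similarity from the standard library. The titles are invented, and a real system would lean on far richer signals (ISWC/ISRC codes, writer splits, audio fingerprints) rather than title text alone:

```python
from difflib import SequenceMatcher

def normalise(title: str) -> str:
    """Crude normalisation: lowercase and drop bracketed qualifiers like '(Live)'."""
    base = title.split("(")[0]
    return " ".join(base.lower().split())

def match_recordings(works, recordings, threshold: float = 0.85) -> dict:
    """For each registered work, find the best-matching recording title,
    or None if nothing clears the similarity threshold."""
    matches = {}
    for work in works:
        best, best_score = None, 0.0
        for rec in recordings:
            score = SequenceMatcher(None, normalise(work), normalise(rec)).ratio()
            if score > best_score:
                best, best_score = rec, score
        matches[work] = best if best_score >= threshold else None
    return matches

works = ["Midnight Train", "Blue Horizon"]
recordings = ["Midnight Train (Live)", "Golden Hour"]
print(match_recordings(works, recordings))
```

Even this toy version illustrates the core trade-off: the threshold controls false matches (misallocated royalties) against missed matches (unclaimed income), which is exactly where a model trained on a publisher’s own historical reconciliations could outperform generic tooling.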
Of course, proprietary training still raises thorny questions- are companies truly training on their own data, or are they quietly incorporating publicly available or copyrighted datasets? The legal and commercial frameworks are slowly emerging, and the industry is watching closely.
One thing is still clear: the companies that succeed here will not just be “using AI” but will be AI-native. They will build business models where AI is the core operating engine rather than a bolt-on afterthought. This is also the layer where the most potential upside lies, and the most risk. Proprietary models are costly, complex, and resource-intensive.
But for those who succeed, they offer defensibility and differentiation.
Themes Across All Three Layers
Stepping back, what ties these three layers together?
If you’re not using AI at all, you’re already behind. Even the most basic everyday use cases create time savings that compound quickly. In competitive markets, refusing to engage with AI is akin to refusing to use email in the 1990s or the Internet in the 2000s.
Prompt engineering is key. Whether drafting an email or training a model, the ability to shape inputs to get the desired outputs is critical. Effective AI usage is not “fire and forget”: expertise is still required to frame the prompt correctly, and to filter the output with discernment.
Humans are still central. AI-generated tech stacks can be patchy, inconsistent, and often fail to adhere to professional standards. Developers, analysts, and creatives are not obsolete; their expertise is needed to guide, refine, and deploy AI effectively.
Conclusion: The Early Days of a Truly AI-Native World
The final frontier, proprietary LLMs, is where the most profound change will emerge. These models will power companies as “truly AI” businesses, with AI not as a tool, but as the core of their operating model.
We are not there yet. We are still in the early days, much like the dawn of the internet, or the rise of Web 2.0 and the first years of social media. The environment is messy, experimental, and full of hype. Many projects will fail- but the ones that succeed will define the next era of business.
For the music industry, this means we should expect disruption not just in obvious areas like generative AI, but in the very infrastructure of publishing, licensing, and rights management. The companies that go deep early, and that build with AI at their core, will gain first-mover advantages that will reshape the landscape for decades.
In the meantime, the imperative for everyone else is clear: start using AI, start experimenting, and start building literacy. Whether as co-pilot, process multiplier, or proprietary engine, AI is here- and it is already changing the way we work and create value.