Why creative teams shouldn’t be thinking about rights at all

Creative agencies are doing extraordinary things with AI. But rights clearance hasn’t kept up, and the gap between what’s technically possible and what’s properly licensed is widening by the week.

From generative campaigns that react in real time to conversational experiences powered by talent likenesses, the creative ceiling has never been higher. However, the tools are moving faster than the contracts, and creative teams are outrunning legal teams. This piece isn't here to slow anyone down - quite the opposite. It's about building the infrastructure that lets creative agencies move faster and with more confidence, because the rights are already handled.

The Rights Gap Nobody Is Talking About

Generative AI is now integral to agency operations. From ideation and content creation to testing, optimisation, and delivery, it’s embedded in the workflow. But this is where it gets complicated. When your AI tool generates an image, a voice, a likeness, a style, or a piece of content that draws on someone’s intellectual property, a series of rights questions kick in that most creative workflows aren’t set up to answer. Was the underlying model trained on licensed or unlicensed content? Does the output use a real person’s name, image, likeness, or voice, and do you have AI-specific consent for that? What usage rights exist for the generated content, and can it be used across all markets, platforms, and durations the campaign requires?

These aren’t theoretical questions. Over 70 copyright lawsuits have been filed globally against AI companies, and the regulatory landscape - from the EU AI Act to the US NO FAKES Act - is tightening fast. The agencies that build the right foundations now will avoid legal problems and be able to move faster.

What Proper AI Rights Clearance Actually Looks Like

The starting point is talent IP. If your campaign features a real person’s likeness, voice, or persona - whether that’s an athlete, actor, creator, or public figure - you need AI-specific consent. Traditional endorsement contracts almost never cover generative AI use. The SAG-AFTRA AI guidelines, the WGA’s disclosure rules, and state-level legislation in New York and California all require express written consent for digital replica use. That means documented, scoped, time-bound permission that specifies platforms, markets, formats, and use cases.
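
To make that concrete, here is a minimal sketch of what a scoped, time-bound consent record could look like as structured data. Everything in it - the AIConsentRecord class, the field names, the covers check - is illustrative, not a reference to any real registry or contract schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIConsentRecord:
    """Illustrative, hypothetical record of AI-specific talent consent."""
    talent_name: str
    rights_granted: list[str]      # e.g. ["likeness", "voice"]
    platforms: list[str]           # e.g. ["social", "paid_social", "ctv"]
    markets: list[str]             # ISO country codes, e.g. ["GB", "DE", "BR"]
    formats: list[str]             # e.g. ["video", "still"]
    use_cases: list[str]           # e.g. ["spring_campaign"]
    valid_from: date               # start of the granted window
    valid_until: date              # end of the granted window
    consent_document_ref: str      # pointer to the signed written consent

    def covers(self, platform: str, market: str, on: date) -> bool:
        """True if a proposed use falls inside the documented scope."""
        return (
            platform in self.platforms
            and market in self.markets
            and self.valid_from <= on <= self.valid_until
        )
```

A production tool could run a check like covers before an asset is scheduled, so an out-of-scope platform, an unlicensed market, or an expired window is caught before anything ships.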

It’s not just talent likenesses that need protecting, either. Brands increasingly hold libraries of digital twins, 3D product assets, and ecommerce imagery that are equally vulnerable to unauthorised AI training. If your client has invested in high-quality product photography or CGI, those assets carry significant commercial value and should be treated with the same rigour as talent IP. Watermarking, data labelling, and style mimicry protection - a layer that causes AI tools to read content differently, preventing visual replication without affecting how the image looks to the human eye - all apply here too.

Then there’s the question of scope. Not all AI-generated content carries the same rights, and the breadth of the licence matters enormously. Importantly, it needs to be defined before the creative work begins, not after: can the content run on social and paid? In which territories? For how long? Can it be adapted, remixed, or extended? These are the same questions agencies have always navigated in traditional production, but in AI workflows they’re routinely overlooked because the content feels like it was created from scratch. It wasn’t. Somewhere in the chain, someone’s IP contributed.

Model governance matters too. When you’re working with AI tools, you need to understand what they were trained on. Was the training data licensed? Does the model provider offer indemnification? If you’re using custom-trained models - training on a specific talent’s visual assets, for instance - the data governance requirements become even more specific: where is the data stored, who has access, and what happens to it when the project ends? Model deletion clauses are becoming standard in best-practice contracts, and rightly so.

Finally, there are approvals. Talent and rightsholders increasingly expect to see and sign off on AI-generated content before it goes live, especially for likeness-based work. Building approval workflows into the creative process is what builds trust, and trust is what unlocks access to better talent and better terms.
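
As a sketch of the idea, the snippet below models a sign-off request as a small state machine: content stays pending until the rightsholder records a decision. The class and field names are hypothetical, not part of any particular approval tool.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class ApprovalStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class ApprovalRequest:
    """Illustrative sign-off record routed to a rightsholder before publication."""
    asset_id: str
    rightsholder: str
    status: ApprovalStatus = ApprovalStatus.PENDING
    decided_at: datetime | None = None
    note: str = ""

    def decide(self, approved: bool, note: str = "") -> None:
        """Record the rightsholder's decision; content only ships once approved."""
        self.status = ApprovalStatus.APPROVED if approved else ApprovalStatus.REJECTED
        self.decided_at = datetime.now()
        self.note = note

# The asset stays blocked until the talent (or their representative) signs off.
request = ApprovalRequest(asset_id="campaign_hero_v3", rightsholder="talent_agency_x")
request.decide(approved=True, note="Cleared for social, EU markets only")
assert request.status is ApprovalStatus.APPROVED
```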

The Invisible Layer: Data Labelling, Watermarking, and Provenance

This is the part most agencies haven’t thought about yet. But it’s coming, and fast. Data labelling in the AI context means embedding machine-readable information directly into content that signals how it can and cannot be used. Think of it as a set of instructions baked into the asset itself. A Do Not Train signal tells any AI system that this content must not be ingested into a training dataset. An Authorised for Training label means it has been cleared, with defined parameters. Style mimicry protection means that even if the content is scraped, the AI cannot accurately replicate the visual style, aesthetic, or likeness. Usage permissions travel with the content, specifying territories, platforms, durations, and permitted use cases.
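
Here is a minimal sketch of the idea, assuming a JSON sidecar file standing in for metadata embedded in the asset itself. The training_signal field and the other field names are invented for illustration; they are not taken from a published labelling standard.

```python
import json
from enum import Enum
from pathlib import Path

class TrainingSignal(str, Enum):
    DO_NOT_TRAIN = "do-not-train"
    AUTHORISED_FOR_TRAINING = "authorised-for-training"

def write_usage_label(asset_path: str, signal: TrainingSignal,
                      territories: list[str], platforms: list[str],
                      expires: str) -> Path:
    """Write a machine-readable usage label as a JSON sidecar next to the asset.

    In practice the label would be embedded in the asset's own metadata rather
    than a sidecar; the sidecar simply keeps this sketch self-contained.
    """
    label = {
        "training_signal": signal.value,   # may this content be used for training?
        "territories": territories,        # where the content may be used
        "platforms": platforms,            # where the content may be published
        "expires": expires,                # ISO 8601 date the permission ends
    }
    sidecar = Path(asset_path).with_suffix(".usage.json")
    sidecar.write_text(json.dumps(label, indent=2))
    return sidecar

# Example: mark a hero image as off-limits for model training.
write_usage_label("hero_shot.png", TrainingSignal.DO_NOT_TRAIN,
                  territories=["GB", "DE"], platforms=["social"],
                  expires="2026-06-30")
```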

The C2PA standard, developed by the Coalition for Content Provenance and Authenticity and backed by Adobe, Microsoft, Google, and the BBC, among others, is rapidly becoming the industry framework for this. Content Credentials act like a nutrition label for digital content: cryptographically signed, tamper-evident metadata that records the full provenance chain from creation through every edit and distribution.
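
The snippet below is a deliberately simplified illustration of the tamper-evident idea: hash the asset, record the provenance steps, and sign the result so any later change is detectable. Real Content Credentials follow the C2PA specification and use certificate-based signatures via the C2PA SDKs; the HMAC and ad-hoc JSON layout here are stand-ins to keep the sketch self-contained.

```python
import hashlib, hmac, json

def sign_manifest(asset_bytes: bytes, provenance: list[dict], key: bytes) -> dict:
    """Build a toy provenance manifest and sign it (illustration only)."""
    manifest = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "provenance": provenance,   # ordered list of creation / edit steps
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(asset_bytes: bytes, manifest: dict, key: bytes) -> bool:
    """True only if both the asset and its provenance chain are unmodified."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    if hashlib.sha256(asset_bytes).hexdigest() != claimed["asset_sha256"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

# Sign an asset with one recorded edit step, then verify it round-trips.
asset = b"raw image bytes"
steps = [{"action": "created", "tool": "generative_model_x"},
         {"action": "colour_graded", "tool": "edit_suite_y"}]
manifest = sign_manifest(asset, steps, key=b"demo-signing-key")
assert verify_manifest(asset, manifest, key=b"demo-signing-key")
assert not verify_manifest(b"tampered bytes", manifest, key=b"demo-signing-key")
```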

Alongside metadata, invisible watermarking embeds identifiers directly into the pixels, audio waveform, or video frames themselves. Unlike metadata, watermarks survive resizing, compression, screenshotting, and re-uploading. They persist even when the file is stripped of its metadata, which happens routinely on social platforms. Google’s SynthID and similar technologies are leading this space, and a recent Microsoft report found that layering C2PA signing with imperceptible watermarking delivers the highest-confidence provenance authentication available today. For creative agencies, this means the content you produce can carry its own proof of rights clearance, its own usage permissions, and its own provenance record wherever it travels, creating a huge competitive advantage.
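
As a toy illustration of "identifiers embedded in the pixels themselves", the snippet below writes a few bits into the least significant bit of an image array and reads them back. Production systems such as SynthID use learned, compression-resistant encodings; a plain LSB mark like this would not survive the resizing and re-encoding described above, so treat it purely as a sketch of the concept.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, payload_bits: list[int]) -> np.ndarray:
    """Embed bits into the lowest bit of the first len(payload_bits) pixels.

    Toy example only: unlike SynthID-style watermarks, an LSB mark is
    destroyed by compression or resizing.
    """
    marked = pixels.copy().ravel()
    for i, bit in enumerate(payload_bits):
        marked[i] = (marked[i] & 0xFE) | bit   # overwrite the lowest bit
    return marked.reshape(pixels.shape)

def read_watermark(pixels: np.ndarray, n_bits: int) -> list[int]:
    """Recover the first n_bits embedded by embed_watermark."""
    return [int(v & 1) for v in pixels.ravel()[:n_bits]]

# Round-trip on a random 8-bit greyscale image.
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert read_watermark(embed_watermark(image, bits), len(bits)) == bits
```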

Rights Clearance Doesn’t Have to Be a Bottleneck

One of the biggest anxieties in the agency world is that proper AI rights clearance will be expensive and slow, and that it'll kill the speed advantage that makes AI exciting in the first place. That anxiety is understandable, but it's based on a broken model of how rights clearance has historically worked. When the infrastructure is built properly, rights clearance becomes a layer in the workflow rather than a gate in front of it.

Usage-based pricing, as opposed to six-figure flat fees, means you pay for what you use, scaled to the platforms, markets, and durations you actually need. Pre-cleared IP pools - libraries of talent likenesses, voices, and visual assets already licensed for AI use - give instant access with clear terms. Automated approval workflows route AI-generated content to rightsholders for sign-off at speed, not through weeks of back-and-forth. And transparent reporting gives talent and their representatives a clear view of exactly how their IP was used, building the kind of trust that leads to longer, deeper relationships.
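
To show how usage-based pricing can scale with what a campaign actually needs, here is a hypothetical rate-card calculation. Every rate and multiplier below is invented for illustration and does not reflect any real pricing model.

```python
# Hypothetical rate card: platform base rates and market weights are made up.
BASE_RATE_PER_PLATFORM = {"social": 500.0, "paid_social": 1200.0, "ctv": 2000.0}
MARKET_MULTIPLIER = {"GB": 1.0, "DE": 0.9, "BR": 0.7}

def usage_price(platforms: list[str], markets: list[str], months: int) -> float:
    """Price a licence as (sum of platform rates) x (sum of market weights) x months."""
    platform_cost = sum(BASE_RATE_PER_PLATFORM[p] for p in platforms)
    market_weight = sum(MARKET_MULTIPLIER[m] for m in markets)
    return platform_cost * market_weight * months

# A three-month social + paid-social licence covering GB and DE.
print(usage_price(["social", "paid_social"], ["GB", "DE"], months=3))  # 9690.0
```

The point is not the numbers but the shape: the cost follows the scope, so a small, short licence stays small instead of defaulting to a flat fee sized for the worst case.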

The same frameworks that made stock imagery affordable and accessible can apply to talent IP in AI contexts. The technology exists. The question is whether the rights infrastructure is built to match it.

The Creative Work Is the Point

Creative agencies should be focused on the most ambitious, inventive, award-worthy ideas that AI makes possible. They should be pushing boundaries, experimenting, imagining things that weren’t conceivable two years ago.

They should not be spending their time worrying about whether the likeness they’re using is properly cleared, whether the training data is licensed, whether the content can run in Germany as well as Brazil, or whether the watermarking meets the EU AI Act requirements. That’s not where creative energy belongs.

The best creative work happens when the infrastructure underneath it is invisible. When the rights are already cleared. When the consent is already documented. When the content carries its own provenance and usage permissions. When the pricing is fair and the process is fast. When the agency can say yes to the idea before anyone has to say wait.

That’s the world we’re building at TrueRights - consent registries, usage pricing engines, licensing frameworks, content labelling, watermarking, and compliance monitoring - so the people making the work can focus entirely on making it extraordinary.
