As AI plays a bigger role in creative processes, the question of how training datasets are assembled and whose work is included has become even more important. And this isn’t limited to written or visual works. The same issue applies to human likenesses, such as faces, voices, and identities, which are increasingly scraped or reconstructed without permission. In an age where AI can be trained on a person as easily as on a piece of content, consent becomes even more critical. In this article, we unpick whether models should be able to train on creative works unless creators choose to opt out, or whether they should require clear opt-in consent.
The risks of the opt-out model
In an opt-out framework, the assumption is that creative works can be used for training unless creators actively withdraw consent. On the surface, this model seems efficient and scalable for those building systems: with fewer barriers to collecting large volumes of data, developers can move quickly.
Recently, the Content Overseas Distribution Association (CODA), which represents major Japanese publishers and studios such as Studio Ghibli, requested that OpenAI stop using their copyrighted works for training without prior permission. CODA argued that under Japanese law, copying creative works for machine learning could constitute copyright infringement, and that creators must give permission beforehand, not after the fact. This case reveals a significant flaw in the opt-out model: it assumes permission is granted and puts the responsibility on creators to object. This lack of choice can lead to reputational risks for creators, potential legal issues, and a loss of trust.
The opt-out model also erodes a platform’s relationship with the creative ecosystem. When creators believe their work is used without a real choice, the relationship shifts from collaborative to transactional. For tools built on human originality, such dynamics can turn into liabilities rather than strengths.
The benefits of consent-first frameworks
In contrast, an opt-in approach flips the default: creators actively choose whether their content is used for training. This model places creator control at the forefront, inviting participation rather than assuming inclusion. It aligns closely with emerging ethical, regulatory, and business norms focused on transparent, rights-respecting data usage.
A strong example of this shift is the landmark agreement between Universal Music Group (UMG) and Udio. The deal establishes a platform where artists and songwriters can opt in to allow their works to be used for training, ensuring they receive compensation whenever their recordings or derivative creations are utilised. By structuring training usage around explicit consent, the agreement reframes creators from passive data providers to active stakeholders: their works are no longer treated as raw training assets but become part of a meaningful value exchange. From a business perspective, this enhances authenticity, fosters trust, and lays the groundwork for enduring partnerships.
For companies and platforms, the upside is equally significant. Consent-first data reduces legal exposure, improves dataset quality (consented content is usually better documented), and creates more predictable, brand-safe pipelines for AI training, while boosting brand value in a market that increasingly rewards respect for creator rights.

For creators, the opt-in model offers clearer boundaries: they can decide whether their work is used to train models and, if so, on what terms.
Key questions for creators and talent managers
If you’re a talent or talent manager navigating this new landscape, consider these key questions:
1. How is consent being secured? Does the platform ask for your explicit permission, or do they rely on the assumption that you won’t object?
2. What are the commercial terms? If your work gets used for training, how will you be compensated, and how will usage be tracked?
3. What rights are preserved? Do you maintain control over how your likeness, voice, or work is used in outputs, and can you withdraw or opt out later?
This is where TrueRights comes into play, specifically our platform, TalentRights. We designed the first public database that allows creators to state how their likeness and work should, or should not, be used in AI. TalentRights also lets talent agencies manage rights at scale from one centralised platform, and includes an API for seamless integration with AI platforms.

Too often, AI content is produced without permission, leaving creators vulnerable. TalentRights changes that by putting creators in charge and making their preferences clear, reliable, and enforceable. In an era when AI can quickly replicate faces, voices, and likenesses, consent is crucial: creators deserve control, transparency, and protection in the digital space. If AI is going to enhance creativity, it should do so on terms that honour the people behind it.
