
As artificial intelligence becomes more powerful and sophisticated, it is reshaping the creative landscape by generating text, images, music and multimedia content with unprecedented speed and fluency.

The strikingly human quality of these outputs is no longer simply a technological curiosity. It is raising complex questions about copyright ownership and turning what was once a niche legal discussion into a practical operational risk for organisations.

When creativity is produced through algorithms rather than human hands, the most fundamental copyright question becomes harder to answer: who is the author?

And if authorship is uncertain, what does that mean for ownership, protection and commercial value?

Asset Bank’s Product Manager, Paul Mulvee, considers these questions in a recent DAM News article: Whose rights is it anyway? Managing AI-generated content and copyright in DAM.

As Paul rightly says, for digital asset management (DAM) professionals, these questions are far from theoretical. They sit at the centre of compliance, risk management and the long-term integrity of digital content libraries.

If an organisation cannot confidently answer who owns an asset, how long it is protected for, and what rights apply to it, then every reuse of that asset carries risk.

In an era of rapid content generation, that uncertainty matters.

What UK copyright law says about AI-generated works

Under the UK’s Copyright, Designs and Patents Act 1988 (CDPA), the author of a computer-generated work is defined as the person who made the necessary arrangements for its creation.

At first glance, this seems straightforward. But in modern generative AI workflows, the reality is more complicated.

Who actually made the “necessary arrangements”?

Possible candidates include:

  • The developer who built and trained the model
  • The operator running the AI system
  • The user writing prompts and refining outputs
  • The employer directing the creative work

In practice, ownership may depend on the specific circumstances of creation.

There is also another unresolved issue: the boundary between computer-generated and computer-assisted works.

If an AI system operates with minimal human involvement, the result may be considered computer-generated.

But when a human provides detailed prompts, iterates outputs and directs the result, that contribution may qualify as authorship.

The difficulty is that the law does not clearly define where this line sits.

UK copyright law and AI-generated works

  • UK law recognises computer-generated works under the Copyright, Designs and Patents Act 1988.
  • The author is defined as “the person who made the arrangements necessary for the creation of the work”.
  • Copyright duration differs from human-created works.
  • AI-generated works typically receive 50 years of protection from creation, rather than life of the author plus 70 years.
  • Moral rights generally do not apply to computer-generated works.

Why AI copyright differs from traditional copyright

Even where protection exists, AI-generated works are treated differently from traditional creative works.

For example, copyright in an AI-generated work typically lasts 50 years from creation, rather than the creator’s lifetime plus 70 years.

While this may seem like a technical detail, it has real implications.

For organisations building large asset libraries, copyright duration affects:

  • long-term licensing value
  • brand asset planning
  • reuse rights across campaigns and regions

Another key difference concerns moral rights.

Moral rights include the right to be credited as the author and the right to object to derogatory treatment of a work. They protect reputation and integrity rather than economic value.

Unlike copyright, which can be transferred or sold, moral rights typically remain with the human creator – and sometimes their heirs.

However, these rights do not currently apply to AI-generated works.

For DAM teams, that distinction changes how attribution, editing rights and downstream asset use should be managed.

Training data disputes and the copyright debate

Authorship is only one part of the AI copyright debate.

Another major area of conflict concerns training data.

Generative AI models are trained on vast datasets, often collected from across the internet. These datasets may contain millions of copyrighted works including books, images, articles, music and artwork.

As AI systems become commercially valuable, creators have increasingly challenged these practices.

High-profile disputes include Getty Images’ claim against Stability AI and The New York Times’ action against OpenAI and Microsoft.

Artists, writers and musicians have also publicly raised concerns that AI companies are building commercial systems using unlicensed creative work.

Developers argue that training AI resembles human learning. Models absorb patterns, styles and structures rather than storing exact copies of works.

However, when AI outputs appear strikingly similar to existing copyrighted material, the legal defence becomes harder to sustain.

If courts find that AI models have been trained in a way that is deemed infringing, developers may be required to license huge amounts of content.

This debate is not simply about law. It concerns the economic foundations of creative industries.

Key legal disputes shaping AI copyright

  • Getty Images sued Stability AI over alleged use of copyrighted photographs in training data.
  • The New York Times has brought legal action against OpenAI and Microsoft over alleged use of news articles in AI training.
  • Creators argue AI companies used copyrighted material without permission.
  • Developers argue AI training extracts patterns rather than storing original works.

Legislative responses and regulatory direction

Governments are beginning to respond to these tensions.

In the United Kingdom, policymakers have acknowledged that uncertainty around copyright and AI may slow growth in both technology and creative sectors.

Consultations have explored introducing a text and data mining exception, allowing copyrighted works to be used for AI training unless rights holders opt out.

In the United States, the Generative AI Copyright Disclosure Act has been proposed to require companies to disclose datasets used to train AI models.

The European Union has taken a different approach through the EU AI Act, which introduces transparency obligations for AI systems.

EU AI Act transparency requirements

  • Article 50 requires AI-generated content to be clearly labelled and detectable as artificial.
  • Applies to text, images, video and audio created by generative AI systems.
  • Systems generating deepfakes, or text published to inform the public on matters of public interest, must disclose AI use.
  • Transparency obligations are expected to apply from 2 August 2026.
  • A Code of Practice on transparency is expected in June 2026 to guide implementation.

Emerging standards for AI provenance

Alongside legal developments, technical standards are emerging to improve transparency around digital content.

One important initiative is C2PA (Coalition for Content Provenance and Authenticity).

C2PA allows digital files to carry cryptographically signed metadata recording:

  • how the asset was created
  • which tools were used
  • whether AI systems were involved
  • who edited or modified the content

This metadata is attached using secure digital signatures that allow the authenticity and integrity of a file to be verified.

Major technology companies, media organisations and hardware manufacturers are adopting the standard to address concerns around misinformation, synthetic media and digital trust.

However, supporting C2PA creates technical challenges for DAM systems.

Because the cryptographic signature depends on the file’s hash, any modification – such as cropping, resizing or compression – breaks the signature.
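
As a rough illustration of why any modification breaks verification, the sketch below binds a signature to a file’s SHA-256 digest. It is a simplification: real C2PA manifests are signed with X.509 certificate chains (COSE signatures), not a shared HMAC key, so the key here is purely a stand-in.

```python
import hashlib
import hmac

# Stand-in signing key -- real C2PA uses X.509 certificates, not HMAC.
KEY = b"example-signing-key"

def sign(file_bytes: bytes) -> str:
    """Sign the SHA-256 digest of the file's exact bytes."""
    digest = hashlib.sha256(file_bytes).digest()
    return hmac.new(KEY, digest, hashlib.sha256).hexdigest()

def verify(file_bytes: bytes, signature: str) -> bool:
    """Re-derive the signature and compare in constant time."""
    return hmac.compare_digest(sign(file_bytes), signature)

original = b"\x89PNG...image data..."
signature = sign(original)

print(verify(original, signature))             # True: file untouched
edited = original.replace(b"image", b"IMAGE")  # e.g. a crop or re-compression
print(verify(edited, signature))               # False: any byte change breaks it
```

Because the signature covers the raw bytes, even an invisible change such as metadata stripping or lossless re-encoding invalidates it, which is exactly why DAM derivatives need freshly signed manifests.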

To support provenance correctly, systems must:

  • preserve original files
  • validate manifests on ingest
  • generate new signed manifests for derivatives
  • maintain a verifiable provenance chain

This shifts DAM systems from passive storage platforms into active participants in the digital authenticity ecosystem.

What this means for DAM governance

For digital asset management leaders, the implications are clear.

AI-generated content introduces new layers of uncertainty:

  • ownership may be disputed
  • originality thresholds are evolving
  • training data disputes continue
  • regulations are emerging but incomplete

At the same time, technical standards such as C2PA are introducing new expectations around provenance and authenticity.

This means governance becomes critical.

Organisations must record and manage key information about their assets, including:

  • who created the asset and when
  • whether AI tools were used
  • what level of human input shaped the output
  • what rights position the organisation believes applies
  • how the asset may be reused, edited or licensed

As legal and technical frameworks evolve, organisations that can demonstrate clear decision-making and provenance will be better positioned to respond.

Conclusion

Generative AI is transforming how content is created.

But it is also challenging long-standing assumptions about copyright, ownership and originality.

UK law offers some guidance through the Copyright, Designs and Patents Act, yet many questions remain unresolved – particularly around human contribution and authorship.

Meanwhile, legal disputes and new regulations, such as the EU AI Act, show that the global copyright landscape is actively evolving.

For organisations managing large content libraries, waiting for perfect clarity is not a viable strategy.

What matters now is building governance processes that record provenance, document rights decisions and maintain transparency.

Because in the age of generative AI, the most valuable capability is not simply producing more content.

It’s staying compliant while doing it – and being able to prove it.

Chat with the team to learn more about managing copyright with a digital asset management platform.

Book a demo of Asset Bank


FAQs

Who owns AI-generated content in the UK?

Under the Copyright, Designs and Patents Act 1988, the author of a computer-generated work is defined as the person who made the arrangements necessary for its creation. In practice, this may be the developer, user, operator or employer depending on the circumstances.

How long does copyright last for AI-generated works?

In the UK, copyright in computer-generated works typically lasts 50 years from the date of creation, whereas human-created works are protected for the creator’s lifetime plus 70 years.

Do AI-generated works have moral rights?

No. Moral rights, including the right to be credited as the author and to object to derogatory treatment, generally apply only to human creators.

Why are training datasets controversial in AI copyright law?

Many generative AI models are trained on large datasets that may include copyrighted works. Creators argue their work has been used without permission, while developers argue AI training extracts patterns rather than reproducing original content.

What does the EU AI Act require for AI-generated content?

The EU AI Act introduces transparency rules requiring AI-generated content to be labelled and detectable. These obligations are expected to apply from 2 August 2026.

What is C2PA and why does it matter for DAM?

C2PA is a technical standard for recording the provenance of digital content. It attaches cryptographically signed metadata showing how a file was created and edited, helping organisations verify authenticity and track AI involvement.


