Facial recognition in DAM software

Facial recognition often comes up as a “must-have” feature in digital asset management.

The idea of automatically identifying people across images and streamlining tagging is highly appealing – especially for busy marketing teams managing large volumes of content.

But when you look beyond the surface, the reality is far more complex.

In many cases, facial recognition introduces significant legal risk.

Here’s where things stand in 2026, why Asset Bank does not offer facial recognition today, and what you need to understand if you are considering it.

Facial recognition is biometric processing – not just tagging

Facial recognition is not simply an advanced search tool; it involves analysing and matching a person’s facial features to identify them across images.

Under the European GDPR, the facial data processed by recognition systems is classed as biometric data – a special category of personal data that is subject to strict regulation.

That distinction is critical.

Processing this type of data is prohibited by default unless a very specific legal basis applies.

In most DAM use cases, that legal basis would need to be explicit written consent from every individual whose face is processed in the DAM. This includes the faces of people who are never assigned a name, as their faces are still biometrically processed by the matching algorithm.

That’s a much higher bar than many organisations expect.


But what if we operate outside of Europe?

Although the GDPR sets a high bar for biometric processing (including facial recognition) in Europe, many non‑EU jurisdictions also regulate biometrics as “sensitive” (or via biometrics‑specific laws), typically demanding heightened notice/consent, strict purpose/retention limits, and strong security.

For organisations using digital asset management systems, this means biometric compliance must be assessed for every country involved in capture, storage, access, and processing, not just the EU.

In practice, DAM deployments should treat facial recognition as a high‑risk capability and only enable it with clear governance (lawful basis/consent where required, DPIA‑style risk assessment, minimisation and short retention, and vendor/cross‑border transfer due diligence).

Why Asset Bank made the decision not to offer facial recognition

We did explore facial recognition. In fact, we built it! The use case felt intuitive, and we had strong interest from customers.

However, after working with UK legal advisors, insurers, and specialist counsel in the United States, we were advised not to proceed with the feature we’d built.

The issue was not the technology itself – it was the legal reality of how it would be used.

Most DAM environments include images with multiple people, often collected over many years, where consent is incomplete, unclear, or entirely absent.

In that context, it becomes extremely difficult to ensure lawful processing.

At that point, the question was not whether the feature would be useful – it was whether it could be used safely and compliantly.

Offering facial recognition can open clients up to potential risk if they don’t check the small print

Some DAM providers do offer facial recognition, which can make it feel like a standard capability.

But when you look more closely, the model is often the same: the platform provides the feature, and the customer takes on the compliance burden.

Typically, this is written into terms and conditions, requiring clients to ensure they’re using the feature in line with GDPR requirements for special category personal data (GDPR Article 9).

This usually includes:

  • Obtaining valid consent
  • Managing biometric data lawfully
  • Ensuring appropriate safeguards are in place

In practice, that means the software enables biometric identification, but you carry the legal risk.

Liability often sits with the customer, not the software provider.

As a provider positioned as the DAM for compliance, we don’t think that’s right.

We don’t feel it is responsible to introduce a high-risk feature and expect customers to navigate complex biometric regulations on their own.

Most organisations:

  • Do not have full visibility of consent across all assets
  • Cannot realistically validate consent at scale
  • Must rely on their data protection officer (DPO) or legal teams

And in many cases, those teams will not approve the use.

Rather than shift that burden onto our customers, we choose not to offer facial recognition until it can be delivered in a genuinely compliant way.


Why consent is the real barrier

For facial recognition to be lawful under GDPR, every identifiable person must have given explicit, informed consent specifically for biometric processing.

That consent must be:

  • Clearly documented
  • Traceable
  • Linked to the relevant assets
  • Easy to withdraw

Key point: consent must apply to every individual in every image processed

In theory, this is manageable. In practice, it quickly breaks down. Most DAMs contain:

  • Large volumes of historical content
  • Images with multiple individuals
  • Assets where consent is missing or outdated

Even if you fix the process going forward, legacy content creates ongoing risk. And without complete coverage, the entire system becomes legally vulnerable.

This is why “legitimate interest” is not a viable fallback – regulators consistently reject it for biometric identification due to its intrusive nature.

The role of your data protection officer

Any organisation considering facial recognition will need input from a data protection officer (DPO) or equivalent legal authority.

In most cases, this will trigger a Data Protection Impact Assessment (DPIA), given the high-risk nature of biometric processing.

What we see in practice is that DPOs often raise significant concerns. These typically centre around:

  • The inability to guarantee consent across all individuals
  • The challenges of fulfilling data subject rights
  • The broader risk of non-compliance at scale

For many organisations, this results in the feature being restricted or not approved for use at all.


Additional pressure from the EU AI Act

The EU AI Act adds another layer of scrutiny.

Facial recognition systems are classified as high-risk AI, which brings obligations around:

  • Risk management
  • Documentation and transparency
  • Human oversight

While not outright banned in this context, biometric identification is clearly an area of heightened scrutiny – particularly where systems enable identification across large datasets.

This reinforces the direction of travel: tighter control, not wider adoption.

What you should be asking before adopting facial recognition

We learned that if facial recognition is being considered or presented as part of a DAM solution, it is important to go beyond the feature itself and examine whether it can be used safely.

You should be clear on:

  • How consent is captured, stored, and linked to images
  • What safeguards prevent processing without proper authorisation
  • Where legal liability sits between provider and customer
  • How the system handles real-world scenarios like legacy content or withdrawn consent

And crucially, has your data protection officer approved this use case, and has a DPIA been completed?

A compliant future use case

Facial recognition is not permanently off the table.

There are scenarios where it could be used responsibly, but they are far more controlled than typical DAM workflows.

For example, in a closed photoshoot environment where every participant has signed a consent form that explicitly covers biometric processing, facial recognition could be used to link images to those consent records.

In this case, the technology supports compliance by making it easier to track where individuals appear and manage permissions over time.

However, even here, strict guardrails are required. The system must not process images beyond that defined scope, and it cannot be used to scan unrelated or historical content.
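Those guardrails can be expressed as a pre-processing gate. The sketch below is hypothetical (illustrative Python, with invented identifiers – not an Asset Bank feature): recognition runs only when an image belongs to the defined shoot and every pictured person has an unrevoked consent record.

```python
# Hypothetical guardrail: refuse biometric processing unless the image
# is inside the agreed scope AND every pictured person has consent on
# file. All field names and IDs are invented for illustration.
def may_run_recognition(image: dict,
                        shoot_id: str,
                        consents: dict[str, bool]) -> bool:
    # Scope check: never scan unrelated or historical content.
    if image.get("shoot_id") != shoot_id:
        return False
    # Coverage check: one missing consent blocks the whole image.
    return all(consents.get(pid, False) for pid in image["people"])

image = {"shoot_id": "shoot-2026-03", "people": ["p1", "p2"]}
print(may_run_recognition(image, "shoot-2026-03",
                          {"p1": True, "p2": True}))   # True
print(may_run_recognition(image, "shoot-2026-03",
                          {"p1": True}))               # False
```

Note the fail-closed design: a single unconsented face, or any content outside the shoot, blocks processing entirely rather than degrading gracefully.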

The bottom line

Facial recognition in DAM is not just a feature decision – it is a compliance decision.

In most real-world scenarios, it is extremely difficult to implement in a way that fully satisfies GDPR requirements. The addition of the EU AI Act only increases the level of scrutiny.

Some providers choose to offer the feature and place the responsibility on their customers.

Asset Bank takes a different view.

As the DAM built for compliance, we believe it is our responsibility to protect our customers from unnecessary risk – not pass it on to them.

Stay compliant with Asset Bank

Rather than relying on high-risk automation, we focus on giving you the tools to manage usage rights, permissions, and consent properly.

We want to help you stay on the right side of image licences, get consent right every time, and keep your content in the right hands.

Want to learn more about Asset Bank? Speak to a member of the team.

Book a demo of Asset Bank
