AENA’s €10M Fine: A Reminder that DPIAs Must Be Real, Not Reactive

The recent €10.04M fine imposed on AENA, Spain's airport operator, by the Spanish Data Protection Authority (AEPD) has drawn attention because it involves facial recognition in major airports. But the true importance of the case lies not in the technology itself, but in how organisations are conducting — or failing to conduct — Data Protection Impact Assessments (DPIAs) for high-risk processing.

AENA deployed a facial-recognition system without a valid prior DPIA capable of demonstrating necessity, proportionality and adequate risk mitigation. Even though passengers were asked for consent, the AEPD made it clear: consent is not enough to legitimise a highly intrusive technology when less invasive alternatives exist or when necessity cannot be justified.


The limits of consent and the central role of necessity

Biometric data used to uniquely identify a person is a special category of personal data under Article 9 GDPR, and its use requires a strong legal and technical justification. In this case, the regulator concluded that AENA failed to demonstrate why facial recognition was indispensable for the intended operational purpose.

If the same goal can reasonably be achieved through less intrusive means, the processing will fail the GDPR’s necessity test — regardless of whether individuals consent.

This is one of the clearest lessons of the case: intrusion must be proportional to the need, not to organisational convenience.


When risk analysis is too generic, it stops being useful

Another key aspect highlighted by the AEPD is the quality of AENA’s risk analysis. A recurring issue in many organisations is DPIAs that remain too high-level or generic, especially when advanced technologies are used.

For high-risk processing, a DPIA must identify:

  • the specific risks introduced by the system (e.g., biometric identification, continuous monitoring),
  • the inherent risk before controls,
  • and the residual risk after applying safeguards.

If the residual risk remains high, that should trigger a clear conclusion:

the system must be adjusted, reinforced or reconsidered before deployment.

A DPIA that does not lead to decisions, changes or mitigation measures is not fulfilling its purpose.
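
To make the distinction between inherent and residual risk concrete, here is a minimal sketch of how a DPIA risk register and the deployment rule above could be modelled in code. All names (Level, RiskEntry, deployment_decision) and the sample entry are illustrative assumptions, not part of any prescribed DPIA methodology:

    from dataclasses import dataclass, field
    from enum import IntEnum

    class Level(IntEnum):
        """Illustrative three-step scale; real DPIAs often use likelihood x impact matrices."""
        LOW = 1
        MEDIUM = 2
        HIGH = 3

    @dataclass
    class RiskEntry:
        """One specific risk in the DPIA register (e.g. biometric identification)."""
        description: str
        inherent: Level                       # risk level before any controls
        safeguards: list[str] = field(default_factory=list)
        residual: Level = Level.HIGH          # risk level after applying safeguards

    def deployment_decision(register: list[RiskEntry]) -> str:
        """If any residual risk stays high, the system must be adjusted,
        reinforced or reconsidered before deployment."""
        if any(entry.residual == Level.HIGH for entry in register):
            return "adjust, reinforce or reconsider before deployment"
        return "document safeguards and proceed, with scheduled reviews"

    # Hypothetical entry loosely mirroring the AENA scenario
    register = [
        RiskEntry(
            description="Facial recognition of passengers at boarding",
            inherent=Level.HIGH,
            safeguards=["short retention period", "opt-in enrolment"],
            residual=Level.HIGH,  # safeguards did not reduce the risk enough
        ),
    ]
    print(deployment_decision(register))

The point of the sketch is the decision step: a register that records inherent and residual risk but never feeds a go/no-go decision is exactly the kind of generic analysis the AEPD criticised.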


Accountability: evidence, not assumptions

The accountability principle under GDPR requires organisations to demonstrate compliance — not just claim it.

In the AENA case, the documentation did not sufficiently justify:

  • why the technology was needed,
  • how risks were assessed,
  • what alternatives were evaluated,
  • and how proportionality was determined.

A DPIA must be precise, complete and carried out before the processing begins.

Timestamped versions, methodological explanations and detailed lifecycle descriptions are essential elements when authorities evaluate whether the assessment was properly conducted.
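
As an illustration of what such an evidence trail might look like, each DPIA revision could be recorded with a timestamp, the methodology used and lifecycle notes. The structure and the sample entry below are hypothetical, not a format required by any authority:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class DPIAVersion:
        """One timestamped DPIA revision, retained as accountability evidence."""
        version: str
        created_at: datetime   # must predate the start of processing
        methodology: str       # how risks were identified, scored and compared
        lifecycle_notes: str   # data flows from capture through deletion

    # Hypothetical revision history an authority could be shown on request
    history = [
        DPIAVersion(
            version="1.0",
            created_at=datetime(2024, 3, 1, tzinfo=timezone.utc),
            methodology="Likelihood x impact matrix; two alternatives compared",
            lifecycle_notes="Capture -> template matching -> deletion within 24 h",
        ),
    ]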


Final thought: necessity is the threshold that cannot be bypassed

When adopting intrusive technologies like biometrics, organisations must begin with a fundamental question: Is this truly necessary?

If valid alternatives exist, or if the technology introduces risks that cannot be sufficiently reduced, the processing will not meet GDPR standards — regardless of consent, technical sophistication or operational benefits.

The AENA case shows that DPIAs are not administrative paperwork; they are a core governance tool. And they only work when they provide clear evidence, not generalities.

For companies exploring biometric systems, AI solutions or other high-risk technologies, this decision is a reminder that privacy by design begins with challenging necessity, not merely documenting it.