There is a telling irony at the centre of modern data protection. The organisations spending the most on privacy technology are often the ones furthest from genuine compliance. They invest in the latest encryption stacks and deploy automated consent management platforms… and yet they remain systematically non-compliant with one of GDPR’s most foundational principles: Article 25, data protection by design and by default.
A recent analysis from the IAPP puts this tension into focus. As of 2026, the framework built around four assessment factors (the state of the art, the cost of implementation, the nature of processing, and the risk to individuals) has never been more important, and never more widely misunderstood. The emergence of AI regulation across virtually every major jurisdiction has raised the stakes dramatically. And the uncomfortable truth is that many organisations are not ready.
The technology limitation
We’ll start with the most common mistake: the belief that buying better technology is the same as practising better privacy. It isn’t. The IAPP analysis is especially blunt on this point. State of the art is a moving target that demands not just technical investment, but the organisational capacity to continuously reassess whether current measures still reflect current norms. When a company deploys a cutting-edge data minimisation tool and then fails to revisit whether it is still appropriate eighteen months later, it has not achieved state-of-the-art compliance. It has achieved a snapshot. And a snapshot, by definition, ages.
This matters more than ever because AI systems have compressed the pace at which the technological landscape shifts. Generative AI and large-scale inference systems are new processing contexts that Article 25 was not specifically written to address. Regulators are catching up. The EU AI Act’s full provisions are landing this year. AI-specific risk assessment obligations are multiplying. Controllers who have not embedded design-stage thinking into their AI governance workflows will find themselves scrambling to retrofit compliance onto systems already in production. That is precisely what Article 25 was designed to prevent.
Organisational measures cannot be ignored
If the over-reliance on technology is the first failure, the systematic undervaluing of organisational measures is the second. The IAPP makes clear that data protection by design is not exclusively a technical discipline. It is also a management discipline, one that encompasses access controls, staff training, default functionality restrictions, and ongoing process review.
This is an area where the compliance gap is perhaps most embarrassing, because organisational measures are often the cheapest interventions available. Clear access rules and privacy-respectful default settings can substantially reduce risk at a fraction of the cost of enterprise-grade privacy tech. Yet because these measures are less visible, they tend to be deprioritised.
The consequence is an organisational environment that undermines even the best technical architecture. A sophisticated pseudonymisation system is only as strong as the access controls that govern who can re-identify the data. A well-designed consent mechanism is only as meaningful as the staff who implement it and the defaults that operationalise it. As proceedings in DSG Retail Ltd v Information Commissioner illustrate, data protection compliance ultimately depends on the effectiveness of both technical and organisational measures in practice.
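To make that dependency concrete, here is a minimal sketch in Python of a pseudonymisation store whose re-identification path is gated by an organisational access rule. The class, role names, and allow-list are hypothetical illustrations, not a reference implementation; the point is simply that the technical measure (the token mapping) is only as protective as the role check wrapped around it.

```python
import secrets

class PseudonymStore:
    # Hypothetical organisational rule: only these roles may reverse pseudonymisation.
    REIDENTIFY_ROLES = {"dpo", "incident-response"}

    def __init__(self):
        # Pseudonym -> original identifier. In practice this mapping would be
        # stored and secured separately from the pseudonymised dataset.
        self._mapping = {}

    def pseudonymise(self, identifier: str) -> str:
        token = secrets.token_hex(8)
        self._mapping[token] = identifier
        return token

    def re_identify(self, token: str, requester_role: str) -> str:
        # The technical measure is undermined unless this organisational
        # measure (the role check) is actually enforced.
        if requester_role not in self.REIDENTIFY_ROLES:
            raise PermissionError(f"role '{requester_role}' may not re-identify data")
        return self._mapping[token]

store = PseudonymStore()
token = store.pseudonymise("jane.doe@example.com")
print(store.re_identify(token, requester_role="dpo"))      # permitted
# store.re_identify(token, requester_role="marketing")     # raises PermissionError
```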
By design and by default: not the same thing
One of the more consequential errors identified in the IAPP’s analysis is the conflation of “by design” and “by default” into a single undifferentiated requirement. They are not the same. They are complementary, but distinct.
Design embeds capability. Default operationalises it. A system that is designed to enable data minimisation, but whose default settings collect the maximum permissible data, has satisfied the letter of the design obligation while completely undermining its spirit. The data subject is left to take affirmative action to protect themselves, which is the opposite of what the regulation intended.
This distinction becomes particularly sharp in the AI context. If the default behaviour of an AI system is to retain everything until told otherwise, the organisation has not implemented data protection by default. It has implemented data hoarding with a privacy policy attached. Regulators in 2026 are beginning to look precisely here: at what systems do absent instruction, not merely what they can do under the right configuration.
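As a deliberately simplified illustration, here is a sketch using a hypothetical ProcessingSettings configuration object. The design obligation is about the fact that these controls exist at all; the default obligation is only met if they ship in their most protective position, so the system behaves protectively when nobody touches the configuration.

```python
from dataclasses import dataclass

@dataclass
class ProcessingSettings:
    # "By design": these controls exist. "By default": they ship in the
    # most protective position, so the data subject need not act to be protected.
    collect_optional_fields: bool = False      # minimal collection unless explicitly enabled
    retention_days: int = 30                   # bounded retention, not "keep everything"
    use_for_model_training: bool = False       # secondary use is opt-in, not opt-out

settings = ProcessingSettings()                # what the system does absent instruction
print(settings)
# ProcessingSettings(collect_optional_fields=False, retention_days=30, use_for_model_training=False)
```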
The nuance of the cost question
A third area of consistent misunderstanding involves how to account for cost of implementation. The IAPP framework is explicit: cost is not an excuse to avoid effective measures. It is a prompt to find effective measures that are proportionate. These are very different things.
Controllers are required to actively search for and compare available alternatives. The market for privacy-enhancing technologies has matured significantly: these are deployable tools, and the global PETs market is projected to exceed twelve billion dollars by the end of the decade. The argument that privacy-respecting architecture is prohibitively expensive is becoming harder to sustain with each passing year.
But cost accounting in this context should also factor in the cost of getting it wrong. Enforcement is no longer symbolic. California's Privacy Protection Agency has moved from advisory letters to fines totalling hundreds of thousands of dollars. GDPR enforcement in Europe continues to accelerate. Non-compliance with Article 25 is increasingly visible in regulatory investigations.
What good looks like in 2026
So, what does genuine data protection by design and by default look like this year? In its simplest form, it looks like privacy review integrated into sprint planning. It looks like AI systems with data retention limits set by default. It looks like DPIA processes that are proportionate to risk and revisited when processing contexts change, not filed once and forgotten.
It looks like an organisation that can demonstrate, if required, that it considered privacy implications at each stage of a system’s design lifecycle, and that it made deliberate choices, informed by current standards, about the measures it implemented and why.
Most importantly, it looks like an organisation that understands the difference between compliance posture and compliance theatre. The former is adaptive, documented, and embedded in culture. The latter is a folder full of shelf-ware.
