Crux Digits Blog

Privacy & AI: What’s Allowed in the Netherlands?

Your Data Has Rights. Do You Know Them?

Imagine this: you open a new app, tap “I agree” without reading a word, and suddenly a company has your location data, browsing habits, and maybe even your health information. Sound familiar? You’re not alone. Millions of people across the Netherlands do this every single day.

Now add artificial intelligence into the mix. AI systems that analyse your behaviour, predict your preferences, make decisions about your loan application or job interview, and even identify your face in a crowd. It’s not science fiction.

But here’s the exciting part: the Netherlands is part of a legal ecosystem that actually gives you real, enforceable rights over how your data is used. Between the General Data Protection Regulation (GDPR), the Dutch implementation law (UAVG – Uitvoeringswet AVG), and the EU AI Act, there is a powerful framework in place that governs exactly what AI can and cannot do with your personal information.

Whether you’re a business leader deploying an AI tool, a professional managing customer data, or simply a curious citizen who wants to understand your rights, this is for you. Let’s break it all down in plain, energising language. No legal jargon. No confusion. Just clarity.

“In a world driven by data, understanding your privacy rights is not just smart. It’s essential.”

The Legal Foundations: What Governs AI and Privacy in the Netherlands?

The Netherlands operates under a multi-layered privacy and AI governance framework. Here’s a quick breakdown of the key pillars:

The GDPR (General Data Protection Regulation) is the cornerstone EU regulation that applies across all member states, including the Netherlands. It sets out the rules for collecting, storing, processing, and sharing personal data. The GDPR came into effect across the EU in May 2018.

The UAVG (Uitvoeringswet AVG) is the Dutch national implementation law that complements the GDPR with country-specific provisions. It gives the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, or AP) its mandate to enforce privacy rules nationally.

The EU AI Act is the world’s first comprehensive legal framework specifically regulating artificial intelligence. It categorises AI systems by risk level and sets obligations for developers, deployers, and users. The regulation was formally adopted by the Council of the EU in May 2024 and is being phased in progressively.

Together, these frameworks create a system that is both protective of individuals and practical for businesses, provided they play by the rules.

What Is Considered “Personal Data” in the Context of AI?

Before we can talk about what AI is allowed to do, we need to understand what it’s working with. Under the GDPR, personal data is any information that relates to an identified or identifiable natural person.

In the context of AI, this can include a surprisingly wide range of data types. Your name, email address, and phone number are obvious examples. But personal data also includes your IP address, location data, browsing and purchasing behaviour, biometric data such as facial recognition or fingerprints, health and medical information, financial data, and inferred data such as AI-generated profiles about your preferences or likely future behaviour.

The last category is particularly important. Even if an AI system creates a “profile” of you from aggregated or anonymised inputs, if that profile can identify you, it is still personal data under the GDPR. This is a critical point that many businesses overlook.

The Six Lawful Bases: What Gives AI the Green Light?

Here is one of the most important concepts in privacy law for AI practitioners. Under the GDPR, any processing of personal data must have a lawful basis. There are six of them, and at least one must apply for AI-powered processing to be legal.

• Consent: The individual has freely, specifically, and unambiguously agreed to the processing.

• Contract: Processing is necessary to fulfil or prepare a contract with the individual.

• Legal obligation: Processing is required to comply with a legal duty.

• Vital interests: Processing is necessary to protect someone’s life.

• Public task: Processing is carried out as part of an official public function.

• Legitimate interests: Processing is necessary for the legitimate interests of the controller or a third party, unless the individual’s rights override those interests.

For most commercial AI systems in the Netherlands, the relevant bases are consent and legitimate interests. And here’s where things get tricky. Consent under the GDPR must be freely given, which means it cannot be made a condition of using a service. It must be informed, meaning users genuinely understand what they are consenting to. It must also be unambiguous, requiring a clear affirmative action such as ticking a box.

Legitimate interests require a careful balancing test. The company’s interest in using AI must genuinely outweigh the privacy impact on individuals. Vague commercial interests rarely pass this test when sensitive data is involved.
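As a rough illustration of the consent conditions described above, here is a minimal sketch (the names and structure are our own, not drawn from any statute or library): valid GDPR consent requires every condition to hold at once, and it must remain withdrawable.

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    freely_given: bool       # not made a condition of using the service
    informed: bool           # user genuinely understands the processing
    unambiguous: bool        # clear affirmative action, e.g. a ticked box
    withdrawn: bool = False  # consent can be withdrawn at any time

def consent_is_valid(c: ConsentRecord) -> bool:
    """Consent is a lawful basis only while all conditions hold."""
    return c.freely_given and c.informed and c.unambiguous and not c.withdrawn
```

The point of modelling it this way is that a single missing condition, or a later withdrawal, invalidates consent as a lawful basis entirely; there is no partial credit.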

High-Risk AI: Where the Netherlands Draws a Hard Line

Not all AI is created equal. The EU AI Act introduces a risk-based classification system, and the Netherlands, as an EU member state, fully applies it. Understanding this framework is essential for anyone building or using AI systems in the country.

Unacceptable risk AI is outright banned in the EU. This includes AI that manipulates people using subliminal techniques, AI that exploits vulnerabilities of specific groups, most uses of real-time remote biometric identification in public spaces by law enforcement, and social scoring systems by public authorities.

High-risk AI is permitted but heavily regulated. Examples include AI used in hiring and employment decisions, credit scoring and financial assessments, educational admissions and grading, critical infrastructure, medical devices, and AI that assists in law enforcement or migration decisions. Companies deploying high-risk AI must conduct conformity assessments, maintain detailed technical documentation, implement human oversight, and register in an EU database.

Limited-risk AI such as chatbots and AI-generated content must meet transparency obligations. Users must be told they are interacting with an AI system.

Minimal-risk AI such as spam filters and AI-powered games face no specific obligations under the AI Act.
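The four tiers above can be summarised in a small lookup table. This is purely an illustrative sketch of the examples mentioned in this section, not an official taxonomy; classifying a real system requires a legal assessment against the AI Act’s annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "permitted, heavily regulated"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Illustrative mapping of the examples from the text (not an official list)
EXAMPLE_SYSTEMS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "ai-assisted hiring decisions": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations_for(system: str) -> str:
    """Look up the rough obligation level for a known example system."""
    return EXAMPLE_SYSTEMS[system].value
```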

Automated Decision-Making: The Right to a Human in the Loop

One of the most powerful protections in the GDPR directly challenges a core capability of AI: the ability to make decisions about people automatically. Article 22 of the GDPR gives individuals the right not to be subject to a decision based solely on automated processing, if that decision produces legal effects or similarly significantly affects them.

In practice, this means that if an AI system is making decisions about whether you get a loan, whether you’re shortlisted for a job, or whether your insurance claim is accepted, a human must be able to review and potentially override that decision.

The right to explanation is closely linked here. Individuals have the right to receive a meaningful explanation of the logic involved in automated decisions that affect them. Saying “the algorithm decided” is not sufficient. The explanation must be clear and accessible.

The Dutch Autoriteit Persoonsgegevens actively enforces this. In its guidance, the AP has made clear that profiling and automated decision-making require a lawful basis, transparency about the logic used, and the technical capability for human review.
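The Article 22 requirement can be sketched as a gate in a decision pipeline: a decision with legal or similarly significant effects must not take effect without a human reviewer and a meaningful explanation. The names below are our own illustration, assuming a hypothetical `Decision` record.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str                    # e.g. "loan_rejected"
    explanation: str                # meaningful logic, not "the algorithm decided"
    reviewed_by_human: bool = False

def finalise(decision: Decision, significant_effect: bool) -> Decision:
    """Block solely automated decisions with significant effects (Art. 22 sketch)."""
    if significant_effect and not decision.reviewed_by_human:
        raise ValueError("human review required before this decision takes effect")
    if not decision.explanation:
        raise ValueError("a meaningful explanation must accompany the decision")
    return decision
```

In practice the human reviewer must have real authority to override the system; a rubber-stamp click-through would not satisfy the requirement.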

Special Category Data: The AI No-Go Zones

The GDPR draws a sharp distinction between regular personal data and special category data. The latter receives the highest level of protection, and AI systems that process it face strict additional requirements.

Special category data includes data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, biometric data used for unique identification, and health data.

In the Netherlands, as across the EU, processing special category data is prohibited by default. It is only permitted in specific, narrow circumstances such as where the individual has given explicit consent, where processing is necessary for healthcare purposes with appropriate safeguards, or where required for employment law obligations.

AI systems that inadvertently infer or process special category data create serious legal exposure. For example, an AI that infers health conditions from purchasing data, or that uses facial recognition to infer ethnicity, would be processing special category data even without explicitly being designed to do so.

Privacy by Design: Building AI the Right Way in the Netherlands

One of the most forward-looking concepts in EU privacy law is privacy by design and by default. This principle, embedded in Article 25 of the GDPR, requires that data protection be built into AI systems from the very beginning, not bolted on as an afterthought.

What does this mean practically? AI systems should collect only the minimum data necessary for their purpose. Personal data should be anonymised or pseudonymised wherever possible. Access to personal data within AI systems should be restricted to those who genuinely need it. Retention periods should be defined and enforced automatically. Data subjects should be able to exercise their rights such as access, correction, and deletion easily.

In the Netherlands, the AP has consistently emphasised privacy by design in its enforcement priorities. Companies that can demonstrate their embedded privacy considerations from the start are far better positioned in any regulatory review.

The Dutch government’s own guidelines for public sector AI use (published through the Ministry of the Interior) reinforce this, stating that government AI systems must be explainable, non-discriminatory, and respectful of fundamental rights.

What About AI and Children’s Data? Extra Care Required

The Netherlands applies particular vigilance when it comes to children’s data. Under the GDPR, children deserve special protection because they are less aware of the risks and consequences of sharing personal data.

In the Netherlands, under the UAVG, parental consent is required for data processing of children under the age of sixteen. Sixteen is the GDPR’s default threshold; member states may lower it to thirteen, but the Netherlands has chosen to keep the full protection.

AI systems directed at children, or that are likely to be accessed by children, must therefore be designed with this in mind. Age verification mechanisms, parental consent workflows, and age-appropriate privacy notices are not optional. They are legal requirements.
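A minimal age-gate check for the Dutch threshold might look like the sketch below (our own illustration; real age verification also has to establish that the stated birth date is genuine, which is the hard part in practice).

```python
from datetime import date
from typing import Optional

DUTCH_CONSENT_AGE = 16  # UAVG threshold for independent consent

def needs_parental_consent(birth_date: date, today: Optional[date] = None) -> bool:
    """True if the user is under sixteen and parental consent is required."""
    today = today or date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age < DUTCH_CONSENT_AGE
```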

AI-powered educational platforms, gaming applications, and social tools that operate in the Netherlands and handle data of minors face heightened scrutiny from the AP.

The Autoriteit Persoonsgegevens: The Watchdog with Real Teeth

Regulation only works if it is enforced. In the Netherlands, the Autoriteit Persoonsgegevens (AP) is the national data protection authority responsible for enforcing the GDPR and the UAVG.

The AP has been increasingly active in scrutinising AI-related data processing. It has the power to investigate organisations, issue warnings, impose corrective orders, and levy fines of up to 20 million euros or four percent of global annual turnover under the GDPR, whichever is higher.

The AP has published specific guidance on AI and machine learning, emphasising that AI systems must be explainable, fair, and lawful. It has flagged particular concern about AI systems that process large volumes of personal data, use opaque decision-making algorithms, or involve facial recognition and biometric identification.

Being proactive with the AP is not just good practice. Under Article 35 of the GDPR, high-risk processing activities require a Data Protection Impact Assessment (DPIA), and under Article 36, organisations must consult the AP before processing begins when a DPIA shows that significant residual risk remains.

International Data Transfers and AI: The Cloud Question

Many AI systems rely on cloud infrastructure hosted outside the European Economic Area. This raises a specific and important legal question: can personal data of Dutch residents be transferred to servers in the United States, India, or elsewhere for AI processing?

The answer is: sometimes yes, but only with conditions. The GDPR sets out specific mechanisms for lawful international data transfers: adequacy decisions, where the European Commission has determined that the destination country provides an equivalent level of protection; Standard Contractual Clauses, approved contractual provisions that create enforceable obligations on the recipient; and Binding Corporate Rules for intra-group transfers within multinational companies.

The EU-US Data Privacy Framework, which replaced the invalidated Privacy Shield arrangement, currently provides a mechanism for transfers to participating US companies. However, businesses must verify that their specific US provider has certified under this framework.

For AI providers in the Netherlands using foreign cloud services or AI APIs, this is a live compliance issue. It is not enough that the AI model itself is privacy-respecting. The data transfer mechanism must also be legally valid.

AI Can Be Brilliant and Privacy-Respecting. Here’s Your Next Step.

The Netherlands sits at the frontier of a global conversation about how to make AI work for people, not against them. The legal framework is among the most comprehensive in the world. And yes, it is complex. But it is also navigable, and compliance is absolutely achievable.

Here’s the energising truth: businesses that get privacy right are not just avoiding fines. They are building trust. They are differentiating themselves in markets where customers increasingly care about data ethics. They are future-proofing their operations against tightening global regulation.

And for individuals? Knowing your rights means you are not powerless in the age of AI. You can request access to your data. You can ask for explanations of automated decisions. You can object to profiling. You can withdraw consent. These are real, enforceable rights.

The conversation between privacy and AI is not a zero-sum game. Done right, they can absolutely coexist. The Netherlands, with its strong regulatory culture and innovative tech ecosystem, is proving that every day.

Privacy is not a barrier to innovation. It is the foundation for innovation that lasts.
