AI Regulation UK GDPR: 2026 Compliance Guide
AI regulation UK GDPR rules changed significantly in June 2025 with the Data (Use and Access) Act. UK businesses can now use automated decision-making more freely, but the ICO is watching closely. This guide breaks down what's changed, what it means for your AI systems, and how to stay compliant in 2026.
UK GDPR requirements for AI have shifted quite a bit since June 2025. The Data (Use and Access) Act 2025 (DUAA) became law on 19 June, and it's the biggest change to UK data protection since we left the EU. The government wants to make it easier for businesses to innovate with AI whilst still protecting people's data.
Here's the thing though. Whilst the law itself has loosened up a bit, the Information Commissioner's Office (ICO) is taking enforcement much more seriously. They've published a new AI and Biometrics Strategy that makes it clear they're paying close attention to how businesses use AI. So yes, you've got more freedom legally, but you also need to be more careful practically.
We're also stuck in a bit of a mess around AI and copyright. The tech industry and creative sectors can't agree on how AI should be allowed to use copyrighted material for training, and that argument dominated the passage of this new law. It's still not resolved, which means uncertainty for anyone building or training AI models.
In this guide, I'll walk you through what's actually changed with the DUAA, what the ICO expects from you, the copyright situation, and what you need to do to stay compliant in 2026.
The Data (Use and Access) Act 2025 got Royal Assent on 19 June 2025. It doesn't replace the UK GDPR or the Data Protection Act 2018, but it does amend them quite substantially. Most of the changes need secondary legislation to kick in properly (expected around December 2025), but some took effect immediately at Royal Assent and others on 19 August 2025, two months later.
The government's goal is to cut red tape for businesses and make it easier to use data for research and AI development. Here's a quick overview of the main changes that affect AI systems:
| Area of Reform | Pre-DUAA Position (UK GDPR) | Post-DUAA Position (Amended UK GDPR) | Direct Implication for AI Systems |
|---|---|---|---|
| Automated Decision-Making (ADM) | Article 22: General prohibition on solely automated decisions with legal/significant effects. Permitted only if necessary for a contract, authorised by law, or based on explicit consent. | Amended Article 22: Prohibition lifted for non-special category data. ADM permitted under any lawful basis (except 'recognised legitimate interests' alone). Mandatory safeguards introduced. | Broader scope to deploy ADM for applications like fraud detection, dynamic pricing, and initial recruitment screening. Requires robust implementation of new safeguards, including the right to make representations and challenge decisions. |
| Scientific Research | Definition of 'scientific research' not explicitly defined in the articles, leading to ambiguity for commercial R&D. Consent required for specific purposes. | Amended Articles: Definition broadened to explicitly include commercial research and technological development. 'Broad consent' for general areas of research is permitted under ethical conditions. | Provides a clearer legal pathway for using personal data to train and develop commercial AI models under the research provisions. Simplifies the process of obtaining consent for long-term or exploratory AI development projects. |
| Legitimate Interests | Article 6(1)(f): Requires a three-part balancing test to weigh the controller's interest against the individual's rights and freedoms. | New Article 6(1): Introduces a list of 'recognised legitimate interests' (e.g., crime prevention, safeguarding) that do not require the balancing test. | Streamlines justification for ancillary data processing activities related to AI (e.g., system security monitoring). However, it cannot be used as the sole basis for significant ADM, limiting its direct application for core AI decisioning. |
| Data Subject Access Requests (DSARs) | Article 15: Right of access to personal data. ICO guidance suggested proportionality, but the legal text was absolute. No formal "stop the clock" provision in the articles. | Amended Article 15: Codifies that controllers need only conduct a "reasonable and proportionate" search. Formalises a "stop the clock" mechanism when seeking clarification. | Provides a crucial legal defence against disproportionately burdensome DSARs concerning complex AI systems where identifying an individual's data within a trained model is technically infeasible. Places a premium on having a documented, defensible search methodology. |
| Complaints | Article 77: Data subjects have a direct right to lodge a complaint with the ICO. | New Provisions: Data subjects are required to complain to the controller first. Controllers must acknowledge within 30 days and resolve without undue delay. | Creates an opportunity for organisations to resolve AI-related complaints internally before regulatory escalation. Requires new internal processes and resources for handling complaints effectively. |
The biggest change for AI businesses is how the law treats automated decision-making (ADM). Previously, Article 22 of the UK GDPR basically said you couldn't make significant automated decisions about people unless you had their explicit consent, it was necessary for a contract, or it was required by law. That was quite restrictive.
Now, for decisions that don't involve sensitive personal data (like health information or biometric data), that ban is gone. You can use any of the standard lawful bases under Article 6, including legitimate interests. This opens up a lot more possibilities for things like dynamic pricing, automated loan assessments (as long as you're not using sensitive data), and initial CV screening in recruitment.
However, if you're processing special category data (health, ethnicity, biometrics, etc.), the strict rules still apply. You'll still need explicit consent or a substantial public interest basis defined in law.
The catch is that whilst you've got more freedom, you now have to implement mandatory safeguards for any significant automated decision. You must make sure people can:

- get information about decisions made about them
- make representations about the decision
- obtain meaningful human intervention from someone at your organisation
- contest the decision
The Act also clarifies what counts as "solely automated" decision-making. It's not just about whether a computer made the decision. There needs to be "meaningful human involvement" in the process. This means you can't just have someone rubber-stamp what the AI says and call it human oversight. The ICO will be looking at the quality of that human input, not just whether a human was technically involved.
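To make these safeguards auditable, you'll want some structure around each significant decision your system makes. Here's a minimal Python sketch of a decision record that captures whether each safeguard actually happened. All the names here are hypothetical, not taken from the Act or ICO guidance:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SignificantDecision:
    """Audit record for one significant automated decision (hypothetical schema)."""
    subject_id: str
    outcome: str                  # e.g. "loan_declined"
    model_version: str
    made_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    explanation_sent: bool = False                             # safeguard: inform the person
    representations: list[str] = field(default_factory=list)  # safeguard: let them respond
    human_review: str | None = None                            # safeguard: human intervention
    contested: bool = False                                    # safeguard: let them challenge

    def add_representation(self, text: str) -> None:
        """Record a representation from the data subject for human review."""
        self.representations.append(text)

    def record_human_review(self, reviewer: str, notes: str) -> None:
        """Capture substantive reviewer input: who looked at the decision and
        what they actually considered, not just that a box was ticked."""
        self.human_review = f"{reviewer}: {notes}"
```

The design point is that human review is captured as substantive notes rather than a yes/no flag, because it's the quality of the intervention the ICO says it will look at.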
One of the most important changes for anyone building AI models is the expanded definition of "scientific research". The DUAA makes it crystal clear that research doesn't just mean academic studies. It includes commercial research, technology development, and privately funded projects. For UK AI companies, this is huge because it gives you a much clearer legal basis for using personal data to train commercial AI products.
Complementing this is the codification of 'broad consent'. The Act permits organisations to obtain consent for a general area of scientific research, even if all the specific purposes of the processing cannot be fully identified at the time of data collection. This provision is particularly relevant for the iterative and exploratory nature of AI development, where the full potential applications of a model may not be known at the outset.
To further streamline the use of data for AI development, the DUAA establishes that any further processing for research, archiving, or statistical (RAS) purposes is automatically considered compatible with the original purpose for which the data was collected. This simplifies the legal justification for re-purposing existing datasets for new AI training objectives, reducing the need for complex compatibility assessments.
These permissive measures are, however, contingent upon the implementation of appropriate safeguards. The Act mandates that processing for RAS purposes must not be used to make decisions about specific individuals (unless for approved medical research) and must not be likely to cause anyone substantial damage or distress. Furthermore, it requires the application of technical and organisational measures, such as pseudonymisation, to protect the rights of data subjects. While these provisions create a clearer route for accessing and using training data from a data protection perspective, they do not address the separate and unresolved legal challenges related to copyright, which remains a primary hurdle for many AI developers.
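The Act doesn't prescribe particular techniques, but keyed pseudonymisation is one common way to meet that "technical and organisational measures" requirement before reusing a dataset for training. Here's a rough Python sketch; the key value and the record are placeholders, and in practice the key would live in a separate key vault, away from the training data:

```python
import hashlib
import hmac

# Placeholder: in production, fetch this from a key vault held separately
# from the dataset, so the mapping can't be reversed by anyone who only
# holds the pseudonymised data.
PSEUDONYMISATION_KEY = b"example-key-do-not-hardcode"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    digest = hmac.new(PSEUDONYMISATION_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()

record = {"email": "jane@example.com", "loan_amount": 12000}
record["email"] = pseudonymise(record["email"])  # same input always yields the same token
print(record)
```

Because the token is stable, you can still join records across tables for training without carrying the raw identifier through your pipeline.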
The DUAA adds a new lawful basis called 'recognised legitimate interests'. In theory, it sounds great because it lets you process data without doing the usual balancing test. The list includes things like crime prevention, emergency response, safeguarding vulnerable people, and national security.
For AI systems though, it's not that useful. The Act specifically says you can't use recognised legitimate interests as the legal basis for significant automated decision-making. So whilst you might use it for things like security monitoring of your AI platform, you can't use it to justify the actual AI decisions that affect people. It's deliberately limited to stop it becoming a backdoor way to avoid the proper safeguards for high-risk AI.
The DUAA codifies several aspects of existing ICO guidance on data subject rights, providing greater legal certainty for organisations, particularly those managing large and complex datasets typical of AI environments.
A key change concerns Data Subject Access Requests (DSARs). The Act gives legislative footing to the principle that controllers are only required to conduct a "reasonable and proportionate" search for personal data when responding to a DSAR. This provision, which came into force on 19 June 2025 and applies retroactively, is of immense practical value for organisations using AI. It provides a statutory basis to argue that an exhaustive search to identify every trace of an individual's data within a trained model or its vast underlying datasets is disproportionate and not legally required. This is expected to become a new area of legal contention, as the definition of "reasonable" in the context of opaque AI systems will likely be tested through complaints and litigation. Organisations will need to develop and document a defensible methodology for how they search their AI systems to meet this standard.
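One way to build that defensible methodology is to log, for every data store you hold, whether it was searched and the rationale either way. A hypothetical Python sketch (the schema and store names are illustrative, not a prescribed format):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SearchStep:
    """One documented step in a DSAR search (hypothetical schema)."""
    data_store: str    # e.g. "CRM", "feature store", "training data snapshots"
    searched: bool
    rationale: str     # why this store was, or was not, searched

@dataclass
class DsarSearchLog:
    """Evidence that the search was 'reasonable and proportionate'."""
    request_id: str
    steps: list[SearchStep] = field(default_factory=list)
    completed_at: datetime | None = None

    def record(self, data_store: str, searched: bool, rationale: str) -> None:
        self.steps.append(SearchStep(data_store, searched, rationale))

log = DsarSearchLog(request_id="DSAR-2026-0042")
log.record("CRM", True, "Holds direct identifiers; indexed search is feasible")
log.record("trained model weights", False,
           "No per-individual records are retrievable; search would be disproportionate")
log.completed_at = datetime.now(timezone.utc)
```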
The Act also formalises the "stop the clock" mechanism for DSARs. This allows controllers to pause the one-month response deadline while they await necessary clarification from a requester regarding the scope of their request.
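In practice, that means your deadline tracking has to handle pauses. Here's a small Python sketch of the date arithmetic, assuming for simplicity that the pause is measured in whole days (the statutory response period is one calendar month):

```python
from datetime import date, timedelta

def add_one_month(d: date) -> date:
    """Roll forward one calendar month, clamping to month end (31 Jan -> 28 Feb)."""
    year, month = (d.year + 1, 1) if d.month == 12 else (d.year, d.month + 1)
    try:
        return date(year, month, d.day)
    except ValueError:
        # Target month is shorter than the start month; clamp to its last day.
        next_first = date(year + 1, 1, 1) if month == 12 else date(year, month + 1, 1)
        return next_first - timedelta(days=1)

def response_deadline(received: date, clock_stopped_days: int = 0) -> date:
    """One-month DSAR deadline, extended by days spent awaiting clarification."""
    return add_one_month(received) + timedelta(days=clock_stopped_days)

# Request received 15 January; clarification took 10 days to arrive.
print(response_deadline(date(2026, 1, 15), clock_stopped_days=10))  # 2026-02-25
```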
A significant procedural shift is introduced in the complaints process. The DUAA requires individuals to lodge complaints about data protection compliance with the data controller in the first instance, before escalating the matter to the regulator. Controllers are mandated to provide an accessible means for making complaints (such as an electronic form) and must acknowledge receipt within 30 days, taking appropriate steps to resolve the issue "without undue delay". While data subjects "may" still complain to the ICO, their automatic right to do so under Article 77 has been removed. This change is intended to empower organisations to resolve issues directly and reduce the ICO's caseload, allowing it to focus on more significant or systemic issues.
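If you're standing up that process, even a simple tracker makes the 30-day acknowledgement deadline hard to miss. A hypothetical sketch, with illustrative field names:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Complaint:
    """Tracker for the DUAA's controller-first complaints route (hypothetical)."""
    complaint_id: str
    received: date
    acknowledged: date | None = None
    resolved: date | None = None

    @property
    def acknowledgement_due(self) -> date:
        # DUAA: acknowledge receipt within 30 days of the complaint arriving.
        return self.received + timedelta(days=30)

    def is_overdue(self, today: date) -> bool:
        return self.acknowledged is None and today > self.acknowledgement_due

c = Complaint("C-0101", received=date(2026, 3, 2))
print(c.acknowledgement_due)           # 2026-04-01
print(c.is_overdue(date(2026, 4, 5)))  # True: acknowledgement still outstanding
```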