The Information Commissioner’s Office (ICO) is stepping up its supervision of AI and biometric technologies, launching a new strategy in Parliament today, which includes a review of the use of automatic decision-making in recruitment.
The UK data regulator has unveiled plans to support responsible AI innovation by increasing its scrutiny in areas of public concern, so people can trust that their personal information is being used responsibly.
Speaking at an ICO event with the all-party parliamentary group for AI (AI APPG) this morning, Information Commissioner John Edwards said: “Our personal information powers the economy, bringing new opportunities for organisations to innovate with AI and biometric technologies.
“But to confidently engage with AI-powered products and services, people need to trust that their personal information is in safe hands. It is our job as the regulator to scrutinise emerging technologies – “agentic AI” for example – so we can make sure effective protections are in place, and personal information is used in ways that both drive innovation and earn people’s trust.”
The ICO’s new AI and biometrics strategy aims to ensure organisations are developing and deploying new technologies lawfully, supporting them to innovate and grow while protecting the public.
Research on automatic decision-making (ADM) in recruitment and biometric technologies reveals that the public expects to understand exactly when and how AI-powered systems affect them.
They are concerned about the consequences when these technologies go wrong – for example, if a flawed automated decision impacts their job application, or if facial recognition technology (FRT) is used inaccurately. More than half of people surveyed (54%) shared concerns that the use of FRT by police would infringe on their right to privacy.
The ICO said it will provide organisations with certainty and the public with reassurance by:
- reviewing the use of ADM systems in recruitment and working with early adopters
- conducting audits and producing guidance on the lawful, fair and proportionate use of FRT
- setting clear expectations to protect people’s personal information when used to train generative AI models
- developing a statutory code of practice for organisations developing or deploying AI responsibly to support innovation while safeguarding privacy, and
- scrutinising emerging AI risks and trends, such as the rise of agentic AI, in which systems become increasingly able to act autonomously.
The ICO’s AI strategy
The ICO launched the strategy at its 40th anniversary event with the AI APPG in Parliament this morning, where politicians, industry leaders, and civil society gathered to discuss privacy in responsible AI use in the economy.
Edwards added: “The same data protection principles apply now as they always have – trust matters and it can only be built by organisations using people’s personal information responsibly.
“Public trust is not threatened by new technologies themselves, but by reckless applications of these technologies outside of the necessary guardrails. We are here, as we were 40 years ago, to make compliance easier and ensure those guardrails are in place.”
Lord Clement-Jones, Liberal Democrat peer and the AI APPG co-chair, said: “The AI revolution must be founded on trust. Privacy, transparency, and accountability are not impediments to innovation; they constitute its foundation. AI is advancing rapidly, transitioning from generative models to autonomous systems.
“However, increased speed introduces complexity. Complexity entails risk. We must guarantee that innovation does not compromise public trust, individual rights, or democratic principles.”
Dawn Butler, Labour MP and vice chair of the AI APPG, said: “Artificial intelligence is more than just a technology change; it is a change in society. It will increasingly change how we get healthcare, attend school, travel, and even experience democracy. But AI must work for everyone, not just a few people, to change things. And that involves putting fairness, openness, and inclusion into the underpinnings.”
The ICO strategy builds on its publication of policy positions on generative AI and action it has taken against organisations, such as ordering Serco Leisure last year to stop using biometric technology to monitor its employees.
The ICO said it will consult on updated guidance for ADM and profiling, develop a statutory code of practice on AI and ADM, and produce a horizon-scanning report on the data protection implications of agentic AI.
What is agentic AI?
Agentic AI is a more autonomous, proactive model of artificial intelligence. It can act without step-by-step human guidance because it works towards the user’s objectives. While agentic AI systems still draw on the generative abilities of models such as ChatGPT, they focus on making decisions rather than creating content: they optimise for specific goals and can carry out a sequence of actions to achieve them.
Edwards said: “Agentic AI is the next chapter of the AI evolution, with systems becoming increasingly capable of acting autonomously. Whereas generative AI might be able to write your shopping list, an agentic AI might be able to access an online shop and use your ID and payment details to place an order.”