The first provisions of the EU AI Act take effect in February 2025, and the legislation will affect any company that develops or deploys large language model tools, including UK employers. James Flint explains.
The excitement around the breakthroughs in AI over the past 18 months or so, combined with the uncertain economic climate, has driven a diverse range of HR departments to adopt the technology at pace.
Chatbots can answer first-touch candidate queries, help schedule interviews and provide progress updates on applications in a more personable way than a traditional website.
AIs can crawl job boards, social and professional networking sites and internal databases to find candidates with particular skill sets. When CVs come in, an AI can screen and rank them more effectively than an overstretched human team.
The technology can also automate the onboarding administration of document verification, help design and run training, analyse internal surveys and spot trends in employee sentiment, and even – slightly counter-intuitively – remove the kind of unconscious human bias that often creeps into performance reviews.
Forthcoming legislation
The guidelines for deploying tools like these used to be defined principally by the data protection principles set out in Article 5 of the GDPR (including accuracy, fairness, transparency and confidentiality) and the provisions around “automated decision-making” in Article 22.
But as of 2 February 2025, when its first provisions begin to apply, there’s a new kid on the regulatory block: the EU AI Act, which classifies any AI activity in the areas of recruitment, HR or worker management as “high risk”.
UK businesses that develop or deploy an AI system used in the EU will need to comply with the Act.
And that is likely to bring many forward-looking HR departments within scope, adding an extra layer of legislative overhead of which they need to be aware.
The technology should already be prompting questions such as: if AI can be used so readily in all these areas, what happens when it starts underperforming or gets things flat-out wrong?
If the AI adds bias instead of removing it, hallucinates candidates’ qualities instead of assessing them, or misreads cohort data instead of correctly identifying the patterns within it, the results can be catastrophic for employer and employee alike.
The wrong people can be hired, the wrong teams reprimanded and the wrong ads posted, all of which damages company culture at best and, once those effects filter down into people’s day-to-day lives, can end in expensive litigation, PR disasters and worse.
‘High-risk’ activities
This is why the EU AI Act places recruitment, HR and worker management applications of AI in its “high-risk” category, where they sit alongside the likes of AI in medical devices, autonomous vehicles, law enforcement, biometric identification and critical infrastructure.
If you’re creating AI systems for these kinds of use cases, this makes you an “AI provider,” and AI providers are subject to a whole list of regulatory obligations, including (but not limited to):
- Implementing an appropriate risk management process
- Using data sets that are fit for purpose and free of bias
- Maintaining technical documentation and ensuring human oversight
- Completing something called a conformity assessment, which is a bit like a data protection impact assessment (DPIA), but for AI
- Registering your model in the official EU database of high-risk AI systems
- Monitoring and correcting your system for performance and safety after it’s been deployed
It’s important to understand that you will be classified as an AI provider, and charged with all these responsibilities, even if you’re just taking an existing model, say an open-source large language model (LLM) such as Mistral or Llama, and fine-tuning it on your own datasets or adding a retrieval-augmented generation (RAG) layer.
So, if you’ve got a proactive team that’s been building you a few fancy tools with all this new tech, beware – you might be in for more paperwork than you bargained for.
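To make that concrete for readers with a technical team in-house, here is a minimal, illustrative sketch in Python of the kind of retrieval-augmented wrapper a proactive team might have built over an off-the-shelf model. The document set, the helper names (retrieve, call_llm, answer_candidate_query) and the stubbed model call are all hypothetical, not taken from any real product; the point is simply how little glue code around someone else’s LLM can be enough to bring you within the provider obligations described above.

```python
# Illustrative sketch only: a tiny retrieval-augmented generation (RAG) layer
# over an existing LLM, using just the Python standard library. The model call
# below is a placeholder; in practice it would hit whatever open-source or
# hosted model the team has chosen (e.g. a self-hosted Llama).

from collections import Counter

# Internal HR documents the assistant is allowed to draw on (made-up examples).
DOCUMENTS = [
    "Interview feedback must be recorded within 48 hours of the interview.",
    "Candidates for engineering roles complete a paired coding exercise.",
    "All shortlisting decisions require sign-off from a human hiring manager.",
]


def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = Counter(query.lower().split())
    scored = [
        (sum(query_terms[word] for word in doc.lower().split()), doc)
        for doc in documents
    ]
    scored.sort(reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]


def call_llm(prompt: str) -> str:
    """Placeholder for the underlying model call."""
    return f"[model response to a {len(prompt)}-character prompt]"


def answer_candidate_query(question: str) -> str:
    """Build a prompt grounded in retrieved HR policy text and ask the model."""
    context = "\n".join(retrieve(question, DOCUMENTS))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)


if __name__ == "__main__":
    print(answer_candidate_query("How are shortlisting decisions approved?"))
```

However modest the engineering effort involved, once a tool like this is answering real candidates or informing real hiring decisions, the article’s point stands: the organisation that built and deployed it carries the provider-style compliance burden, not just the developer of the underlying model.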
Even if you’re just deploying a system that you’ve bought in from elsewhere, the Act will still hold you to account for a list of requirements that include purpose limitation, human oversight, monitoring of input data, record-keeping, incident reporting, transparency to affected parties, and mitigation of bias and risks.
And this is before we even get to the ethical considerations with which both providers and users must ensure their systems align.
Good AI governance
AI is not like traditional software. Traditional software is deterministic: it does what it’s told, and when it doesn’t, that’s a problem that can be fixed (in theory, at least) by making changes to the program. But AI systems are inherently probabilistic.
They work based on the statistical analysis of large amounts of data, and while this makes them much more robust when dealing with real-world uncertainty than traditional software systems, it also means that their outputs are inherently uncertain.
Compliance with the EU AI Act should be seen as a way of containing that uncertainty and keeping it permanently under review.
The focus should be on achieving full oversight of data, fostering transparency, and designing processes that serve both the business and its candidates.
This involves conducting thorough audits of existing AI systems, putting rigorous conformity-by-design practices in place for new ones, and training teams on the ethical use of AI.
By taking a proactive approach and implementing good AI governance from the outset, HR departments can avoid the pitfalls of rushed implementation and ensure that this exciting technology proves a boon for their organisations, not a menace.