The UK privacy watchdog is set to probe whether employers using artificial intelligence in their recruitment systems could be discriminating against ethnic minorities and people with disabilities.
John Edwards, the information commissioner, has announced plans for an inquiry into the automated systems that screen job candidates, including looking at employers’ evaluation techniques and the AI software they use.
Over recent years, concerns have mounted that AI in many cases discriminates against minorities and others because of the speech or writing patterns they use. Many employers use algorithms to whittle down digital job applications, enabling them to save time and money.
Regulation has been seen as slow to take up the challenge presented by the technology, with the TUC and the All-Party Parliamentary Group on the Future of Work keen to see laws introduced to curb any misuse or unforeseen consequences of its use. Frances O’Grady, TUC general secretary, said: “Without fair rules, the use of AI at work could lead to widespread discrimination and unfair treatment — especially for those in insecure work and the gig economy.”
Edwards pledged that his plans over the next three years would consider “the impact the use of AI in recruitment could be having on neurodiverse people or ethnic minorities, who weren’t part of the testing for this software”.
Autism, ADHD and dyslexia are included under the umbrella term “neurodiverse”.
A survey of recruiting executives carried out by consulting firm Gartner last year found that almost all reported using AI for part of the recruiting and hiring process.
The use of AI in the recruitment process is seen as a way of removing management biases and preventing discrimination, but it could be having the opposite effect, because the algorithms themselves can amplify human biases.
Earlier this year Estée Lauder faced legal action after two employees were made redundant by algorithm. Last year, AI-driven facial recognition software used by Uber was alleged to be in effect racist. And in 2018, Amazon ditched a trial of a recruitment algorithm after it was found to favour men and to reject applicants because they had attended women-only colleges.
A spokesperson for the Information Commissioner’s Office said: “We will be investigating concerns over the use of algorithms to sift recruitment applications, which could be negatively impacting employment opportunities of those from diverse backgrounds. We will also set out our expectations through refreshed guidance for AI developers on ensuring that algorithms treat people and their information fairly.”
The ICO’s role is to ensure people’s personal data is kept safe by organisations and not misused. It has the power to fine them up to 4% of global turnover as well as to order undertakings from them.
Under the UK’s General Data Protection Regulation (which is enforced by the ICO), people have the right not to be discriminated against in the processing of their data. The ICO has warned in the past that AI-driven systems could lead to outcomes that disadvantage particular groups if the data set the algorithm is trained and tested on is incomplete. The UK Equality Act 2010 also offers people protection from discrimination, whether caused by a human or an automated decision-making system.
In the US, the Department of Justice and the Equal Employment Opportunity Commission warned in May that commonly used algorithmic tools including automatic video interviewing systems were likely to be discriminating against people with disabilities.
Legal comment
Joe Aiston, senior counsel at Taylor Wessing, said that in addition to issues of unconscious bias “which inevitably regularly impact companies’ hiring processes where human decisions are being made”, care needs to be taken when using any form of artificial intelligence software in recruitment.
“Whilst some AI recruitment software is marketed as working to avoid biases and potential discrimination in the recruitment process, depending on the algorithms and decision making processes used, there is a risk that such software could result in discrimination issues of its own. For example, if recruitment software analyses writing or speech patterns to determine who weaker candidates might be, this could have a disproportionately negative impact on individuals who do not have English as a first language or who are neurodiverse. A decision made by AI to reject such a candidate for a role purely on this basis could result in a discrimination claim against the employer despite that decision not having been made by a human.
“A particular issue for employers is that the software they may opt to use to streamline the selection process could be utilising discriminatory selection processes without their knowledge. It is therefore important that the supplier of the software is made to clearly set out what selection criteria and algorithms the software is intended to use and how these will be applied, in order that the company can assess any potential discrimination risk and so that this can be rectified.”
The law and the regulators were playing catch-up with this relatively new area of potential risk, Aiston added, but it was likely that further regulation would be introduced.
Natalie Cramp, CEO of data science consultancy Profusion, said the ICO’s investigation into whether AI systems showed racial bias was very welcome and overdue. This should only be a first step in tackling the dangers of discriminatory algorithms, she added.
“There have been a number of recent incidents where organisations have employed algorithms for functions such as recruitment, and the result has been racial or sexist discrimination. In many cases the problem was not uncovered for several months or even years. This is because bias has either been built into the algorithm itself or has come from the data that has been used. Critically, there has then been little human oversight to determine whether the outputs of the algorithm are not only correct but also fair.
“These algorithms have essentially been left to their own devices, leading to thousands of people having negative impacts on their opportunities,” said Cramp.
“Ultimately an algorithm is a subjective view in code, not objective. Organisations need more training and education to both verify the data they use and challenge the results of any algorithms. There should be industry-wide best practice guidelines that ensure that human oversight remains a key component of AI. Organisations cannot rely on one team or individual to create and manage these algorithms.”
An ICO investigation alone will not tackle these issues, she added. “Without this safety net people will quickly lose confidence in AI and with that will go the huge potential for it to revolutionise and better all our lives.”
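The kind of human oversight Cramp describes can be made concrete. One simple check is to compare an automated sift’s selection rates across demographic groups and flag any group falling well below the best-performing group’s rate. The Python sketch below illustrates the idea; the data fields, group labels and the four-fifths threshold are illustrative assumptions borrowed from US adverse-impact guidance, not anything the ICO has prescribed.

```python
# A minimal sketch of an output check on an automated sift: compare
# selection rates across demographic groups and flag any group falling
# below four-fifths of the best-performing group's rate (the US
# "four-fifths" adverse-impact rule of thumb). Field names, group labels
# and the 0.8 threshold are illustrative assumptions, not ICO guidance.
from collections import defaultdict

def selection_rates(candidates):
    """Proportion of candidates the sift selected, per demographic group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for c in candidates:
        totals[c["group"]] += 1
        selected[c["group"]] += c["selected"]  # True counts as 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_flags(candidates, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate, signalling the outcome needs human review."""
    rates = selection_rates(candidates)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical outcomes from one screening run
outcomes = [
    {"group": "A", "selected": True},  {"group": "A", "selected": True},
    {"group": "A", "selected": False},
    {"group": "B", "selected": True},  {"group": "B", "selected": False},
    {"group": "B", "selected": False},
]
print(selection_rates(outcomes))       # {'A': 0.666..., 'B': 0.333...}
print(adverse_impact_flags(outcomes))  # {'A': False, 'B': True}
```

A check like this does not prove an algorithm is fair, but run routinely it surfaces the disparities that, in the incidents Cramp cites, went unnoticed for months or years.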