The government has published guidance on responsible AI in recruitment to help employers reduce the likelihood of introducing systems that perpetuate biases and discrimination, or exclude people without digital skills from jobs.
The Department for Science, Innovation and Technology’s guide says that AI and automation will simplify existing recruitment and HR processes, promising greater efficiency, consistency and scalability. However, there are numerous ethical risks of using AI in recruitment, including the potential for discrimination and bias against applicants at every stage of the hiring process.
The guidance outlines what organisations should consider before procuring an AI system: what problem they are trying to solve and how AI can help address it; how they will communicate the use of AI to potential job applicants; whether the AI systems on the market have the capabilities to produce the desired outputs; and whether employees will need training or additional resources to use the system.
It says that the procurement, deployment and use of AI in HR and recruitment must adhere to the government’s AI regulatory principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; contestability and redress.
Organisations should also consider any accessibility requirements to ensure people with disabilities or impairments are not disadvantaged by the system and should complete a data protection impact assessment to ensure their AI is legally compliant.
The guidance, which was developed alongside organisations including the CIPD, the Association of Professional Staffing Companies (APSCo) and the Recruitment and Employment Confederation, also outlines considerations for employers once the system is live: whether the chosen supplier conducts regular performance testing, what processes exist for gathering feedback from employees and job applicants, and whether bias audits are carried out.
Tania Bowers, global public policy director at APSCo, said clear guidance from the government has a crucial role to play in mitigating risks from AI.
She said: “There are a number of principles that staffing companies must guarantee they are following so that they aren’t exposing their business to potentially discriminatory systems or inadvertently implementing AI that doesn’t follow the required functions or intentions that these tools should be used for.
“It’s important to add that these guidelines have been developed to inform staffing firms and aren’t written into law. In order to support our members as they navigate the complex AI landscape, we are producing a ten-step plan which provides recruitment firms a roadmap to follow so that they are compliantly implementing new tools into their solutions.”
Earlier this month, a job candidate brought a discrimination lawsuit against Workday, claiming he had been rejected from 100 jobs at employers that use the tech giant’s AI tools in their recruitment processes.
The EU Parliament adopted its Artificial Intelligence Act earlier this month, which is likely to have an impact on recruiters or organisations with services in the EU. The act categorises different technologies based on their level of risk, ranging from “unacceptable” – which would see the technology banned – to high, medium and low risk.