The use of algorithms in “life-changing” decisions, such as recruitment, should be stopped immediately as they are often “infected with biases”, according to an expert on artificial intelligence.
Noel Sharkey, a professor of AI and robotics at the University of Sheffield, suggested that AI decision-making should be tested in the same way as new pharmaceutical drugs are evaluated before they are released to the market.
AI in recruitment
He told the Guardian: “There are so many biases happening now, from job interviews to welfare to determining who should get bail and who should go to jail. It is quite clear that we really have to stop using decision algorithms.”
He said there should be strict regulation of all decision algorithms. “There should be a moratorium on all algorithms that impact on people’s lives. Why? Because they are not working and have been shown to be biased across the board.”
Prof Sharkey suggested that they should be tested on hundreds of thousands of people in order to eradicate major in-built biases in the data they use.
The use of AI and algorithms in recruitment is expected to grow. Earlier this year an interim report from the government’s Centre for Data Ethics and Innovation suggested that there was enormous potential for algorithmic systems to improve recruitment – for example, being able to recommend jobs to people that they might not search for or think themselves able to apply for. However, recruitment firms needed to ensure that the recommendations they made were not discriminatory, it warned.
“Vendors of algorithmic recruitment tools, such as employment assessments to screen candidates, are exploring bias mitigation approaches but lack clear guidance on how to develop these,” the CDEI’s report says.
“We appreciate the sector has no clear regulator. As with financial services, we will consider with stakeholders the potential governance arrangements for overseeing the mitigation of bias across this sector.”
The report also notes that incidents such as Amazon’s decision to withdraw its recruitment algorithm in October 2018 because of gender bias highlighted the risk that such technology can further embed existing biases.
Suki Sandhu, CEO and founder of INvolve, a membership organisation that champions diversity, and of the executive search firm Audeliss, said the success of AI as a recruitment tool often depended on the team developing it. “If the teams building the algorithms and assistive software aren’t diverse in how they work, and the data they are using is intrinsically biased, then this will be reflected in the output and organisations will find themselves in a vicious circle.
“Finding the right person for any role must be at the core of all recruitment processes and a personal, human touch is necessary to ensure that. Ultimately, a company can’t rely on technology to create a diverse workforce. Where hiring is concerned, we must strike a healthy balance between human touch and automation.
“Artificial intelligence has the potential to be a brilliant way to remove bias from the recruitment process, and in some cases is already being effectively used,” he said.