Unconscious bias can stop us making the best recruitment and progression decisions at work, but what if some or all of the processes involved were automated? Robert Bolton, partner at the Global HR Centre of Excellence at KPMG in the UK, examines the potential for technology to help employers shake off discrimination.
No matter how hard they try, people find it impossible to keep unconscious bias from affecting their decisions, leading to continued discrimination in employment and business practices.
But for computers, it’s a different matter. And this has enormous potential when it comes to the challenge of stamping out discrimination across organisations.
We’ve all seen the headlines about AI systems producing racist or otherwise discriminatory outputs, but it’s not the technology that is biased – it’s the data that it relies on.
Cognitive systems are trained on historical data sets laced with our subjective judgements, so of course they inherit those failings.
Now that we’re aware of this, we need to create rigorous testing techniques and new standards for assessing algorithms for bias – particularly with cognitive systems being used for applications as diverse as policing, banking and recruitment.
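To make the idea of testing an algorithm for bias concrete, here is a minimal sketch of one common check: comparing a model's selection rates across candidate groups (sometimes called a demographic parity check). All function names, group labels and data below are hypothetical, invented purely for illustration – this is not any vendor's actual auditing tool.

```python
# Sketch of a demographic parity check on a hiring model's decisions.
# A decision of 1 means the candidate was shortlisted, 0 means rejected.

def selection_rate(decisions):
    """Fraction of candidates in a group who were selected."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rate between any two groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs for two candidate groups.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 selected = 0.625
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 selected = 0.375
}

print(demographic_parity_gap(decisions))  # 0.25
```

A real audit would go further – testing across many protected characteristics, with statistical significance tests and agreed thresholds – but the principle is the same: measure the system's outputs group by group rather than trusting the training data.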
AI also offers tremendous promise in helping humans to address their own unconscious biases. We’ve already seen that when a reviewer doesn’t see the name and gender on a CV and judges only the achievements, they make different selections.
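The mechanics of that kind of "blind" screening are simple to sketch: strip the identifying fields before the CV reaches the reviewer. The field names and sample data below are assumptions for illustration, not a description of any real screening system.

```python
# Hypothetical sketch of blind CV screening: remove identifying fields
# so the reviewer sees only achievements and qualifications.

REDACTED_FIELDS = {"name", "gender", "date_of_birth", "photo"}

def blind_cv(cv: dict) -> dict:
    """Return a copy of the CV with identifying fields removed."""
    return {k: v for k, v in cv.items() if k not in REDACTED_FIELDS}

cv = {
    "name": "A. Candidate",
    "gender": "F",
    "achievements": ["Led a team of 5", "Cut costs by 12%"],
    "qualifications": ["BSc Economics"],
}

print(sorted(blind_cv(cv)))  # ['achievements', 'qualifications']
```

In practice the harder problem is that achievements themselves can leak identity (club memberships, employer names, dates), which is one reason some employers move beyond CVs altogether, as the next example shows.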
Take Unilever, which has adopted a selection approach in which candidates perform in a series of games, and an algorithm assesses performance against a predetermined personality profile. That way, the company is not asking