Artificial intelligence: Could an algorithm rid us of unconscious bias?

Unconscious bias tends to be entrenched, so could artificial intelligence make it less widespread?
imageBROKER/REX/Shutterstock

Unconscious bias can stop us making the best recruitment and progression decisions at work, but what if some or all of the processes involved were automated? Robert Bolton, partner at the Global HR Centre of Excellence at KPMG in the UK, examines the potential for technology to help employers shake off discrimination.

No matter how hard they try, people find it impossible to keep unconscious bias from affecting their decisions, leading to continued discrimination in employment and business practices.

But for computers, it’s a different matter. And this has enormous potential when it comes to the challenge of stamping out discrimination across organisations.

We’ve all seen the headlines about AI systems producing racist or otherwise discriminatory outputs, but it’s not the technology that is biased – it’s the data that it relies on.

Cognitive systems are trained by historical data sets that are laced with our subjective judgements, so of course they inherit failings in the system.

Now that we’re aware of this, we need to create rigorous testing techniques and new standards to assess algorithms for bias – particularly as cognitive systems are used for applications as diverse as policing, banking and recruitment.
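One widely used screening heuristic for this kind of testing is the “four-fifths rule”: if the selection rate for any group falls below 80% of the rate for the most-selected group, the process is flagged for adverse impact. The sketch below is illustrative only – the records are invented and a real audit would go much further – but it shows how simple the first check can be.

```python
# Minimal sketch of a four-fifths-rule adverse-impact check.
# Each record is (group, selected); the data below is invented.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", True),
]

def selection_rates(records):
    """Return the selection rate for each group."""
    totals, hits = {}, {}
    for group, selected in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (1 if selected else 0)
    return {g: hits[g] / totals[g] for g in totals}

def passes_four_fifths(records, threshold=0.8):
    """True if the lowest group rate is at least 80% of the highest."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values()) >= threshold

print(selection_rates(decisions))      # {'A': 0.75, 'B': 0.5}
print(passes_four_fifths(decisions))   # 0.5 / 0.75 ≈ 0.67 < 0.8 → False
```

A check like this would run over an algorithm’s shortlisting decisions, not its code: it tests outcomes, which is where the inherited bias shows up.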

AI also offers tremendous promise in helping humans to address their own unconscious biases. We’ve already seen that when reviewers cannot see the name or gender on a CV and judge achievements alone, they make different selections.
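The same blinding idea is easy to automate: strip identity fields from a candidate record before any reviewer – human or model – sees it. The field names below are assumptions for illustration.

```python
# Sketch of "blind" screening: remove identity fields before review.
# The field names are illustrative, not a real HR schema.
IDENTITY_FIELDS = {"name", "gender", "age", "photo"}

def anonymise(candidate: dict) -> dict:
    """Return a copy of the record with identity fields removed."""
    return {k: v for k, v in candidate.items() if k not in IDENTITY_FIELDS}

candidate = {
    "name": "J. Smith",
    "gender": "F",
    "achievements": ["Led migration project", "Top sales, 2023"],
    "years_experience": 7,
}
print(anonymise(candidate))
# {'achievements': ['Led migration project', 'Top sales, 2023'], 'years_experience': 7}
```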

Algorithm assessments

Take Unilever, which has adopted a selection approach in which candidates perform in a series of games, and an algorithm assesses performance against a predetermined personality profile. That way, the company is not asking someone whether they have the experience. Instead, the algorithm is assessing: does this person actually have those skills?
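How such a match might be scored is not public, but one simple approach is to compare a candidate’s game-derived trait scores against the target profile. The traits, scores and scoring rule below are all assumptions, sketched for illustration.

```python
# Sketch of matching game-derived trait scores (assumed to lie in [0, 1])
# against a predetermined target profile. Trait names are invented.
def profile_match(candidate: dict, target: dict) -> float:
    """Return similarity in [0, 1]; 1 means a perfect match on every trait."""
    # Mean absolute gap across traits, converted to a similarity score.
    gaps = [abs(candidate[t] - target[t]) for t in target]
    return 1 - sum(gaps) / len(gaps)

target = {"risk_tolerance": 0.7, "focus": 0.8, "teamwork": 0.6}
candidate = {"risk_tolerance": 0.6, "focus": 0.9, "teamwork": 0.6}
print(round(profile_match(candidate, target), 3))  # 0.933
```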

There are tech startups already working on using AI to do the initial job interview, and others working on facial recognition software to detect body language and emotion cues to help screen candidates.

In the future, such AIs will make a judgement, based on a job description, about whether a candidate meets the required personality profile.

If we can design systems like that – and rigorously test them to ensure the results are bias-free – then candidate shortlists are likely to be more objective and diverse as a result.

Of course, that presents a variety of challenges for the 60% of HR departments that are planning to adopt cognitive automation in the next five years, according to KPMG’s 2017 HR Transformation Survey.

One of these challenges is identifying the kind of talent we want our AI assistant to find. Again, data can help.

Predictors of performance

By looking at existing employee data, it’s already possible to identify promising qualities in a job candidate; at KPMG, we now have an analytics capability that does that in near-enough real time. The results of this approach can be fascinating.

For one client, we were able to identify predictors of upper-quartile performance in a sales job nine months ahead, based on new starters’ first-month data. The results were often unexpected and subtle: one predictive factor was how new starters chose to network – or where they sought advice.
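A first pass at this kind of analysis can be very simple: for each first-month feature, compare its average among starters who later reached the upper quartile with its average among those who didn’t. The data and feature names below are invented; a real analysis would use proper statistical tests, not raw mean gaps.

```python
# Toy sketch of screening first-month features for predictors of later
# upper-quartile performance. All records are invented.
new_starters = [
    # (contacts_in_first_month, advice_requests, top_quartile_at_9_months)
    (12, 5, True), (10, 6, True), (3, 1, False),
    (4, 2, False), (11, 4, True), (2, 1, False),
]

def mean(xs):
    return sum(xs) / len(xs)

def predictive_gap(records, feature_index):
    """Difference in feature means: top-quartile starters vs the rest."""
    top = [r[feature_index] for r in records if r[2]]
    rest = [r[feature_index] for r in records if not r[2]]
    return mean(top) - mean(rest)

print(predictive_gap(new_starters, 0))  # networking gap: 8.0
print(predictive_gap(new_starters, 1))  # advice-seeking gap: ≈ 3.67
```

A large gap flags a feature worth investigating; it does not by itself establish the causal relationship the next example turns on.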

In another role, the indicators will vary, but it shows that if you can get at the information, you can surface some very interesting insights. There can be a flipside, though: one bank did similar work around upper-quartile performance.

This bank crunched the numbers, worked out a set of six or seven factors that it believed had a causal relationship with performance, and recruited against this model.

However, regulation then changed around the range of products that this part of the bank sold, and it was only after reassessing the model that the bank realised the new rules had altered the indicators of performance.

Therein lies a lesson: the tendency of early cognitive systems will be to steer companies towards a monoculture. AI systems need to be rigorously assessed and retrained in order to ensure that the algorithm is up to date.
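The bank’s experience suggests an obvious safeguard: keep scoring the model against fresh outcomes and flag it for reassessment when recent accuracy drifts well below the level it was validated at. The threshold and figures below are illustrative assumptions, not a recommended policy.

```python
# Sketch of a drift check: flag a model for reassessment when recent
# accuracy falls well below its validated baseline. Thresholds are illustrative.
def needs_retraining(baseline_accuracy, recent_outcomes, tolerance=0.10):
    """True if recent accuracy drops more than `tolerance` below baseline.

    `recent_outcomes` is a list of 1 (prediction correct) / 0 (incorrect).
    """
    recent_accuracy = sum(recent_outcomes) / len(recent_outcomes)
    return recent_accuracy < baseline_accuracy - tolerance

# Model validated at 82% accuracy; after a regulatory change, recent
# predictions are right only 6 times out of 10.
print(needs_retraining(0.82, [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]))  # True
```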

Diversity means innovation

Monocultures are anathema to innovation. Innovation comes from the boundaries of things: the interplay between domains of knowledge, of different cultures and mindsets.

It’s one department seeking to collaborate with another, with unexpected results. Many recent reports suggest that organisations with more diverse boards perform better in the long term for exactly this reason.

Companies introducing AI systems will need to think hard not just about what automation means for efficiency, but what it means for their company culture and values.

The companies leading this field are creating automation “centres of excellence” that can build out best practice. That’s something that should continue; as AI develops, we’re going to need rigorous assessment and reassessment of algorithms.

So, while AI certainly introduces new challenges when it comes to diversity, there are also tremendous opportunities.

The implementation of cognitive automation is not just a technological question: it’s a cultural one. Before you start transforming your company, ask yourself: “How do we want to use automation for the benefit of customers, employees and even for society? What kind of company do we want to be?”

About Robert Bolton

Robert Bolton is partner and co-leader of KPMG’s Global HR Centre of Excellence.