Whether it is aptitude tests, Belbin, the nine-box grid or Myers-Briggs, HR professionals often find themselves facilitating cognitive or psychological assessments in areas such as recruitment, learning and development (L&D) and talent management. But as Jo Faragher finds out, the tools that are used aren’t always right for the job.
There was a storm of criticism in April 2013 when it was reported that the Department for Work and Pensions (DWP) had asked unemployed people to take psychometric tests that appeared to generate the same result, no matter what answers were given. According to newspaper reports, even the US institute that devised the questionnaire told the DWP to stop using the tests, as they could not be scientifically validated.
With increasing pressure to evidence their decisions, it is natural that HR professionals want to call on an arsenal of tools to support their reasoning – whether in recruitment, talent management or L&D. And while the DWP example shows how tools can be implemented for the wrong reasons, in some cases organisations could be accused of blithely applying tools and approaches they’ve used for years, with little consideration of whether they’re still fit for purpose.
Outdated models and financial pressures
In 2012, the Chartered Institute of Personnel and Development’s Learning and Talent Development Survey picked out this theme. It found that the three most commonly used learning models were all more than 30 years old, and that practitioners had “low awareness” of more modern diagnostic tools from neuroscience, cognitive research and economics. John McGurk, a learning and talent development adviser, pointed to the ongoing popularity of models such as Myers-Briggs (published 1962), Belbin (1981) and the Honey and Mumford Learning Styles Questionnaire (1982).
McGurk believes that – while awareness of other tools might have grown – HR professionals remain complacent, tending to stick with what they know.
“One of the issues here is availability bias; people will use whatever’s available until they know of something different,” he says.
“It’s not necessarily the models themselves where the problem lies – people use cut-down, unofficial versions of these tools and don’t measure their employees against the full sample in the tool, so they draw their own conclusions. People can use Myers-Briggs online, for example, but it won’t always be a truly psychometric test if it’s not benchmarked.”
Since the economic downturn, budgets have been squeezed, so departments often have to measure people with fewer resources. This places a greater burden on tools to make the process more efficient.
“We get focused on tests because they’re objective and fair, and they can make a selection for us. There’s an ‘everything will be sorted’ attitude,” says Binna Kandola, senior partner and co-founder of business psychology company Pearn Kandola.
He adds that even newer, ‘fashionable’ tools such as the nine-box grid [often used for talent management or succession planning] are “just a framework, and only as good as the person who’s operating them”.
Staying in the comfort zone
According to Ian Matheson, head of talent and assessment at Penna, what’s lacking is a desire to question the suitability of the tools used by HR: “For HR there’s a comfort zone, using things that are familiar, well tried, well tested. But do they still work? Take competency-based interviews, for example – we’ve been told for the past 20 years that a structured, competency-based interview is the way to go. But these are situational; they look at what people do now or have done in the past, whereas often we want to see how they will perform in different environments or how they handle change.”
In some cases, the tools being applied may simply not be relevant for the role, or for the candidate or employee who is using them. Many organisations, for example, will ask everyone to do a numerical reasoning test, even where the role involves no work with numbers. Or the tests are administered in such a way that certain groups are put at a disadvantage.
“With numeracy tests there are still gaps between men and women, and some of this is to do with the way the tests are administered, the way conditions impact performance. When women and men are in a room together, women perform worse because they are more aware of their gender and the ‘expectation’ that women fare less well,” says Kandola. He adds that simply being aware of this, and making small changes, could improve the reliability of the test.
Identifying future potential
Increasingly, HR should focus on tools and approaches that help predict how a person might behave. This could be by looking at how responses to certain questions have predicted behaviours in the past, or by harnessing the power of “big data” – analysing ever-greater volumes of both internal and external measurements.
“With enough responses, we can track things against a universal data sample and come up with predictive insights,” says McGurk.
A recent survey by assessment company SHL highlighted a gap between the most popular measurement tools and the goal of identifying future potential. It found that the top tools used for selection purposes were structured interviews, CVs, background checks, application forms and pre-screening questions. However, only 24% of respondents said they had a clear understanding of the potential of their workforce.
“With the exception of a structured interview, that’s a whole list of processes that research has shown add little value to the hiring process. A CV is not a great predictor of a good hire, and while background checks are great for compliance, they don’t have any evidential link to performance,” explains Ken Lahti, SHL’s vice president of product development and innovation.
HR must also avoid the temptation to see the results generated by the tools they use as black and white.
“In selection, people are often looking for a binary assessment – to hire or not hire,” says Matheson. “The smarter people are saying: ‘I now have fantastic information on this person; how can I use it for development purposes, or if we hire them for another role?’ This way you get a much better return on investment on your assessment tools.”
As organisations start hiring again, he says, those that continue to measure their people “against yesterday’s standards” could be putting their business at a disadvantage: “It’s like using a ruler with imperial measurements – an old way of measuring for a new world. If you want to select someone for a role, do a proper job analysis. Look at what is required for the role, then look at how best you can measure that. It might be the same tools you always use, it might be different ones.”
After years of reducing headcounts and struggling to maintain engagement with existing staff, the decisions these tools support carry more weight than ever, so it’s important to make sure they’re fit for the job.
“You’re hiring fewer people from more candidates, so you need to know that the tools you’re using will predict their future behaviour without chasing off the talent you’ve worked so hard to attract,” Lahti concludes.