Making the grade

As
the practice of forced ranking comes under the spotlight, Keith Rodgers finds
that it needs to be used with other measurement tools to be truly effective

If you’re in the bottom 5 per cent of performers at Siebel Systems, the
Silicon Valley-based computer software company, you’d do well to start refreshing
your résumé. Every six months, using data aggregated from an ongoing
performance appraisal process, the company culls its lowest-ranking employees.
Taking its lead from a process evangelised by Jack Welch, the former head of
GE, Siebel effectively forces its managers to face up to tough questions: which
employees really add value to the organisation, and which are a drain?

This process of ‘forced ranking’, adopted by a number of US companies, has
come under the spotlight over the last year as the economic downturn forced
companies to pay closer attention to their bottom-line costs. Criticised in
some quarters for taking a mathematical approach to a complex human issue,
ranking is nonetheless evolving in many companies into a highly sophisticated
measurement activity,
supported by a growing array of software tools and business processes. More
importantly, however, it’s now being viewed not as a standalone activity that
can make or break individual careers, but as one part of an extensive HR
portfolio that incorporates techniques such as competency profiling and
e-learning. Carried out as an isolated management activity, forced ranking is
only as good as the metrics and management disciplines that underpin it: used
in association with other business measurement and workforce improvement tools,
however, it offers organisations the chance to really leverage their human
capital assets.

At a basic level, the processes behind forced ranking are deceptively
simple. Each employee is set objectives against a specific timeframe: at the
end of the period, they’re judged on a scale of one to five (or ‘a’ to ‘e’) as
to how effectively they hit their targets. That ranking is used in both formal
appraisal processes and to determine performance-related compensation. In
organisations like GE, the data is also aggregated to provide a checklist of
which employees are failing to make the grade. In theory, by culling the bottom
performers, the company improves the average level of performance, raising the
stakes for the rest of the workforce when the next review period comes round.
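The aggregation step described above can be sketched in a few lines of code. This is purely illustrative: the function names, the data layout and the 5 per cent threshold are invented for the example, and the assumption that a higher score means better performance is the author's reading of the article, not a documented detail of any company's system.

```python
# Illustrative sketch of ranking aggregation and a bottom-slice cull.
# Assumes higher per-objective scores mean better performance.

def average_rank(reviews):
    """Mean of an employee's per-objective scores for the period."""
    return sum(reviews) / len(reviews)

def bottom_performers(scores, fraction=0.05):
    """Return the lowest-ranking employees, worst first.

    `scores` maps employee name -> list of per-objective scores;
    `fraction` is the slice of the workforce to flag (5% by default).
    """
    # Sort by average score, lowest (weakest) first.
    ranked = sorted(scores, key=lambda name: average_rank(scores[name]))
    cutoff = max(1, round(len(ranked) * fraction))
    return ranked[:cutoff]
```

In practice, as the article goes on to explain, such a list is only a starting point for management review, not an automatic decision.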

In practice, however, the process is far from simple. To begin with, judging
employees collectively assumes a level playing field that rarely exists.
Managers in different departments may set objectives that vary widely in terms
of how difficult they are to achieve, and measurement is rarely standardised.
If two employees are told to improve their sales presentation skills, for
example, one may be judged merely on how they were ranked in a training session,
another on whether they delivered a predetermined number of live presentations
and how the clients responded. Those are two different goals, and more
importantly, two very different sets of measurement – one a formalised training
process, the other a live sales scenario.

The playing field is further distorted by market and geographic conditions.
In customer-facing functions such as sales and marketing, the relative
performance of individuals operating within the same division can be affected
by numerous regional factors: expand that on a multinational scale and the
differences are greater still. Those variables have to be taken into account by
managers as they set objectives, bringing a degree of individual autonomy to a
process that theoretically should be standardised.

Finally, the scientific framework that underlies forced ranking takes little
account of the realities of people management, a point stressed by Mark Geary,
managing director of Hong Kong-based AsiaNet Consultants and a former senior HR
executive at companies such as ICI, Ladbroke and Inchcape. He believes that the
system can often be undermined because of the implications of poor ranking.
"Most managers are loath to rank people lower than ‘c’ because they don’t
want to demotivate them," he says.

"Also, if the manager’s doing their job, they shouldn’t have to wait
for an appraisal system to see someone’s a ‘c’. And if they end up rating
someone as an ‘e’, what are they doing as a manager? That’s the weakness. So
you end up tolerating under-performance. The whole area is a real can of
worms."

Proponents of forced ranking, however, argue that if the right
infrastructure is put in place, many of these anomalies can be ironed out.
Anthony Deighton, director of Siebel’s Employee Relationship Management (ERM)
division in the US, argues that successful employee performance measurement
rests on a combination of business processes and software tools, driven by
well-understood business objectives cascaded from the top down. Siebel, which
markets an ERM software suite built on the back of its own internal employee
relationship management applications, has established a top-to-bottom ranking
process internally that includes a series of management checks and balances.
Company-wide consistency in terms of the metrics deployed by managers is
enforced through three processes – training and support from the HR department,
executive review and load-balancing analytics that spotlight variances (see
below). The fact that an employee is ranked low doesn’t necessarily reflect on
the manager, he argues – it may simply mean that an individual is in the wrong
job.

More importantly, ranking also has to be seen in a wider context. Leaving
aside negative attitudes, personality clashes and other "character"
issues, the most common explanations for poor performance are that individuals
have either been badly trained or that their skillsets don’t match the
requirements of their role. By linking the appraisal procedure to learning,
competency assessment and career development processes, organisations can
tackle both the causes and effects of underachievement.

Learning Management Systems, for example, provide the IT infrastructure for
self-paced training and Internet-based virtual classrooms, and allow
organisations to monitor which individuals have taken which courses. Used in
conjunction with other management tools, they can provide the basis for more
extensive performance analysis. Managers can link improvements in individual
ranking, for example, to the training courses undertaken by those employees,
establish patterns and use that data to determine whether to extend the
training programmes to other members of their team. There are caveats to this
kind of cause and effect analysis, of course. While there may be a correlation
between sales staff who’ve gone on a particular training course and an increase
in closed deals, the number of variables is high – on the one hand, the sales
may have closed anyway, on the other, failure to close a deal may reflect more
on the customer’s budgetary constraints than the quality of the sales pitch.
That said, early adopters of this kind of HR analytics in the US argue that the
correlations thrown up have value simply because they raise
questions: finding the answers may require trial and error, but in many cases
those answers wouldn’t have been sought without the software application. As
Deighton argues, real value comes when analytics are translated into action: if
a specific training course appears to be achieving results, roll it out
elsewhere and validate the proposition. "You need an organisational
culture which allows people the flexibility to make changes, where they can
test different things – you can’t create a culture where people are so scared
to act that they can’t do anything."
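The kind of cause-and-effect analysis described above amounts to comparing how rankings moved for staff who took a course against those who didn't. The sketch below is a hypothetical illustration of that comparison, not any vendor's actual analytics; as the article stresses, a gap in the numbers is a prompt for questions, not proof that the training worked.

```python
# Illustrative comparison of ranking changes for trained vs untrained staff.

def mean(values):
    return sum(values) / len(values)

def ranking_lift(deltas, trained):
    """Difference in average ranking change between employees who took
    a course and those who did not.

    `deltas` maps employee -> change in ranking since the last review
    (positive = improved); `trained` is the set who took the course.
    """
    took = [d for employee, d in deltas.items() if employee in trained]
    rest = [d for employee, d in deltas.items() if employee not in trained]
    # A positive lift suggests (but does not prove) the course helped.
    return mean(took) - mean(rest)
```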

The training data that’s gleaned from Learning Management Systems also forms
part of the information set needed to build competency profiles, which again
link back to the appraisal process. Typically, organisations define at a broad
level the skillsets or profiles required for particular generic roles – these
are then customised by local managers for the specific requirements of the
positions in their department. The skillsets of the employees that fill each
post are then matched against the checklist of requirements, highlighting
disparities in competency levels and providing guidelines for future training
programmes, recruitment needs and career development. Populating the initial
profile database can be a daunting task – one US mobile telephone operator
estimates that it would take two people six months to build the templates
required for a 34,000-strong workforce. But the implementation timescales can
be radically reduced if employees are encouraged to build their own skills
profiles, monitored by their line of business manager – that typically requires
an internet-based IT infrastructure that gives controlled access to relevant
parts of the central competency database. Hewlett-Packard, the Silicon
Valley-based IT systems and services company, has already rolled out this kind
of competency profiling system to its most senior employees, covering some 10
per cent of its total workforce (see web feature).
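The matching step described above is essentially a comparison of two skill inventories: the levels a role requires against the levels an employee actually holds. A minimal sketch follows; the skill names and numeric proficiency levels are invented for illustration, and real competency frameworks are considerably richer.

```python
# Illustrative competency gap check: compare required vs actual skill levels.

def skill_gaps(required, actual):
    """Return {skill: shortfall} for every skill where the employee
    falls below the role's required level.

    Both arguments map skill name -> proficiency level (higher = better);
    a skill missing from `actual` counts as level 0.
    """
    return {skill: level - actual.get(skill, 0)
            for skill, level in required.items()
            if actual.get(skill, 0) < level}
```

Aggregating these per-employee gaps across a division is what turns individual profiles into the workforce-planning data the article describes next.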

While profiling has clear value at an individual level, the aggregate data is
also critical for gap analysis and workforce planning, providing senior
management with an understanding of organisational weaknesses and an insight
into the company’s capacity to expand its business or move into new markets.
Again, if the competency management process is linked to a forced ranking
system, the data will reflect not only skillsets, but also how effectively
employees deploy those skills in their day-to-day roles. As each element of
the HR function is integrated in this way, the combined value of the analytical
output increases exponentially.

Ultimately, this integrated approach to employee management extends beyond
the HR function and reaches right to the heart of business performance
measurement. "The appraisal isn’t something that takes place on an annual
basis – it should be continuous," argues Geary. "Do it the simple way
– you don’t need to do a full, big review which takes an hour or two per
individual – but you should be doing a 15 minute review of the objectives that
forms part of the quarterly business review. It’s people that deliver on the
company goals. Business performance consists of financial and people
performance, and the two need to go hand-in-hand."

Case study
Siebel: forcing the issues

Siebel Systems, the US-based developer
of customer and employee management software, has built its forced ranking
system on the back of corporate objectives that cascade down from the top of
the company.

On the first day of each quarter, chairman and CEO Tom Siebel
publishes his corporate objectives, generated from an off-site executive
meeting. By day three, senior managers will have reviewed the objectives and
created their own targets for their specific divisions. By day 15, all 8,000
employees of the company will have created their own sets of objectives in
conjunction with their managers. According to Anthony Deighton, director of
Siebel Employee Relationship Management (ERM), these objectives are reviewed on
a frequent basis through the quarter at both an individual and team level.

At the end of the quarter, employees write a self-assessment
and discuss with their line manager how effectively they hit their targets –
their performance is measured against each objective, culminating in a
one-to-five overall ranking. Managers can override the automated ranking
calculation to take into account specific factors that may have influenced
performance, such as extended sickness.

In addition to the formal ranking, the review also covers a
range of other factors, including soft measures that are not objective-based.

Siebel employs three techniques to ensure the ranking process
is carried out as consistently as possible across the company.

The HR department supplies relevant documentation, web-based training and an employee helpdesk in an
effort to standardise objectives and measurement techniques. Additionally, all
objectives are reviewed by the next layer of management. Finally, the company’s
ERM software generates a ratings and distribution report, which highlights
bands and trends.

"If someone has given everybody five, you make them
justify it," says Deighton. "If the manager sees something is skewed,
they can drill down, see details and reject a review."

This ranking system forms the basis of Siebel’s six-monthly
‘cull’ of the bottom 5 per cent of employees.

"We do the analytics, get the names, and then go and
interview them to find out if this is the right 5 per cent, or if there is a
different set," says Deighton. "This is not maths, it is people’s
lives – that 5 per cent is a blurred boundary."

Although the process may seem ruthless, Deighton argues that it
is ultimately constructive. Few people who fail to make the grade are ‘bad’
employees – maybe one-quarter or half a per cent of an organisation, he
believes. Most of them, however, are simply in the wrong job for their
skillsets – and it may be there is no suitable alternative opening within the
organisation.

"There has always got to be a bottom performer. You are
forcing managers to think about their people – who is more of a drain than a
plus? It is certainly seen as positive by the people who remain. If you do not
do it, the star performers will get frustrated and leave."
