Measurement: Go and evaluate the positive, eliminate the negative

Most training managers will know that measuring the effectiveness of training is a labour that would make Hercules pause for thought. Yet it is a task that all of them must face, as sound evaluation is essential to the success of most training programmes.

Managers need to know whether new knowledge and skills have resulted in improved practice, and trainers need to understand how effective their delivery has been. Without this, there is no basis on which to improve.

Equally, if finance directors fail to get a sense of the value of training, they will struggle to know how much to invest, or whether processes should be more cost efficient.

The Chartered Institute of Personnel and Development’s (CIPD) 2005 training and development survey found that 87% of organisations used feedback from participants, making it the most common method of evaluation, but only 42% took this further by analysing changes in individual performance or career progression.

There is only so much value a feedback form at the end of a training session is going to provide, so what should organisations be doing to measure effectiveness more accurately?

Danielle Durocher is the global process lead on workforce development at IT giant Hewlett-Packard. She says measuring learning must go beyond the traditional approach that most organisations focus on.

“Transcending the organisation to focus on strategic measures requires a top down methodology that cascades throughout the organisation,” she says.

Training can be measured in business terms if goals are created that are linked to strategic performance. “Training data in the traditional sense focuses on consumption and utilisation. But strategic goals, such as time to market, increased revenue, increased profitability, customer satisfaction, customer retention and employee retention, need to be measured,” Durocher says.

Because training is essentially about people development, it needs to take into account not just scientific, quantitative measures, but also more subjective, qualitative measures.

Godfrey Owen, chief executive at training company Brathay, says: “People often have an emotional reaction to training that is difficult to quantify, but it is an important effect that needs to be taken into account.”

One of the most famous models for measuring effectiveness was formulated by US training expert Donald Kirkpatrick in 1959. “This four-step model is what we all live by now and it has driven all the thinking on the subject since the paper was written,” says Martyn Sloman, CIPD adviser for learning, training and development.

The first stage of Kirkpatrick’s model is ‘reaction’. This basically means measuring how happy those being trained are with their teaching.

The second level is ‘testing’, which involves finding out what was actually learned.

The third level is more complex and is labelled ‘transfer’. The model suggests assessing students’ behaviour in the workplace to ascertain whether training has improved their performance at a more permanent, behavioural level.

Keith Dixon, coach at training provider Academee Learning Solutions, says it is more difficult to assess at levels two and three, but suggests using 360-degree feedback questionnaires. “If run regularly, questionnaires help to identify changes made at these levels,” he says.

The final level of Kirkpatrick’s model is called ‘organisational results’. According to Dixon, success at this level is particularly compelling for organisations. “We have entered into agreements with several clients so that part of our remuneration will be dependent on this. For two of our clients, we agreed to take 80% of our fee on delivery of the training, with an increase to 105% if it was shown that our intervention had an impact on their financial success.”
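Kirkpatrick’s four levels can be thought of as a checklist: an evaluation is only complete when evidence has been gathered at each one. The sketch below is a minimal, hypothetical illustration of that idea in Python; the level names follow the article’s description, and the evidence examples are invented.

```python
# Kirkpatrick's four levels as described in the article, in order.
KIRKPATRICK_LEVELS = (
    "reaction",                # are trainees happy with the teaching?
    "testing",                 # what was actually learned?
    "transfer",                # has workplace behaviour improved?
    "organisational results",  # did the business benefit?
)

def evaluation_gaps(evidence: dict) -> list:
    """Return the levels for which no evidence has been gathered yet."""
    return [level for level in KIRKPATRICK_LEVELS if level not in evidence]

# A hypothetical organisation that stops at feedback forms and quizzes:
collected = {"reaction": "feedback forms", "testing": "end-of-course quiz"}
print(evaluation_gaps(collected))  # ['transfer', 'organisational results']
```

As Sloman notes below, most organisations would report gaps at the third and fourth levels.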

But Sloman says that most organisations fail to get past level two, as evaluation suddenly becomes too time-consuming.

“Factually, we know from every poll and CIPD survey that the only stage of measuring training that is done extensively by firms is stage one – measuring reaction. But we all hold our heads in horror and say why do we not do more?

“But, what matters is alignment of training and learning interventions. By getting obsessed with levels and stages, we may be missing this more important problem. Measuring effectiveness has to be a far more diffused process done over time. Focusing on different levels may not be the answer,” he says.

Sloman says Kirkpatrick’s model also falls down because it fails to measure return on investment (ROI). A standard formula for calculating ROI as a percentage is:

ROI = (total benefit – total costs) ÷ total costs x 100
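The formula is straightforward to apply once the figures are agreed. A minimal sketch in Python, using invented figures purely for illustration:

```python
def training_roi(total_benefit: float, total_cost: float) -> float:
    """Return ROI as a percentage: (benefit - cost) / cost * 100."""
    if total_cost <= 0:
        raise ValueError("total_cost must be positive")
    return (total_benefit - total_cost) / total_cost * 100

# A hypothetical course costing 20,000 that yields 26,000 of
# measurable benefit returns 30%:
print(training_roi(26_000, 20_000))  # 30.0
```

The hard part, as Dixon points out next, is not the arithmetic but deciding what counts as a cost or a benefit in the first place.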

But it can be difficult to identify costs and benefits that cannot be measured financially. “If someone is sent on a training course, there might be a value simply because it is a recognition of their worth to the company. This has an impact on morale that may not be measurable immediately in pounds and pence but is, nonetheless, important,” says Dixon.

To make this calculation more effective, Barbara Greenway, former managing director of Parity Training, recommends working through four clear steps before applying it.

  • Step one is to define objectives so that the reasons and goals for the training are clear.
  • Step two is skills assimilation, where Greenway suggests taking tests three months after training to ascertain what has been learnt.
  • Step three is cost itemisation, which involves breaking down the costs of training. These include trainer development, programme materials, instructor, facilities, expenses and administration. The cost of the delegate’s time should also be taken into account, including the potential loss of earnings from attending the course.
  • Step four is benefit evaluation, which should include time savings, improved productivity, labour savings, improved quality and better morale.

Only after this should you attempt to use the ROI formula, says Greenway.
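Greenway’s cost itemisation and benefit evaluation steps amount to building two lists of line items before dividing one by the other. The sketch below shows the shape of that calculation; every figure and line item value is invented for illustration, though the categories follow those listed above.

```python
# Step three: itemise the costs of training (hypothetical figures).
costs = {
    "trainer_development": 2_000,
    "programme_materials": 1_500,
    "instructor": 4_000,
    "facilities_and_expenses": 2_500,
    "administration": 1_000,
    "delegate_time": 5_000,  # lost earnings while attending the course
}

# Step four: evaluate the benefits (hypothetical figures). Improved
# quality and better morale are real benefits but hard to price, so
# they are left out rather than given an invented monetary value.
benefits = {
    "time_savings": 6_000,
    "improved_productivity": 9_000,
    "labour_savings": 3_000,
}

total_cost = sum(costs.values())
total_benefit = sum(benefits.values())
roi = (total_benefit - total_cost) / total_cost * 100
print(f"ROI: {roi:.1f}%")  # ROI: 12.5%
```

Note how the unpriced benefits make the computed figure a floor, not a ceiling, which is one reason qualitative measures still matter alongside the formula.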

Another tool that can be used as part of the wider evaluation of training is a broad-based balanced scorecard.

Brian Sutton, director of learning at training provider QA, says this is a method his team uses.

“Rather than looking for evidence of success in the training event, we look for it both in the workplace and in the wider organisation,” he says.

Calculating ROI is never going to be a simple process, but it is undoubtedly an important one. There may be no easy short cuts, but being able to justify your training spend with confidence should be well worth the effort.

Tips for measuring ROI

Tony Dunk, principal of training consultancy CDA, outlines some tips for measuring ROI:

  • The need for training must be recognised by all stakeholders – including the board, line managers, HR and those being trained
  • Clearly define the strategic business objectives of the training
  • Agree a set of approved benchmarks for measuring achievement
  • Throughout the training, evaluate the skills learned using the agreed benchmarks
  • Following training, monitor whether the skills learned are sustainable and have met business objectives.

Case study

Mobile phone company O2 has a specialist retention team in Bury, Lancashire. Its task is to minimise attrition by talking to customers who phone in to transfer their number. It used a training programme to provide customer advisers with the skills needed to convince customers to stay with O2.

The programme started with a workshop for line managers and team coaches. Advisers then took part in a distance learning activity running over eight weeks; an interactive phone quiz that tested the knowledge they had acquired; and three one-hour motivation sessions where advisers looked closely at what they were being asked to do and reviewed how best to do it.

The effectiveness of the training was measured in the following ways:

  • Learner reaction was collected through interviews with advisers, team leaders and managers at the start and the end of the programme
  • Impact on knowledge was assessed through the results of the weekly phone tests
  • Impact on behaviour and individual/team performance was measured through performance assessment forms completed by team leaders at the start and end of the programme
  • Impact on the organisation’s performance was measured through:
    – Customer retention rate for the centre, measured on a minute-by-minute basis, which showed a 9% improvement during the programme
    – Customer satisfaction – the centre recorded the highest ever satisfaction score at the end of the programme.
