Paul Kearns analyses the results of the recent self-test on measuring the return on investment, and offers a few hints on how to better assess the training spend
In the September edition of Training we featured a self-test questionnaire and survey to help readers gauge how they calculate their return on investment in training (known as ROI). It was intended to set a very high standard, so please don’t feel disappointed if you felt your score was low – just regard it as a great opportunity for improvement.
I follow the principle that as long as you measure your performance and produce a baseline before you learn, then you can check your performance improvement afterwards. (Have you tried doing the questionnaire again recently?)
Maybe the questionnaire was not scientific enough for you – it was not intended to be.
My view is that although measurement is important in training, it is not as important as getting the fundamental principles of training right.
The questions are really geared to checking the principles you follow, not only in training evaluation but training analysis and design as well.
I have given my own guideline "answers" to the questions below. You may not agree with all of them, but they should provide a great deal of food for thought.
Also, even if you just improve in one or two areas, that is still improvement and that is the main reason for producing the questionnaire (and any other evaluation tool).
You should note that we refer to "evaluation levels", which are:
- Level 1: reaction
- Level 2: learning
- Level 3: transfer/behaviour change and
- Level 4: business impact.
In this article we run through the questionnaire again, pointing out the likely pitfalls and offering comment on where some readers need to improve. The table opposite gives the scores in detail to allow readers to compare themselves to their peer group.
1 All our training is clearly linked to business objectives
Not as difficult a question as it may seem. Probably as much as 75 per cent of training is done simply because the business has to do it (eg safety, induction, systems, product knowledge). This is automatically done for an obvious business reason. The rest of training spend is discretionary and this is where problems arise. What was the business reason for your leadership programme (better leadership does not qualify as a business reason)? Any training for which you have identified a business improvement, in monetary terms, qualifies as a business reason and you should be able to produce an ROI for it.
The Training survey average of 7.41 suggests most trainers at least believe they are linking training to the business.
2 All our training has clearly defined learning/training objectives
Relatively straightforward, hopefully. Even coaching and mentoring exercises can establish some pretty clear learning objectives. This is basic good practice. If you gave yourself a low score, try to tighten up on this one straight away – it is a good discipline.
This should receive a confident 5 score – anything lower is of concern.
3 I can clearly define the difference between validation and evaluation
I’m sure we could all argue semantics here. For what it is worth, I regard levels 1-3 as validation only. They check that training delivered its training objectives, but will never tell you how much value training added. Only level 4 is evaluation, that is, it checks that the training delivered its business objectives. Evaluation is about putting a real value on training. The only true value I know is one that has a pound sign.
When most trainers talk about evaluation they usually mean validation – it is interesting that there is a discrepancy between here and question 13. The average score of 4.18 here contrasts with only 3.18 on question 13.
4 I know what the PDCA cycle is, how it generates continuous improvement and where evaluation fits into it
PDCA stands for Plan, Do, Check, Act. It is a model that has been around since the 1920s and Deming used it to great effect in quality management. It only works if you use improvement measures. A key part of it is the check stage. Here you check whether the planned improvement has happened and then feed back the results, good or bad. It is very similar to Kolb’s Learning Cycle. If you replace the word "check" with "evaluate" you have a perfect, simple and very powerful system for iterative, continuous improvement and learning. If you have never tried it out, try it tomorrow.
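The loop is simple enough to sketch in a few lines of code. Everything below – the function names, the sales score and the figure of two points per round of coaching – is invented purely to illustrate the mechanics of measuring against a baseline and feeding the results back in:

```python
# Minimal sketch of the Plan-Do-Check-Act cycle as a feedback loop.
# All names and figures are hypothetical illustrations.

def pdca(baseline, target, intervene, measure, max_rounds=5):
    """Repeat Do/Check until the planned improvement (target) is met."""
    score = baseline
    rounds = 0
    while score < target and rounds < max_rounds:
        intervene()        # Do: deliver the planned training
        score = measure()  # Check: compare performance with the baseline
        rounds += 1        # Act: feed results, good or bad, into the next plan
    return rounds, score

# Toy example: each round of coaching lifts a sales score by 2 points.
state = {"score": 10}
rounds, final = pdca(
    baseline=10,
    target=16,
    intervene=lambda: state.update(score=state["score"] + 2),
    measure=lambda: state["score"],
)
print(rounds, final)  # 3 16
```

The point of the sketch is that without the baseline (the starting score of 10) and the check step, the loop has nothing to compare against and cannot tell you whether the intervention worked.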
The survey showed an average 4.47 – but all trainers should aim for a score of 10.
5 I know what a feedback loop is and how it encourages learning
This follows on from the PDCA cycle. Without feedback loops, the organisation does not know what is working and what isn’t. Do you have effective feedback loops on all training? It is just as important to feed back when something isn’t working as when it is. We learn from our mistakes.
Interesting that this scores higher than question 4, even though they are basically the same concept.
6 I can distinguish between basic training and improvement (added value) training
Basic training is training that the organisation needs just to stay in operation. Think of airline pilot training – the airlines could not operate without it. The only way to produce an ROI for basic training is to think negatively (how many planes would crash if we didn’t train the pilots properly?). Added value training identifies an improvement gap. So, for example, you train to improve sales and profit. This training is easy to do ROI calculations for.
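The arithmetic itself is straightforward: ROI is the monetary benefit net of the training cost, expressed as a percentage of that cost. The sales programme and figures below are invented for illustration, not taken from the survey:

```python
def training_roi(benefit: float, cost: float) -> float:
    """Return on investment as a percentage: (benefit - cost) / cost * 100."""
    if cost <= 0:
        raise ValueError("cost must be positive")
    return (benefit - cost) / cost * 100

# Hypothetical sales programme: a £12,000 spend yields £30,000 extra profit.
roi = training_roi(benefit=30_000, cost=12_000)
print(f"ROI: {roi:.0f}%")  # ROI: 150%
```

Note that the calculation only works if the benefit has been identified in monetary terms up front – which is exactly the point of distinguishing added value training from basic training.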
7 All our added value training is subject to an ROI calculation
You can only score well on this if you scored well on Question 6.
Our survey showed a very low score of 1.15 – trainers need to be more up to speed with ROI – their businesses are demanding to know.
8 Reactions to all training are measured
Straightforward Level 1 happy sheets will do for a maximum score. Even sampling will qualify for a maximum score. I never set too much store by happy sheets, but they can still provide some useful indications.
9 Tests are used after all training to measure how much trainees have learned
Level 2. If you take training seriously then some testing should always take place. However, I will be the first to agree that testing is often highly contentious. Nevertheless, we do it when we have to for legal reasons, so why not all the time?
The survey showed quite a low score of 2.18 with some zeroes – this needs to be improved, otherwise trainees do not take the training seriously.
10 Observations are made to check how much learning is being applied in the workplace
Level 3, the most time-consuming level. Very meaningful, but can be very disappointing when you see very limited transfer to the workplace. Sampling is highly recommended and this would qualify for a maximum score.
I think 2.24 is high enough – it would be easy to get bogged down in level 3.
11 We evaluate the impact of all training on the organisation
I mean evaluation, but for basic training validation will suffice. I don’t know any organisation that does this 100 per cent, so if you gave yourself a maximum score you’re the tops – or bending the truth.
The survey’s range of 1 to 8.5 suggests wide variation in the amount of evaluation taking place.
12 We do not design any training until we have already established the evaluation measures to be used
For me, this is a crucial question. The biggest problem that trainers have when they first try to evaluate is that they don’t realise they must establish the measures before the training is designed. Trying to produce measures afterwards, without baseline measures, is a rather pointless exercise. Again, validation measures will suffice for basic training.
The survey’s 3.18 average score suggests that evaluation considerations have moved up the agenda, even if some people still do not include it at all in the design phase.
13 I can clearly define added value
Actually, a great deal simpler than you might have thought, even though added value can be a slippery concept. If you make widgets you can only add value by increasing widget production/sales (without increasing average cost), reducing the average cost of widgets, or charging a higher price. I would also accept an improvement in the quality of widgets in the belief that this will feed through to lower costs or higher prices or higher sales. Obviously, the same applies just as much to the provision of services and to the public sector, although they rarely have an opportunity to dictate prices or charges. If you defined added value in terms of creativity and innovation, you are probably right. But these things will only really add value if they produce the improvements cited above. Added value always, always, always (is that enough emphasis?) has a pound sign attached.
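To make the widget arithmetic concrete, here is a worked sketch of the three routes. All the figures (1,000 widgets at £5.00 each, costing £4.00 to make) are hypothetical, chosen only so that each route happens to add the same £200:

```python
# Hypothetical widget business: baseline volume, price and unit cost.
units, price, unit_cost = 1_000, 5.00, 4.00

baseline_profit = units * (price - unit_cost)   # £1,000

# Route 1: sell more widgets at the same average cost.
more_sales = 1_200 * (price - unit_cost)

# Route 2: reduce the average cost of widgets.
lower_cost = units * (price - 3.80)

# Route 3: charge a higher price.
higher_price = units * (5.20 - unit_cost)

for label, profit in [("more sales", more_sales),
                      ("lower cost", lower_cost),
                      ("higher price", higher_price)]:
    print(f"{label}: added value = £{profit - baseline_profit:,.0f}")
```

Whichever route the training targets, the added value is always the change in profit against the baseline – a figure with a pound sign, as the article insists.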
14 The amount of money we spend on training could not achieve a greater return if used elsewhere in the organisation
All added value training should produce an ROI greater than any other investment, and in practice you will often find this is easier to achieve than you might think. Effective, added value training generates significant returns (100 per cent at least). All basic training, assuming you validate its effectiveness, should be delivered as efficiently as possible.
Regardless of the actual scores, trainers need to give a very confident answer to this question, otherwise the business loses faith and you lose credibility.
15 We could not achieve a greater return on our training investment than we currently do
You might want to change your answer to this one having seen all the answers above. You can only attempt to answer this if you have a very good evaluation/ROI system in place.
It is interesting that the survey’s actual average score of 2.26 is less confident than 3.21 for question 14.
Paul Kearns is senior partner at Personnel Works and an authority on training and development measurement and evaluation. The practical tools he has developed are explained in several titles in the Financial Times/Prentice Hall Management Series on www.business-minds.com. Kearns can be contacted at Personnel Works, PO Box 109, Bristol BS9 4DH, Tel 0117 914 6984