When it comes to multimedia courseware, trainers are spoilt for choice. But how can they be sure that the programme they select will really deliver? Robert McLuhan looks at two contrasting approaches to courseware evaluation
More and more organisations are opting for e-learning solutions, attracted by their ability to skill up large numbers of employees at a fraction of the cost of classroom training. That has brought a proliferation of providers: every day more companies enter a market that IDC estimates will be worth $11.4bn (£7.6bn) by 2003. But the quality of the material on offer is by no means even.
NETg, one of the largest suppliers, characterises much of what is currently available as entertainment, packed with games-oriented video and animation sequences that do little to transfer skills.
The other type to avoid is the course that simply presents subject matter as a page-turning exercise. Only programmes that genuinely engage the user can have real value, it argues.
To help trainers sort the wheat from the chaff, the company has come up with a Windows-based evaluation tool. The software is free and is being distributed through the global Information Technology Training Association (ITTA) as a contribution to creating industry-wide standards.
Eventually the company hopes that the UK's Institute of IT Training and other training organisations around the world will follow suit.
The name of the programme is ECG, suggested by the similarity of the graphs it produces to medical electrocardiograms. The tool plots levels of engagement and interactivity in a course by quantifying the use of simulations and opportunities for feedback.
Effectiveness
Simulations and opportunities for feedback push a course's rating up. Conversely, reliance on quizzes that demand only undemanding "yes-no" or multiple-choice responses indicates a low level of engagement. The resulting graphs give training professionals an instant visual comparison of the likely effectiveness of the various courses under consideration.
To save time the tool can be run through selected pages, on the assumption that they will probably be representative of the course as a whole.
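To make the idea concrete, here is a minimal sketch of how such an engagement profile might be computed. The article does not spell out how ECG actually scores a course, so the page model, element types, weights and sampling step below are illustrative assumptions, not NETg's algorithm.

```python
# Hypothetical sketch of an ECG-style engagement profile.
# The weights and element types are illustrative assumptions only.

from dataclasses import dataclass

# Assumed weights: active elements score high, passive ones low.
WEIGHTS = {
    "simulation": 3,       # hands-on practice
    "feedback": 2,         # tailored response to the learner's input
    "multiple_choice": 1,  # undemanding yes-no / multiple-choice quiz
    "static_page": 0,      # page-turning: text or video, no interaction
}

@dataclass
class Page:
    title: str
    elements: list  # element types found on the page

def engagement_score(page: Page) -> int:
    """Sum the weights of the interactive elements on one page."""
    return sum(WEIGHTS.get(e, 0) for e in page.elements)

def profile(pages: list, step: int = 1) -> list:
    """Score every `step`-th page; sampling saves time on long
    courses, assuming sampled pages represent the whole."""
    return [(p.title, engagement_score(p)) for p in pages[::step]]

if __name__ == "__main__":
    course = [
        Page("Intro", ["static_page"]),
        Page("Concepts quiz", ["multiple_choice"]),
        Page("Practice lab", ["simulation", "feedback"]),
    ]
    for title, score in profile(course):
        print(f"{title:15} {'#' * score}")  # crude text 'graph'
```

Plotting such scores page by page would yield the flat-line versus peaked traces that give the tool its electrocardiogram nickname.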
“People learn better when they actually have the opportunity to be engaged in an activity rather than simply being fed a piece of information,” points out Pam Burton, NETg’s director of global marketing. “We believe that is a valid measure of how much interactivity there is.”
Burton says, “Our objective is to help educate people who are evaluating technology-based training or who are trying to come up with an e-learning strategy. We are very committed to driving standards within the industry.”
Burton concedes that software alone will not achieve the same result as a manual evaluation. It is intended as a starting point for buyers coming to the market relatively unprepared, who may be vulnerable to products that are superficially attractive but have little value. She says, “What we are providing is one more tool in the buyer’s toolbox and certainly there will be other factors to consider, for instance the quality of the after-sales service.”
Just how much more there is for trainers to get to grips with before making a purchase is made clear by Xebec McGraw-Hill, which recently published comprehensive guidelines on the issues involved in choosing technology-based courseware.
“E-learning companies are popping up every day of the week and there is very little that helps buyers choose between them,” says senior flexible learning consultant Tim Drewitt. “Organisations either do it themselves and hope for the best or else they rely on advice from resellers.”
Criteria
While Drewitt welcomes any service that helps buyers make an informed choice, he argues that it should be based on a number of criteria. Xebec McGraw-Hill’s book, Quality Standards for Evaluating Multimedia and Online Training, outlines four main stages in checking the usefulness of a package.
“First you need to be sure the product matches the organisation’s needs,” Drewitt explains. “That means asking whether the target audience described by the vendor actually fits the profile of your users, and whether the course’s stated objectives are the same as yours.
“Also, will the product work on your technology platform, and do in-built features such as progress tracking actually do what you want them to?”
The second step is to review the content for accuracy, depth and clarity. This means judging whether the material is pitched at the right level for learners’ existing skills and for their roles in the organisation.
Then there is the question of usability. Trainers will need to know whether the course is easy to install, runs smoothly, and provides clear and consistent instructions. “You want it to be intuitive, so that it is easy to spot what will happen when you click on something,” Drewitt says.
Only in the final stage do the guidelines tackle the area addressed by NETg’s ECG tool: instructional design. Questions to ask at this stage would include:
– Are the objectives presented at the beginning?
– Is the course structured in an appropriate sequence?
– Are there sufficient examples to reinforce learning points?
“You want to know whether the students are actively involved in the learning methods used,” explains Drewitt. “Are they given choices of learning methods, and does the course provide realistic opportunities for practice in different scenarios?”
Non-threatening
Trainers would also want to ensure that feedback and guidance are offered at every stage in a timely, relevant and non-threatening manner.
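A buyer could capture these four stages as a simple working checklist, sketched below. The book itself is far more comprehensive; the structure and question wording here are a compressed assumption based only on the points Drewitt raises above.

```python
# Compressed sketch of the four evaluation stages described above.
# The full Xebec McGraw-Hill guidelines contain far more criteria;
# this structure and wording are a simplified assumption.

CHECKLIST = {
    "1. Fit with needs": [
        "Does the vendor's target audience match your users' profile?",
        "Do the course's stated objectives match your own?",
        "Will the product work on your technology platform?",
        "Do in-built features such as progress tracking do what you want?",
    ],
    "2. Content": [
        "Is the material accurate, deep and clear?",
        "Is it pitched at the right level for learners' skills and roles?",
    ],
    "3. Usability": [
        "Is the course easy to install, and does it run smoothly?",
        "Are instructions clear, consistent and intuitive?",
    ],
    "4. Instructional design": [
        "Are objectives presented at the beginning?",
        "Is the sequence appropriate, with enough reinforcing examples?",
        "Are learners actively involved, with realistic practice?",
        "Is feedback timely, relevant and non-threatening?",
    ],
}

def outstanding(answers: dict) -> list:
    """Return every question not yet answered 'yes' (True)."""
    return [q for qs in CHECKLIST.values() for q in qs
            if not answers.get(q, False)]
```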
Guidelines such as these have the advantage of taking much more into account than a software tool can realistically accommodate. The downside is that they demand a willingness on the buyer’s part to make conscious judgements, rather than having the process carried out automatically.
Ultimately, Drewitt says, the value of any aid lies in drawing up a short-list of likely contenders. Other considerations, such as cost, make it difficult to base a decision on quality alone.
“At the end of the day, the best test of the quality of the product is the user,” he suggests. “If staff can’t get round the course they will soon tell you. But if they are enjoying it, that’s a benchmark for you: when you buy a product in the future you know your business likes this particular brand.”
As course evaluation becomes a growing concern, aids such as these are likely to become increasingly available. But no single approach seems to cover every aspect a purchaser needs to know, which means buyers will still have to rely on their own understanding of what makes for successful training.