Digital learning systems have begun to make deep, qualitative learning theories practical at scale. The industrial model of training and evaluation is a compromise between results and practicality, but with smart systems, those complex and rich views of training can come to life.

Kirkpatrick’s Vision
One of those deep theories is Donald Kirkpatrick’s model of course evaluation. This methodology for evaluating the effectiveness of training has been incredibly popular, but implementing it in the truest sense is nearly impossible by manual means alone.
The Kirkpatrick Model is simple in principle. There are four levels of evaluation:

1. Reaction — how trainees feel about the training.
2. Learning — what knowledge and skills they actually acquired.
3. Behavior — whether they apply what they learned on the job.
4. Results — whether the training made a measurable real-world difference.
To illustrate how xAPI and an LRS can make Kirkpatrick’s model come to life, we’ll frame it within the context of a case. An example that fits well is sales training.
You have a team of salespeople who need to go through a training course where they’ll learn a new set of advanced sales techniques.
We don’t just want to know that they’ve made it to the end of the course. We want to know how much they’ve absorbed, how well the course is working and ultimately whether it actually made a positive difference or not in the real world.
So how do we use xAPI and Kirkpatrick’s model to answer these questions?
In xapiapps we create a training “pathway”, which is the sequence of steps that make up the training course as a whole. Using a tool called the Learning Experience Builder we can explicitly create this pathway, which acts as a literal guide to both the software and the trainee.
Name the learning experience “Sales Training” and then build the path in a way that reflects the Kirkpatrick evaluation process.
It stands to reason that if a person enjoys the training they are undergoing, they’ll be more engaged, and therefore more likely to retain and master the content. Not to mention that they’ll have positive associations with what they’re learning!
So it’s important that we take the time to get a sense of their simple, emotional response to the training. In the Kirkpatrick model this is the first level of evaluation: the trainee’s pure reaction to the training course, as shown below.
In a pre-digital world we would have used something called a smile sheet, handed out to participants during training. They’d fill in their responses and then that information would be used to improve future courses.
With xapiapps you can slot in an instant feedback survey anywhere in the learning path you need it. You can even use that feedback to immediately intervene in that course. There’s no need to wait until everyone’s gone through the course before catching a problem and dealing with it.
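Under the hood, a survey response like this is typically recorded as an xAPI statement sent to the LRS. Here is a minimal sketch in Python; the actor, activity ID, and rating are hypothetical placeholders, while the verb IRI and statement structure come from the standard xAPI/ADL vocabulary:

```python
import json

# A Level 1 ("Reaction") xAPI statement: a trainee responds to a feedback survey.
# Actor, activity ID, and response value are illustrative, not real data.
statement = {
    "actor": {
        "mbox": "mailto:trainee@example.com",
        "name": "Example Trainee",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/responded",  # standard ADL verb
        "display": {"en-US": "responded"},
    },
    "object": {
        "id": "http://example.com/activities/sales-training/reaction-survey",
        "definition": {"name": {"en-US": "Sales Training Reaction Survey"}},
    },
    "result": {
        "response": "4"  # e.g. 4 out of 5 on a satisfaction scale
    },
}

# In practice this JSON would be POSTed to the LRS's statements endpoint.
print(json.dumps(statement, indent=2))
```

Because every response lands in the LRS as it happens, a run of low ratings can trigger that immediate intervention rather than waiting for an end-of-course report.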
The next level is to determine whether learning is actually taking place. This is, as you can imagine, an essential piece of the training puzzle. After all, if your trainees learn nothing during training, there was little point in the exercise. Learning is not the end of the process, but it is a crucial step.
Using the Experience API, we can intersperse measurements of learning at any point in the path, stopping to take stock of progress. Usually this takes the form of an assessment package. Think of it as the exit exam, except with more depth and subtlety.
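To make that concrete, here is a sketch of how an assessment attempt might be recorded. The 80% mastery threshold and activity ID are assumptions for illustration, not xapiapps defaults; the `result.score.scaled` field and the `passed`/`failed` verbs are part of the xAPI specification:

```python
def assessment_statement(email: str, scaled_score: float, threshold: float = 0.8) -> dict:
    """Build a Level 2 ("Learning") xAPI statement for an assessment attempt.

    scaled_score follows xAPI's scaled-score convention (here in [0, 1]).
    The 0.8 threshold is an assumed mastery cut-off, not a spec-mandated value.
    """
    passed = scaled_score >= threshold
    verb = "passed" if passed else "failed"  # both are standard ADL verbs
    return {
        "actor": {"mbox": f"mailto:{email}"},
        "verb": {
            "id": f"http://adlnet.gov/expapi/verbs/{verb}",
            "display": {"en-US": verb},
        },
        "object": {"id": "http://example.com/activities/sales-training/assessment"},
        "result": {
            "score": {"scaled": scaled_score},
            "success": passed,
        },
    }

print(assessment_statement("trainee@example.com", 0.85)["verb"]["display"]["en-US"])  # passed
print(assessment_statement("trainee@example.com", 0.60)["verb"]["display"]["en-US"])  # failed
```

A "failed" statement is what would send the trainee back through that part of the course.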
If anyone doesn’t make it through this part of the evaluation, they go back and repeat that part of the course until they’re ready to move on. Once someone has finished the course, we wait a week and then evaluate the third level.
Why wait a week to move on with the evaluation? Simple: it takes into account the science of memory and how we forget. Mainly it’s aligned with the observations made by Hermann Ebbinghaus and the speed with which we forget things that aren’t reinforced.
If the level of engagement in the course was just right and you made an impact, then most of what trainees were meant to learn will have stuck.
How do we measure if the learning has stuck? We have someone perform an observation checklist. For example, a person’s direct line manager could complete an observation checklist, evaluating the trainee against a set of standardized performance measures for a given task.
The results of this will tell us how effective the training has been.
There’s no need to involve the trainee at all: the assessment is slotted into the training path and the manager in question is automatically sent the checklist. It’s all done quickly and easily on a smartphone or other compatible mobile device.
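One way such an observation might be expressed in xAPI is with the trainee as the actor and the observing manager recorded in the standard `context.instructor` field. The checklist items, activity IDs, and addresses below are hypothetical, made up purely to illustrate the shape of the statement:

```python
# A Level 3 ("Behavior") sketch: an observation checklist completed by a manager.
# Checklist items and all identifiers are illustrative assumptions.
checklist = {
    "Opened with a needs analysis": True,
    "Handled objections using the new technique": True,
    "Closed with a clear next step": False,
}

items_met = sum(checklist.values())
statement = {
    "actor": {"mbox": "mailto:trainee@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/scored",  # standard ADL verb
        "display": {"en-US": "scored"},
    },
    "object": {"id": "http://example.com/activities/sales-training/observation"},
    "context": {
        # The observer (line manager) goes in xAPI's standard instructor field.
        "instructor": {"mbox": "mailto:manager@example.com"}
    },
    "result": {"score": {"raw": items_met, "min": 0, "max": len(checklist)}},
}

print(f"{items_met}/{len(checklist)} checklist items met")
```

Recording the observer in the statement means the LRS later knows not just how each trainee scored, but who evaluated them and against what.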
The Power of Three
If the observation turns out well, the trainee in question is given the stamp of approval. Actually, they’re given a digital badge of approval, which then goes on their portfolio to show exactly what they’ve learned.
Job done, right? Except that you may recall Kirkpatrick’s model has four levels and the real magic only happens when you ascend to that degree of course evaluation.
You see, while all this trainee-centric activity was happening, xapiapps was sending data about the learning activities to a Learning Record Store (LRS).
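The xAPI specification gives the LRS a queryable statements endpoint, so all of that accumulated data can be pulled back out and aggregated. Here is a minimal sketch of the kind of Level 4 analysis this enables, using hardcoded placeholder records in place of a real LRS response:

```python
from statistics import mean

# Simplified records as they might be distilled from an LRS statements query.
# All values are fabricated placeholders to illustrate the aggregation.
records = [
    {"verb": "passed", "score": 0.91, "survey_rating": 5},
    {"verb": "passed", "score": 0.84, "survey_rating": 4},
    {"verb": "failed", "score": 0.55, "survey_rating": 2},
]

pass_rate = sum(r["verb"] == "passed" for r in records) / len(records)
avg_score = mean(r["score"] for r in records)
avg_rating = mean(r["survey_rating"] for r in records)

# Level 4 analysis would correlate figures like these with business outcomes,
# e.g. each salesperson's results before and after the training.
print(f"pass rate: {pass_rate:.0%}, avg score: {avg_score:.2f}, avg rating: {avg_rating:.1f}")
```

With the raw statements preserved in the LRS, the same data can be sliced by cohort, by manager, or by course version as new questions come up.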
Did trainees enjoy the course? Did they actually learn? Did their behavior change on the job, and did it make a positive difference in the real world? These and more questions can be answered with rock-solid evidence, allowing you to fine-tune your training courses so that they actually make things better and are not just assumed to do so.
For more on learning analytics and its role in modern instructional design, download the eBook ‘The LX Designer’s Handbook’ below: