When looking at the effectiveness of learning and development (L&D), there are typically three main areas to understand:
- How was the experience? Important for understanding ‘stuff’ which may be increasing cognitive load
- Was information successfully migrated to the learner? Important for understanding whether the key messages landed
- Did the activity subsequently impact performance? Ultimately, this is why we deliver training
Historically, training departments have tended to believe they are really good at measuring the experience, and to a degree the ‘migration’ of knowledge. But they’ve also believed that attributing any change in performance to L&D is very difficult.
As someone who specialises in data and operational performance, I have a different view. I’m on a mission to demystify data, and show you how easy it can be to measure it all – including performance.
I’ve worked as Head of L&D Insight for Sky, Europe’s largest broadcaster, and throughout my career have seen inside the training departments of many large organisations. I’ve noticed that, on the whole, L&D is really bad at measuring experience and pays lip service to measuring the acquisition of knowledge. Yet – frustratingly – attributing a positive change in performance to L&D activity is easier than most people realise!
One thing at a time
Given the belief that we’re good at understanding the learner experience, I want to share how to take a data driven approach to both improve the quality of any survey responses and reduce the effort in understanding them.
Typically, to understand the experience for learners, organisations will issue happy sheets – digital or paper surveys – after a training event. These will contain a series of questions along the lines of ‘What did you think of the room?’, ‘How could we do this better next time?’, ‘Did you enjoy the food?’, ‘How did you find the training?’ and so on.
As a data and efficiency guy, I have two problems with this line of questioning:
1. Organisations do not ask these consistently.
One week, for one course, we might ask ‘Did you enjoy the training?’; the next week, for a different course, we might ask ‘How did you find the eLearning?’ You cannot compare the answers to these questions across courses or over time, because you’re not comparing like with like. With these questions, we can never know if things are getting better or worse.
2. Organisations will either invite respondents to write a free text, open ended response, or pick a choice from a drop-down list.
Limiting respondents to a drop-down list may feel efficient, but any list can only contain our guesses of what is important to learners; it does not allow for anything of which we are not aware.
Free text, open-ended responses are fantastic… assuming you have a text analytics capability to remove bias from any interpretation of results; that your categorisations are consistent; and that you’re storing all the responses over time to understand shifting trends.
As an example, if we take the following piece of feedback: “The course was excellent, but I found the concepts difficult, the trainer was helpful, but the room was cold.”
By taking this approach we have made our life very difficult. Categorising this manually, we may interpret it as positive: it was excellent, and the trainer was helpful. But someone else reading it may think it’s negative: the concepts were difficult, and the room was cold.
From a data perspective, this is actually a neutral statement. The maths goes something like this: two positive mentions (‘excellent’, ‘helpful’) cancel out two negative mentions (‘difficult’, ‘cold’), so the sentiment score nets out to zero.
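That cancellation can be sketched with a naive lexicon-based scorer. This is an illustration only, not a real text analytics tool, and the word lists are my own assumptions:

```python
# Minimal lexicon-based sentiment scoring (illustrative word lists).
POSITIVE = {"excellent", "helpful", "great", "clear"}
NEGATIVE = {"difficult", "cold", "confusing", "boring"}

def sentiment_score(text):
    # Strip punctuation, lowercase, then count positive minus negative hits.
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

feedback = ("The course was excellent, but I found the concepts difficult, "
            "the trainer was helpful, but the room was cold.")
print(sentiment_score(feedback))  # 2 positives - 2 negatives = 0
```

A genuine text analytics capability does far more than this (negation handling, context, phrase-level sentiment), which is exactly why relying on free text is harder than it looks.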
It’s also hard to understand what is important to the learner, and what we need to change. OK, we see the room was cold, so the next time we run it we could turn up the heating. But what if everyone else was fine and we are now overcooking them? It’s very hard to take action on individual responses.
Once we get dozens, or hundreds, or thousands, or tens of thousands (depending on the size of your organisation) of responses like this, who has the time to read all of these? And what do they do with this information?
More for Less
Incredibly, we can overcome interpretive bias, understand what is actually most important to our learners, gather actionable feedback, and analyse tens of thousands of responses in the click of a finger – with no specialist tools – by asking just one very simple question:
‘Describe your learning experience in one word’
This question not only reduces effort for your learners to a single question, requiring a one-word answer, but it will also nudge your learners to confess what is most important to them.
In the example above, the response may be ‘Excellent’, ‘Helpful’, ‘Cold’, ‘Complex’, or something altogether different. If you see the same or similar word appearing multiple times in responses, then you’ll know what needs to change.
Furthermore, you can use a simple *lookup in Excel to automatically count the number of positive and negative responses supplied by your learners. In other words, you can easily see the proportion of positive and negative responses, track this over time to see if it’s getting better or worse, and understand specifically why.
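The lookup-and-tally step is straightforward in any tool. Here is a sketch in Python of the same idea (the lookup table and example responses are illustrative assumptions, not real survey data):

```python
# Lookup table mapping one-word responses to a sentiment category
# (illustrative entries; in practice you'd grow this list over time).
SENTIMENT = {
    "excellent": "positive", "helpful": "positive", "engaging": "positive",
    "cold": "negative", "complex": "negative", "boring": "negative",
}

# Example one-word responses from learners (made up for illustration).
responses = ["Excellent", "Cold", "Helpful", "Excellent", "Complex"]

counts = {"positive": 0, "negative": 0, "unknown": 0}
for word in responses:
    # Words not yet in the table fall into "unknown" for later review.
    counts[SENTIMENT.get(word.lower(), "unknown")] += 1

print(counts)  # {'positive': 3, 'negative': 2, 'unknown': 0}
```

The Excel equivalent is a lookup column against the same table plus a count of each category; the ‘unknown’ bucket tells you which new words to classify next.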
It’s a myth that has been around for far too long that learning analytics is difficult, complicated, or expensive. Understanding learner experience to optimise it is actually cheap, easy and can be deployed in less than a day.
About Derek Mitchell
Derek Mitchell is an independent consultant specialising in L&D optimisation and People Analytics research.
*For charitable organisations, I can provide a sentiment worksheet free of charge and talk you through how to use it (no technical knowledge required).