Learning at scale takes many forms, from offline, even offsite, training interventions to content delivered and completed entirely digitally. For a long time, however, the role of learning has been reduced to completing a module or mandatory attendance at a training program.
However, are these measures a true mark of success?
How can large organizations measure the reach and efficiency of their learning endeavors?
How can they use numbers and metrics to measure the return on investment (ROI) of their learning journeys?
How can they encourage employees to bring these key learnings into their work?
As Peter Drucker is often credited with saying, “If it cannot be measured, it cannot be managed.” The same should hold true for learning as well, and we are finding ways to let data and numbers do the talking. In this article, we put together eight aspects that can be tracked over time to understand the performance of your new or ongoing L&D projects.
At The Systems Level
1. Measuring The Frequency At Which New Systems Are Implemented
Peter Senge’s systems thinking approach is widely known yet rarely implemented. Indeed, several organizations start with a bottom-up approach to systems thinking, waiting for the details to fall into place before developing the framework that could support them. Tracking how often new systems, frameworks, and processes are actually put in place after a learning intervention is therefore a simple, observable signal of whether the learning is taking hold.
2. Habits At All Levels Are Formed And Broken At The Systems Level
In his bestselling book, Atomic Habits, author James Clear highlights how a change in identity is the key to changing habits. For example, saying, “I am trying to quit smoking” has a vastly different psychological impact on the quitter than saying, “I do not smoke.” This subtle shift can also be used as a way to qualitatively measure how often learning and insights from these interventions are brought into the real world. It is up to each manager to subtly track the changes in behavior they notice as a result of the learning intervention.
Indeed, if people have learned something of value, in whatever circumstance, they are likely to bring it up in conversations, meetings, and other areas of work. Most learning organizations today recognize that the best way to measure the effectiveness of a learning program is highly qualitative, and hence focus on building the systems required to form a powerful learning habit. The results are visible in big and small ways:
How often do workers Google information they do not understand?
How many high-value questions get asked in meetings?
How comfortable do people feel asking for and providing help in areas that are not strictly their own?
3. Benchmarking Against Preset Goals
When deploying a new learning intervention, it helps when organizations and their learning partners work closely to map out clear goals and outcomes. Asking, “Why do we need this intervention now?” and drilling down until at least three core reasons are found is the easiest, most effective way to measure success. Every goal also comes with a timeline; in our experience, learning interventions take at least six months to become established processes within the organizations we work with. Mapping each goal to a timeline tells management teams and their L&D partners when to measure for success, how to measure it, and the parameters by which a company-wide goal is considered achieved.
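To make this concrete, here is a minimal sketch, in Python, of how goals, baselines, and review dates might be recorded before an intervention begins. The structure, field names, and sample entries are our own illustration, not a prescribed format.

```python
# Illustrative only: one way to record intervention goals, the metric behind
# each goal, and when (and against what) success will be measured.
from dataclasses import dataclass
from datetime import date

@dataclass
class LearningGoal:
    reason: str        # answer to "Why do we need this intervention now?"
    metric: str        # what will actually be measured
    baseline: float    # the value before the intervention
    target: float      # the value at which the goal counts as achieved
    review_date: date  # when to measure, e.g. six months after rollout

# Hypothetical goals for a single intervention
goals = [
    LearningGoal("Reduce repeat support tickets", "tickets per customer per quarter", 3.2, 2.0, date(2023, 6, 30)),
    LearningGoal("Shorten onboarding", "weeks to full productivity", 10, 6, date(2023, 6, 30)),
    LearningGoal("Improve cross-team collaboration", "internal survey score", 20, 40, date(2023, 6, 30)),
]
```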
4. The Half-Life Curve
Analog Devices, a semiconductor manufacturer, implemented the half-life curve to measure learning efficacy as part of their total quality management (TQM) process. While it was initially used to track how quickly quality problems were reduced on the factory floor, the principles can be extended to other areas as well. What is a half-life curve? It is the time taken to achieve a 50% improvement in a process. For systems-level analysis, this comes closest to being a quantitative metric. The goals set above can be used to measure improvement over time, while the time itself can be used to measure the half-life.
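As a rough sketch of what this looks like in practice (the function and the measurement series below are hypothetical, not Analog Devices' actual method), the half-life can be read off a series of periodic measurements as the first period in which the gap between the baseline and the goal has closed by half:

```python
from typing import List, Optional

# Hypothetical sketch: the "half-life" of an improvement effort is the first
# period in which the gap between the baseline and the goal has closed by 50%.
def half_life(baseline: float, goal: float, measurements: List[float]) -> Optional[int]:
    initial_gap = abs(goal - baseline)
    for period, value in enumerate(measurements, start=1):
        if abs(goal - value) <= initial_gap / 2:
            return period          # e.g. months since the intervention began
    return None                    # a 50% improvement has not been reached yet

# Example: a defect rate measured monthly, starting at 10% with a goal of 2%.
monthly_defect_rate = [9.0, 8.0, 7.5, 6.5, 5.8]
print(half_life(baseline=10.0, goal=2.0, measurements=monthly_defect_rate))  # -> 5
```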
For example, the organizational goal from a learning intervention could be a reduction in customer churn from 20% YoY to 10% YoY as a result of a better after-sales process. Introducing a three-month learning intervention for the sales and post-sales teams and measuring churn at the six-month mark tells us how far churn has moved in that particular year, and how quickly. This can also be compared with the rates of module completion (detailed below) to tie KPIs to learning interventions.
5. Double-Loop Learning
When we first posted this HBR article on our LinkedIn page, it created a bit of a ripple internally. The question we asked was, “Which blind spots are you missing today?” Double-loop learning perfectly sums up the problems of a linear learning model and presents solutions. The linear model looks a bit like this:
Problem → Learning Intervention → Solution
However, a double-loop system works on the tenet that by the time we reach the solution stage, newer problems will have made themselves evident, and that a feedback system is the only way to achieve effective learning outcomes. Here is what it could look like:
Problem → Learning Intervention → Outcome → Learning From Outcomes → Solution
Looping in this way until we reach the desired solution can take several iterations and produce several intermediate outcomes, which turns a one-off intervention into a designed learning journey. This system is also a valuable metric of progress because it measures outcomes instead of simplistic, often theoretical solutions.
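As a schematic sketch only (the function names are placeholders, and real interventions are obviously not this mechanical), the difference between the two models is the feedback step that revises the problem definition itself:

```python
# Schematic sketch of double-loop learning: keep designing interventions,
# measuring outcomes, and feeding what was learned back into the problem
# definition until the outcome meets the goal. All callables are placeholders.
def double_loop(problem, design_intervention, run_and_measure, goal_met, learn_from):
    while True:
        design = design_intervention(problem)
        outcome = run_and_measure(design)
        if goal_met(outcome):
            return outcome                       # the "solution" stage
        problem = learn_from(problem, outcome)   # revise assumptions, not just actions
```

A single-loop version would stop after the first measurement; the loop back through learn_from is what makes the model "double-loop."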
At The Individual Level
6. Completion Rate Of The Material
While most of the measurements made at the systems level are large, complex, and hard to tie back to the learning solution, the individual-level metrics are far more straightforward and direct to measure. The first among these is the rate of completion of the learning material. Out of a potential pool of 10,000 employees, if 6,000 complete their non-mandatory training, that 60% completion rate gives the business a benchmark. The brief to the learning partner, and the resulting KPIs, could then ask for “70% of all employees to complete their non-mandatory training” as the new benchmark for learning success. For mandatory training, data on how close to the deadline employees log in can be used similarly. For example, when given a three-month timeline for completion, if too many people rush through the course in the last week, this is a clear indicator that the business needs more buy-in, and perhaps better content, to drive home the importance of learning.
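Here is a minimal sketch of both measures, assuming a hypothetical LMS export in which each record carries a completion flag and, for mandatory courses, how many days before the deadline the learner finished:

```python
# Hypothetical sketch: two simple metrics from an LMS export. The record
# format ("completed" flag and "days_before_deadline") is assumed here.
def completion_rate(records):
    completed = sum(1 for r in records if r["completed"])
    return completed / len(records)

def last_week_rush_share(records):
    finishers = [r for r in records if r["completed"]]
    rushed = sum(1 for r in finishers if r["days_before_deadline"] <= 7)
    return rushed / len(finishers)

records = [
    {"completed": True,  "days_before_deadline": 3},
    {"completed": True,  "days_before_deadline": 45},
    {"completed": False, "days_before_deadline": None},
]
print(round(completion_rate(records), 2))       # 0.67 -- two of three completed
print(round(last_week_rush_share(records), 2))  # 0.5  -- one of the two finished in the last week
```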
7. Movement In Job Performance KPIs
One of the clearest ways to measure the success of a learning solution is to tie it to job performance KPIs. Team managers, HR leaders, and even the employees themselves need to have an honest conversation while setting their performance metrics for the year. Leaders are perfectly placed to help find programs and learning solutions that best match these KPIs at the organizational level. There’s no better time than now to make learning worth so much more.
8. Identifying Drop-Off Points
As an extension of quantifying learning outcomes, identifying the most common drop-off points during a long learning intervention is immensely useful. If a drop-off point is common across the board, it gives your learning partner valuable information about what could have gone wrong and how to fix it. Sometimes, "dropping off" is a product of not feeling engaged enough, and a good learning solutions provider can work with you to identify how to make nuggets of content more engaging. Moreover, if several people are "dropping off" at a specific time of the year, it is worth considering whether this is the period when the pressure of business outcomes is at its peak, and restructuring your learning dates accordingly.
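As a small, hypothetical sketch of how such an analysis might start (the record fields are assumptions, not a specific LMS format), the most common drop-off module and the most common month of going inactive can be pulled from the non-completers:

```python
# Hypothetical sketch: find the most common drop-off module and the most
# common month of going inactive among learners who never finished.
from collections import Counter

def common_drop_off(records):
    stalled = [r for r in records if not r["completed"]]
    by_module = Counter(r["last_module"] for r in stalled)
    by_month = Counter(r["inactive_month"] for r in stalled)
    return by_module.most_common(1), by_month.most_common(1)

records = [
    {"completed": False, "last_module": "Module 4", "inactive_month": "March"},
    {"completed": False, "last_module": "Module 4", "inactive_month": "March"},
    {"completed": False, "last_module": "Module 7", "inactive_month": "December"},
    {"completed": True,  "last_module": "Module 9", "inactive_month": None},
]
print(common_drop_off(records))
# ([('Module 4', 2)], [('March', 2)])
```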
It is quite rare, even today, for an organization to think of Learning and Development as a measurable solution with a clear impact on employee morale, retention, growth, and even the bottom line itself. However, the pandemic has given us the time, and the perspective, to do just that. If you haven’t already, give your learning partner a call and find out how they can help you design effective learning solutions for your new business needs, and don’t forget to talk about the metrics!