L&D’s relationship with Data

This title has been taken from the Towards Maturity report published in August of this year. I was immediately drawn to the title and anyone who knows what a maths geek I am will understand why.

I have always loved numbers and even just playing with them, multiplying numbers by themselves repeatedly just for fun! Yes, I know it’s not normal, and I appreciate that not everyone shares the same love of numbers; in fact, quite the opposite. I have several friends who will admit that numbers are almost a phobia for them.

Reading this report, it is quite evident that we in L&D are not great at collecting and using data to its best advantage. Some of the figures that struck me were:

  • Of those aspiring to use data to effect change, only two in five (40%) were able to say that it helped them demonstrate business impact
  • Only three in five (60%) were successful in using data to help solve business problems

Bear in mind that this was from a sample size of 700+, and the two figures above relate to the people who were really trying to use data effectively. In reality there will also be a number of people not even trying, so the 40% and 60% are likely to be very optimistic figures.

The most likely reasons cited were:

  1. Data and its analysis are complicated
  2. Lack of L&D skills in this area

Let me take the second point first: why are we not addressing this lack of skills? Is it this phobia of numbers? A fear of what to do once you start collecting? An expectation that things have to change once you start collecting data effectively? Maybe it’s a combination of all three? Or maybe a misconception around what it means to collect and analyse data?

For me it is quite simple (and this may address the first point). In L&D we need to get nosey. When someone asks us to deliver a leadership programme, we need to ask why, and how they will know it has been successful. If the first person you ask doesn’t know, then ask someone else. Is it a real need or a perceived need?

The perceived need may be something like employee engagement scores being low. What we really need to determine is what effect that is having on the performance of the business:

  • High recruitment costs?
  • Lack of agility in the marketplace, because a high attrition rate means staff are not as familiar as they should be with products?
  • Poor customer service because the tools they use have had little investment?

So when you look at these examples, you can start to see that it really is not about data analysis, but about curiosity, perseverance and a healthy dose of scepticism. If you can pinpoint what the problem is and it is a real business need, then what you need to measure will be very obvious:

  • Reduction in recruitment costs
  • Reduction in time to market with new products
  • Range of new products and uptake
  • Attrition rate
  • Customer satisfaction scores
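
As an aside, even these business measures need nothing more than simple arithmetic. Here is a minimal sketch in Python (my choice of language, purely for illustration), with invented figures, for two of the measures above:

```python
# A minimal sketch with invented figures: two of the business measures above,
# calculated with nothing more than simple arithmetic.

# Attrition rate = leavers in the period / average headcount
leavers = 18
average_headcount = 120
attrition_rate = leavers / average_headcount * 100
print(f"Attrition rate: {attrition_rate:.1f}%")        # 15.0%

# Recruitment cost saving = hires avoided x average cost per hire
hires_avoided = 6                # fewer replacements needed once attrition fell
average_cost_per_hire = 4500     # in pounds, invented for illustration
saving = hires_avoided * average_cost_per_hire
print(f"Recruitment cost saving: £{saving:,}")         # £27,000
```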

These are not L&D statistics; these are business measures, and having highlighted the purpose of your L&D focus, the business will also want to measure them. That is not to say that at times you will not be needed to do some data collection and analysis, but I think we are over-complicating it and not getting to the nub of the problem.

In my book (currently in first draft), “How Not to Waste Your Money on Training”, I will show people simply how to “find the story in the data”. Using a simple example of a scoring grid, I will show how you can, using a spreadsheet and playing with different graph types, discover little parts of the truth about what is going on. It takes a click and a small amount of curiosity. If you want to try it out, then just use this example, playing around in Excel with different types of charts:

  • Bar chart
  • Pie chart
  • Stacked bar
  • Spider diagram

For each format, ask yourself “what do I see now?” Using this approach of curiosity and play, I discovered:

  • A bar chart gives a good comparison of one person against another for each part of their role
  • A spider diagram shows how well-rounded each team member is in their own right. Some are not rounded at all! Tracy seems the most well-rounded.
  • A stacked bar chart shows the team’s strengths and weaknesses:
    • Who is the strongest in sales skills?
    • Who is the weakest in product knowledge and working independently (why might this be? Manager poor at delegating?)
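
If you would rather script it than click around in Excel, here is a minimal sketch of the same idea in Python with pandas and matplotlib (my choice of tools for illustration, not the book’s example). The names, skill areas and scores are all invented:

```python
# A minimal sketch, assuming a made-up scoring grid (names, skill areas and
# scores are all invented). It plots the grid as a grouped bar chart, a
# stacked bar chart and a spider (radar) diagram.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical scores out of 5 for each person against each part of their role
scores = pd.DataFrame(
    {
        "Sales skills": [4, 2, 5, 3],
        "Product knowledge": [3, 2, 4, 4],
        "Customer service": [5, 3, 4, 2],
        "Working independently": [2, 1, 5, 3],
    },
    index=["Alan", "Bea", "Tracy", "Dev"],
)

fig = plt.figure(figsize=(15, 4))
ax_bar = fig.add_subplot(1, 3, 1)
ax_stack = fig.add_subplot(1, 3, 2)
ax_spider = fig.add_subplot(1, 3, 3, polar=True)

# Bar chart: compare one person against another for each part of the role
scores.plot(kind="bar", ax=ax_bar, title="Person vs person")

# Stacked bar: the team's overall strengths and weaknesses per skill area
scores.T.plot(kind="bar", stacked=True, ax=ax_stack, title="Team strengths and weaknesses")

# Spider diagram: how well-rounded each person is in their own right
angles = np.linspace(0, 2 * np.pi, len(scores.columns), endpoint=False).tolist()
angles += angles[:1]  # repeat the first angle to close each polygon
for person, row in scores.iterrows():
    values = row.tolist() + [row.iloc[0]]
    ax_spider.plot(angles, values, label=person)
    ax_spider.fill(angles, values, alpha=0.1)
ax_spider.set_xticks(angles[:-1])
ax_spider.set_xticklabels(scores.columns)
ax_spider.set_title("How well-rounded is each person?")
ax_spider.legend(loc="upper right", bbox_to_anchor=(1.4, 1.1))

# A pie chart of one person's scores is just: scores.loc["Tracy"].plot(kind="pie")
plt.tight_layout()
plt.show()
```

Whichever tool you use, the point is the same: change the chart type, look again, and ask “what do I see now?”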

So I would urge you, L&D: before spending a lot of money on data analytics experts, get nosey and do some detective work yourselves. Keep it simple and dig into what is going on beneath the surface. Don’t just take one person’s viewpoint or use just one method; mix it up and start finding the story in the data!

My conclusions from the report and my own anecdotal research suggest that:

  • L&D does not have the skills required for data analysis (I had better get that book finished!)
  • It is not as complicated as you think
  • It is about asking the right questions and finding the story in the data
  • We don’t always need data analytics experts to do this!


Kirkpatrick – should we be sulking about it?

I have had no issue with the Kirkpatrick model, as it has always made sense to me. Just recently, though, I have been challenged to reconsider my position. Last week, running the Learning Loop, one of the participants said they had started to use Will Thalheimer’s Learning Transfer Evaluation Model, which made me curious about what that model had to offer that was different to Kirkpatrick’s.

This week I read a blog from Work Learning Research saying that levels 3 and 4 of Kirkpatrick are misunderstood by most L&D people, who use learner reactions as if they were valid level 3 and 4 measures. The blog got me a bit ruffled, particularly because the survey results:

1) use a sample size of TWO for vendors

2) show no visible link between the 250 people surveyed and whether they are actually measuring at levels 3 or 4 (if they are not, how could they know the answer to the question?)

This makes any conclusions drawn from the research tenuous, to say the least, in my opinion.

What made me curious enough to look at Will Thalheimer’s model was the suggestion that it had something new to offer. What I did like was the focus on learning transfer, but then I thought: shouldn’t the focus be on performance? Will has a lot to say about, and to criticise in, the Kirkpatrick model, but I am looking for a deeper and better way to do evaluation. What I would like to offer is a deeper explanation of Kirkpatrick’s model, and I hope you will make up your own mind about how you might apply it.

“Begin with the end in mind,” said Stephen Covey. I agree wholeheartedly. If you do a thorough analysis and engage with stakeholders and line managers about the outcomes (in performance) required, then evaluation should be straightforward. But please note: if the analysis part is skipped over or done at a superficial level, THE EVALUATION WILL BE DIFFICULT WHICHEVER MODEL YOU USE.

  1. Level 1 – learner reactions – still an important part of seeing whether you have had sufficient engagement and struck the right chord with the objectives (business-focused and learner-centred, according to the 5 Secrets of Accelerated Learning).
  2. Level 2 – learning achieved – did they learn what they were supposed to, and have they met those objectives? This is important for L&D and the line managers to know.
  3. Level 3 – impact on performance – this is what is then observed on the job and, in my opinion, should not be L&D’s sole remit. If the needs have been analysed correctly and the right outcomes determined, the line managers should be engaged enough to help embed the learning as well as measure performance improvements.
  4. Level 4 – impact on the business – if the stakeholders have been engaged and the outcomes are focused on the business, the stakeholders will be interested in measuring the business impact.

Here are some ways in which you could evaluate at the different levels of Kirkpatrick. Let me know what you think – I will be curious to hear your reactions!

In the graphic below I have overlaid the Kirkpatrick model onto Will Thalheimer’s model.

(Graphic: evaluation methods at each level)


ROI – how do we create a fresh approach to evaluating learning?

This question was raised in the Creativity Zone at Learning Live 2017 by a number of people. From observation, I have noted that a great many L&D professionals and teams find evaluation tricky. According to the Towards Maturity report “Driving the New Learning Organisation”, the “Top Deck” are twice as likely to identify metrics they want to improve through learning. That sounds so simple, and yet there are many organisations not doing this. This may be for a number of reasons:


  • Lack of clarity about who to talk to about the important metrics
  • Lack of knowledge in how this data could be captured
  • Lack of confidence in an approach that might work

Our Learning Loop Approach gets people thinking about the end before they rush into a solution. Care is taken to engage with stakeholders. Objectives are set rather than woolly aims. Performance objectives are used to drive better performance. Learning objectives are leveraged to help improve performance and a culture of social and self-reliant learning is encouraged.

So what might we advise you to do to start a fresh approach to ROI:

  • Identify your key stakeholders
  • Spend most time with the “evangelists”, asking them what performance improvements they need and how these could be achieved through learning
  • Ask them which metrics will help THEM know the desired outcomes have been achieved
  • Get THEM to measure these outcomes instead of you in L&D sweating about how to get hold of the right numbers
  • Look at the MANY different ways that you can measure if learning has been effective as part of a larger evaluation strategy (see below for MANY different ways to evaluate)

Go on, take a fresh approach and become a new type of learning leader that will forge a new way into this century!

Measure at all costs?

Measuring stuff in L&D is good, and I am an advocate of using data to inform your decision making as well as to demonstrate your worth. So is it a case of just measuring everything, on the basis that it is bound to be useful? For anyone who knows us and our approach, you know the answer to that already!

Here are the measurements I know already happen in many places:

  • Number of people who have completed training, either face to face or online
  • Test scores from online quizzes
  • Amount of time taken for elearning and “engagement” during learning
  • Number of “no-shows” on courses
  • Number of training hours per year company wide

My question is: “Does any of this help to improve performance?” The answer may be that sometimes training is not a performance issue but a compliance issue. The training has to be done, so we need to know how many hours we complete per year and do it in the most efficient way. Fair enough! If you have to do the training, make sure it is effective and done in the most efficient way, so as not to waste money.

What about measuring all these things when it’s related to performance? I can see value in this only if a good analysis has been done beforehand to:

  • Rule out issues that cannot be solved by training (poor systems, processes or lack of resources etc)
  • Identify the right stakeholders to work with, who will support you and put in place measures of effectiveness
  • Determine the organisational outcomes that need to be met, with the appropriate stakeholders providing resources and support
  • Define learning outcomes that are geared towards improving performance and are both observable and measurable
  • Put follow-up by line managers in place before the learning starts, so that performers know what they are going to get out of the learning before they attend

This sort of “joined-up” L&D makes learning everyone’s responsibility, and it also means that measurement is not just L&D’s responsibility. It means that “learners” are transformed into “performers”. It means that those measures listed at the top of the page could be used to show which methods have had the most engagement (not that that is always an indicator of success!). It would be wrong, I believe, to suggest that good engagement with one successful cohort will guarantee the same success in another cohort.

To my thinking, any suggestion that there could be some sort of permanent link between engagement and performance misses the point entirely. In a closed system like a chemical plant, putting in the same chemicals at the same rate with the same process can produce expected, repeatable results. Learning, however, is not a closed system: the variables are always changing, as are the participants and the influences on their behaviour. The true measure, in my opinion, has to be what measurable difference the learning has made to the performance of an individual. How have customer complaints reduced? Income increased? Sales boosted? A one-time analysis is not the answer, because things change. So analyse, plan, implement and measure, then go round again to make sure you are capturing the “now” and not the “yesterday”.
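
To make that concrete, here is a minimal sketch in Python with purely hypothetical numbers. It compares one of the measures mentioned above (customer complaints) before and after a learning programme, and is meant to be re-run with fresh figures each cycle rather than treated as a one-off analysis:

```python
# A minimal sketch with invented numbers, purely for illustration: measuring
# the difference a learning programme appears to have made to one business
# measure (monthly customer complaints), before and after the learning.

complaints_before = [42, 38, 45]  # three months before the programme
complaints_after = [31, 27, 24]   # three months after, once the learning has had time to embed


def average(values):
    return sum(values) / len(values)


before = average(complaints_before)
after = average(complaints_after)
change_pct = (after - before) / before * 100

print(f"Average complaints per month: {before:.1f} before, {after:.1f} after")
print(f"Change: {change_pct:+.1f}%")  # negative means complaints have reduced

# Learning is not a closed system, so re-run the comparison with the next
# period's figures (the "now", not the "yesterday") rather than relying on one analysis.
```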

So am I in L&D Narnia, expecting the impossible? Measuring the unmeasurable? Quite simply, I believe that before investing in learning and merely measuring “engagement”, you should dig, dig and dig deeper to find out what is missing and what people will need to learn and do to make changes. Keep asking “why?” until you have a sensible answer other than “why not?”

Where on earth do you begin with evaluation, L&D?

A few weeks ago, we ran a Learning Loop showcase event called “Taking the Fear out of ROI”. There was a great mix of people from many different organisations, and to say the discussions were lively would be an understatement!

Stephen Covey, a man of many wise words, said “Begin with the end in mind”. So the answer to the question in the title would, of course, echo his sentiments. If evaluation has been an afterthought to the process of delivering learning, then, quite frankly, it will be a waste of time. It would be a little like starting to knit a jumper without a pattern or any thought to shape, size or colour, and then expecting it to fit your needs.

Here is the essence: for a good evaluation, you need:

  • A solid needs analysis, which identifies the impact you would like the learning to have on the organisation
  • Stakeholders engaged at the beginning, providing you with not only the resources to identify needs, but also resources and support for the evaluation (for more on stakeholder management, click here)
  • Clear organisational outcomes, which the stakeholders will monitor and measure
  • Learning outcomes that support the organisational ones
  • Time before the next new project to complete the evaluation analysis and reporting
  • Realistic expectations from the stakeholders about the expected outcomes

There are of course other factors, but this is a brief run-through of the key components. The last one is an interesting one, especially when there are multiple factors which may influence the outcomes. Let us take a simple example:

At the same time as a customer service learning programme is being rolled out, a new customer management system is also installed.

In this instance the relevant stakeholders may either:

  • Join forces and measure the overall impact of both
  • Agree what percentage of the impact to attribute to each separately

Whichever approach is used, there need to be realistic expectations from the stakeholders as to some of the other factors which may prevent the objectives being achieved:

  • Lack of line manager support for the learners (one of the biggest reasons for learning not embedding)
  • Insufficient lead time between the learning and the measurement to allow the learning to embed and the results to be observed
  • A lack of time and space in the learners’ roles for the learning to be put into action

Again, this is not an exhaustive list, but it covers some of the key areas that may be investigated if the learning does not meet expectations.

At How to Accelerate Learning, we can help organisations to dig deeper into evaluation. Contact us to find out how we could help you.
