This title has been taken from the Towards Maturity report published in August of this year. I was immediately drawn to the title, and anyone who knows what a maths geek I am will understand why.
I have always loved numbers, even just playing with them: multiplying numbers by themselves repeatedly just for fun! Yes, I know it’s not normal, and I appreciate that not everyone shares this love of numbers; in fact, quite the opposite. I have several friends who will admit that, for them, numbers are almost a phobia.
Reading this report, it is quite evident that we in L&D are not great at collecting and using data to its best advantage. Some of the figures that struck me were:
- Of those aspiring to use data to effect change, only two in five (40%) could say it helped them demonstrate business impact
- Only three in five (60%) were successful in using data to help solve business problems
Bear in mind that this was from a sample size of 700+, and the two figures above cover only those people who were really trying to use data effectively. In reality there will also be a number of people not even trying, so the 40% and 60% are likely to be very optimistic figures.
The most likely reasons cited were:
- Data and its analysis are complicated
- Lack of L&D skills in this area
Let me take the second point first. Why are we not addressing this lack of skills? Is it the phobia of numbers? A fear of what to do once you start collecting? An expectation that things have to change once you start collecting data effectively? Maybe it’s a combination of all three, or a misconception about what it means to collect and analyse data?
For me it is quite simple (and this may address the first point). In L&D we need to get nosey. When someone asks us to deliver a leadership programme, we need to ask why, and how will we know it has been successful? If the first person you ask doesn’t know, then ask someone else. Is it a real need or a perceived need?
The perceived need may be something like employee engagement scores being low. What we really need to determine is what effect that is having on the performance of the business:
- High recruitment costs?
- Lack of agility in the marketplace because a high attrition rate leaves staff less familiar with products than they should be?
- Poor customer service because the tools they use have had little investment?
So when you look at these examples, you can start to see that it really is not about data analysis but about curiosity, perseverance and a healthy dose of skepticism. If you can pinpoint what the problem is, and it is a real business need, then what you need to measure will be very obvious:
- Reduction in recruitment costs
- Reduction in time to market with new products
- Range of new products and uptake
- Attrition rate
- Customer satisfaction scores
These are not L&D statistics; they are business measures, and once you have highlighted the purpose of your L&D focus, the business will also want to measure them. That is not to say that at times you will not need to do some data collection and analysis, but I think we are overcomplicating it and not getting to the nub of the problem.
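To make one of these measures concrete, here is a minimal sketch of an attrition-rate calculation; the figures are invented purely for illustration:

```python
# Hypothetical figures for illustration only.
leavers = 18              # people who left during the year
start_headcount = 120     # headcount at the start of the year
end_headcount = 110       # headcount at the end of the year

# Annual attrition rate = leavers divided by the average headcount for the period.
average_headcount = (start_headcount + end_headcount) / 2
attrition_rate = leavers / average_headcount

print(f"Annual attrition rate: {attrition_rate:.1%}")  # prints: Annual attrition rate: 15.7%
```

Track a figure like this before and after your intervention and you have a business measure, not an L&D statistic.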
In my book (currently in first draft), “How Not to Waste Your Money on Training”, I will show people simply how to “find the story in the data”. Using a simple example of a scoring grid, I will show how, with a spreadsheet and by playing with different graph types, you can discover little parts of the truth about what is going on. It takes a click and a small amount of curiosity. If you want to try it out, use this example and play around with different chart types in Excel (a scripted version of the same exploration follows the findings below):
- Bar chart
- Pie chart
- Stacked bar
- Spider diagram
For each format, ask yourself, “What do I see now?” Using this approach of curiosity and play, I discovered:
- A bar chart gives a good comparison of one person against another for each part of their role
- A spider diagram shows how well-rounded each team member is in their own right. Some are not rounded at all! Tracy seems the most well-rounded.
- A stacked column shows the team’s strengths and weaknesses:
- Who is the strongest in sales skills?
- Who is the weakest in product knowledge and working independently (why might this be? Manager poor at delegating?)
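If you would rather script that exploration than click through Excel’s chart menu, here is a minimal Python sketch of the same idea. The scoring grid, the skill headings and every name except Tracy are invented for illustration; substitute the numbers from your own spreadsheet:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical scoring grid: one row per team member, one column per part of the role.
scores = pd.DataFrame(
    {
        "Sales skills":          [4, 2, 5, 3],
        "Product knowledge":     [3, 1, 4, 2],
        "Working independently": [2, 3, 5, 4],
    },
    index=["Alan", "Bea", "Tracy", "Dev"],
)

fig = plt.figure(figsize=(12, 9))

# Bar chart: compare one person against another for each part of the role.
ax1 = fig.add_subplot(2, 2, 1)
scores.plot.bar(ax=ax1, title="Bar: person vs person, per skill")

# Pie chart: where the team's combined scores sit across the skills.
ax2 = fig.add_subplot(2, 2, 2)
scores.sum().plot.pie(ax=ax2, ylabel="", title="Pie: team total per skill")

# Stacked column: the team's strengths and weaknesses, skill by skill.
ax3 = fig.add_subplot(2, 2, 3)
scores.T.plot.bar(ax=ax3, stacked=True, title="Stacked: team strength per skill")

# Spider (radar) diagram: how well-rounded each team member is.
ax4 = fig.add_subplot(2, 2, 4, projection="polar")
angles = np.linspace(0, 2 * np.pi, len(scores.columns), endpoint=False)
closed = np.concatenate([angles, angles[:1]])  # repeat the first angle to close the polygon
for name, row in scores.iterrows():
    values = np.concatenate([row.to_numpy(), row.to_numpy()[:1]])
    ax4.plot(closed, values, label=name)
ax4.set_xticks(angles)
ax4.set_xticklabels(scores.columns)
ax4.set_title("Spider: how well-rounded is each person?")
ax4.legend(loc="lower right", fontsize="small")

plt.tight_layout()
plt.show()
```

Ask the same “what do I see now?” question of each panel; in this made-up grid, Tracy’s polygon on the spider chart is the largest and most even, which echoes what the bar chart shows.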
So I would urge you, L&D: before spending a lot of money on data analytics experts, get nosey and do some detective work yourselves. Keep it simple and dig into what is going on beneath the surface. Don’t just take one person’s viewpoint or use just one method; mix it up and start finding the story in the data!
My conclusions from the report and my own anecdotal research suggest that:
- L&D does not have the skills required for data analysis (I had better get that book finished!)
- It is not as complicated as you think
- It is about asking the right questions and finding the story in the data
- We don’t always need data analytics experts to do this!
This blog is a follow-up to the webinar of the same name, delivered on the 25th of September 2018.
In January, as soon as tickets became available for this event, there was a flurry of bookings, and they have poured in throughout the year. This piqued my interest: why was this such a popular event?
To find out more, we sent out a survey to attendees and, in parallel, a “Deep Dive” survey which took a wider look at the challenges in L&D. These were some of the findings:
These results prompted me to phone a number of people to dig deeper, and here are some of the issues that were lurking behind the statistics:
In my head I am thinking that what is really going on is that people find evaluation and demonstrating value difficult because they do not do the analysis part well. Some people believe that stakeholder conversations are all they need. I believe we should take a more analytical approach in L&D, getting closer to the business and finding out what is really going on. This is what I would recommend: the Learning Loop Approach.
As part of the consultancy approach, I referred back to another showcase event, “The Consultancy Approach”, where we had some great conversations about what that might look like.
Finally, we discussed what it would look like to have a “learning ecosystem” (a phrase stolen from Pedro Valido), and hopefully that discussion is still going on via a message board we created.
If this topic has interested you then please get in touch and/or complete our “Deep Dive” survey to let us know what keeps you awake at night. You can also register your interest for our first Deep Dive event on December 4th in London.
A little provocative, I know, but for years I have been hearing that face-to-face learning is dead, and the simple fact is that it’s doing well. It’s alive and kicking!
This is borne out by the Towards Maturity report “L&D where are we now 2017-2018”, where four in five soft-skills courses are still delivered face to face. In fact, what surprised me in the statistics around technology in learning was that no figure above 35% said technology has visible benefits. So why the provocative title? Am I biased towards face to face?
The simple answer is no, BUT I recognise the huge value it can bring and the buzz it can create. We are, after all, human, and there is really nothing that can recreate the excitement of the right face-to-face intervention. What I hate to see is L&D’s rush to the latest fad, thinking that this will:
- Save money
- Improve performance
- Get the results we need
In reality, the latest fad is just that, and I favour a more pragmatic approach rather than a huge investment in some technology that may not deliver on the hype and promises. That is not to say that we should not use the latest and shiniest. My concerns are often around investment versus return, especially for the voluntary, public and not-for-profit sectors. Can they really afford to invest in some of the latest when there is no guarantee it will deliver?
So, what am I suggesting? Here are a few things:
- Analyse the needs carefully so you get an accurate picture of what is required – this will put it into the knowledge, skills and attitudinal learning categories, as well as the level of learning (see Bloom’s taxonomy) – will it need to be in the moment, hard-wired or semi-permanent?
- Look at how to build up the learning: not a one-time-only event (unless that is appropriate) but perhaps overlapping and layering the learning, interweaving skills and knowledge
- Choose from the 100 ways to learn and create a blend that will make the learning interesting and engaging. People can then choose one or all of the activities. The activities could, if appropriate, build up the learning. See the example below:
Learning about the 5 Secrets of Accelerated Learning on the Learning Loop:
- Participants read an article as pre-work a few weeks beforehand, sent in the LNA email that also finds out participants’ needs and objectives
- Participants watch a short video summarising the 5 Secrets, sent in the last-minute email
- At the workshop, there are posters and resources on the 5 Secrets around the room
- During the game, there will always be a question to “Describe the 5 Secrets of Accelerated Learning”, which is a higher level of learning than “List the 5 Secrets of Accelerated Learning”
- One group does a teach-back to help everyone remember what the 5 Secrets are and how to apply them
Back to the title then… AI, eLearning and VR are not dead. They are, as is face-to-face, simply some of the tools you can keep in your toolbox so that you can choose the most appropriate method. Just as PowerPoint has been overdone these days, let’s not overdo the new ones. If we mix up the tools we use, we create a variety for learners that will keep them engaged. What are your thoughts?
Should we in L&D be focused on improving job performance, so that all learning is focused on that, or should we be looking for people to be inspired to learn more and become more self-directed?
I spent some time in July writing my book, “How Not to Waste Your Money on Training”, and during the week I had a few philosophical moments. One was about the difference between education, training and learning. I often speak to L&D professionals about the difference between a training needs analysis and a learning needs analysis: the former always leads to training, whereas the latter leads to something broader than just training; it could be learning in many different forms.
In a similar way, I was thinking about how my degree in chemical engineering and fuel technology was a good education. It prepared me for the world of work and also began a lifelong desire to learn more. When I moved from engineering to IT training with IBM, we were called “instructors”, and I worked in the IBM Education Centre in St. John’s Wood. Was what people received when they came to us there an education? I am not so sure. I would hope the delegates were more prepared for their world of work and that they were inspired to learn more. But how broad was that inspiration? Did they become self-motivated learners, keen to go beyond the traditional training course to further develop themselves?
This leads me to the present day; my title has changed from instructor to trainer to L&D professional/facilitator. How do I define, though, whether I am educating, training or helping people to learn?
A few years after I gained my Certificate in Training Practice, I began working with trainers, delivering the CIPD Certificate in Training Practice and then the Certificate in Learning and Development Practice. Through my accelerated learning programmes, I have worked further with L&D professionals to help them learn an approach that has been taking shape over many years: an approach that helps me focus on organisational needs as well as learner requirements. Programmes that once took 8 months of weekly 4-hour sessions were delivered in 8 one-day sessions after moving to a well-known learning provider. Now I deliver a 6-week programme, which includes a 2-day workshop, and I cannot possibly ‘cover’ all I used to.
Leading by example and walking the talk have been driving forces in our organisation, “How to Accelerate Learning”. Facilitation is practised and runs like an invisible thread through the programmes. Inspiring resources and innovative ways of learning through gamification create a different feel to the programme, leaving many people “inspired” – their word, not ours. Even hardened trainers with years of experience under their belts talk of how different it feels.
This has not always been a deliberate intention but, on occasions, a happy and accidental one – one that we persist with because of the results we achieve and the feedback we get. We put effort into:
- Drilling deeper into needs to see if learning is the appropriate course of action
- Delivering learning via a blend of activities not just training
- Helping people to gain confidence in stepping out of their comfort zone – to experience new ways to help people learn
- Using unusual materials and resources to inspire a different approach
- Following up the learning so it is not just a one-time event
So, I don’t feel like it is training in the traditional sense; not like when I was a VM Instructor. Nor, given the feedback we get, do I feel that it is just learning. So are we educating, and is that now the remit of L&D?
I have had no issue with the Kirkpatrick model, as it has always made sense to me. Just recently, though, I have been challenged to reconsider my position. Last week, running the Learning Loop, one of the participants said they had started to use Will Thalheimer’s Learning Transfer Evaluation Model, which made me curious about what that model had to offer that was different to Kirkpatrick’s.
This week I read a blog from Work Learning Research saying that levels 3 and 4 of Kirkpatrick are misunderstood by most L&D people, who use learner reactions as if they were valid level 3 and 4 measures. The blog ruffled me a bit, particularly because the survey results:
1) use a sample size of TWO for vendors
2) show no visible link between the 250 people surveyed and whether they are actually even measuring at levels 3 or 4 (if they are not, how could they know the answer to the question?)
This makes any conclusions drawn from the research, in my opinion, tenuous to say the least.
What made me curious enough to look at Will Thalheimer’s model was the suggestion that it had something new to offer. What I did like was the focus on learning transfer, but then I thought: shouldn’t the focus be on performance? Will has a lot to say in criticism of the Kirkpatrick model, but I am looking for a deeper and better way to do evaluation. What I would like to offer is a deeper explanation of Kirkpatrick’s model, and I hope you will make up your own mind about how you might apply it.
“Begin with the end in mind,” said Stephen Covey. I agree wholeheartedly. If you do a thorough analysis and engage with stakeholders and line managers about the outcomes (in performance) required, then evaluation should be straightforward. But please note: if the analysis part is skipped over or done at a superficial level, THE EVALUATION WILL BE DIFFICULT WHICHEVER MODEL YOU USE.
- Level 1 – learner reactions – still an important part of seeing whether you have had sufficient engagement and struck the right chord with the objectives (business-focused and learner-centred, according to the 5 Secrets of Accelerated Learning)
- Level 2 – learning achieved – did they learn what they were supposed to, and have they met those objectives? This is important for L&D and the learners’ line managers to know
- Level 3 – impact on performance – this is what is then observed on the job and, in my opinion, should not be L&D’s sole remit. If the needs have been analysed correctly and the correct outcomes determined, line managers should be engaged enough to help embed the learning as well as measure performance improvements
- Level 4 – impact on the business – if the stakeholders have been engaged and the outcomes are focused on the business, the stakeholders will be interested in measuring the business impact
Here are some ways in which you could evaluate at the different levels of Kirkpatrick. Let me know what you think – I will be curious to hear your reactions!
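The original graphic does not reproduce here, so as a purely illustrative stand-in (my own examples, not taken from the graphic), here is one way the four levels might map to concrete methods and the question each answers:

```python
# An illustrative, not definitive, mapping of Kirkpatrick levels to common
# evaluation methods. The methods are everyday examples, not a prescribed set.
KIRKPATRICK_EVALUATION = {
    1: {"focus": "learner reactions",
        "question": "Did we engage them and strike the right chord?",
        "methods": ["end-of-session feedback", "observation of engagement in the room"]},
    2: {"focus": "learning achieved",
        "question": "Did they meet the learning objectives?",
        "methods": ["pre/post knowledge checks", "skills demonstrations", "teach-backs"]},
    3: {"focus": "impact on performance",
        "question": "Has behaviour on the job changed?",
        "methods": ["line-manager observation weeks later", "work-output sampling"]},
    4: {"focus": "impact on the business",
        "question": "Did the agreed business measures move?",
        "methods": ["attrition rate", "customer satisfaction scores", "time to market"]},
}

for level, detail in KIRKPATRICK_EVALUATION.items():
    print(f"Level {level} ({detail['focus']}): {detail['question']}")
    for method in detail["methods"]:
        print(f"  - {method}")
```

Notice that the level 3 and 4 examples are the same business measures discussed earlier in this blog; if the analysis was done properly, they should already have been agreed with stakeholders and line managers.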
In the graphic below I have overlaid the Kirkpatrick model onto Will Thalheimer’s model.
[Graphic: the Kirkpatrick levels overlaid onto Thalheimer’s Learning Transfer Evaluation Model, showing evaluation methods and the questions they answer]