More than a smile sheet

#8 in our training evaluation blog post series:

Digging into training evaluation uncovers a lot of debate and discussion around the value of level 1 evaluation data.

In my last evaluation post in this series, A little evaluation can be a dangerous thing, I wrote about the potential dangers of using only level 1 evaluation data to determine the effectiveness of learning back in the workplace. There are many articles, blog posts and forums dedicated to discussing the merits (or lack thereof) of level 1 evaluation. I personally believe that a level 1 smile sheet has value: it allows learners to reflect on their learning and provides a vehicle for their thoughts and feelings. But we need to keep in mind that it’s only one small measurement in the overall evaluation process. Far less weight should be placed on the “qualitative” data gathered from a level 1 smile sheet, and far more on level 4 evaluation results - the impact training has on business results.

Whether simple or complex, level 1 end-of-course evaluation forms (a.k.a. “smile sheets”) are used in the majority of training courses – by over 91% of organizations, according to a 2009 ASTD Value of Evaluation research study. But does your level 1 end-of-course “smile sheet” go beyond the basic questions to capture data that will help your organization measure at evaluation levels 2, 3 and 4?

A well-designed level 1 evaluation plan should measure not only learner satisfaction but also the learner’s engagement and the training’s relevance to their job. The goal is to incorporate statements or questions that focus the learner on the higher levels of evaluation and get them thinking about how the new learning will benefit both them and the organization after the training event is over.

There are some simple changes you can make to your level 1 evaluation form that can provide further value (a short sketch after the list shows how the resulting data might be tallied):

  • Consider using a 7-, 9-, or 11-point rating scale to provide a richer level of feedback. Label only the two ends of the scale rather than every number (e.g., 1 = strongly disagree and 7 = strongly agree).
  • Make all evaluation statements or questions learner-centred. For example, rather than “The instructor provided debrief activities for students to demonstrate their learning”, use “The debrief activities helped me to effectively practice what I learned”.
  • Consider adding statements or questions to the course evaluation form that measure engagement and relevance. This helps to focus the learner on levels 2, 3 and 4. Some examples include:
    • I had sufficient opportunities to contribute my ideas. (level 2)
    • I estimate that I will apply the following percentage of the knowledge/skills learned in this training directly to my job. (Provide a % scale from 0% to 100% in increments of 10.) (level 3)
    • This training will improve my job performance. (level 4)
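
To make the pay-off concrete, here is a minimal sketch of how responses from the redesigned form might be tallied. The field names and numbers are hypothetical; it assumes a 7-point agreement scale for the level 2 and level 4 items plus the 0–100% application estimate for the level 3 item:

```python
from statistics import mean

# Each dict is one learner's completed smile sheet (hypothetical data).
responses = [
    {"contribute_ideas": 6, "improve_performance": 5, "apply_pct": 70},
    {"contribute_ideas": 7, "improve_performance": 6, "apply_pct": 80},
    {"contribute_ideas": 4, "improve_performance": 3, "apply_pct": 40},
]

# Average agreement on the 7-point items (the level 2 and level 4 proxies).
for item in ("contribute_ideas", "improve_performance"):
    scores = [r[item] for r in responses]
    print(f"{item}: mean {mean(scores):.1f} on a 7-point scale")

# Average self-estimated application of the learning (the level 3 proxy).
print(f"expected application: {mean(r['apply_pct'] for r in responses):.0f}%")
```

Averages like these won’t prove business impact on their own, but they give you a consistent baseline to compare against the level 3 and level 4 data you gather later.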

You can see that just a few tweaks to a level 1 evaluation lead to insightful information that can improve your training process.

Stay tuned for more upcoming blog posts with tips and strategies for the other levels of evaluation, and be sure to check out the other evaluation blog posts in this series.

A little evaluation can be a dangerous thing

#7 in our training evaluation blog post series:

I was reading an interesting article recently, “Are you too nice to train?” by Sarah Boehle, and it included a case that I’d like to share:

Roger Chevalier, an author and former Director of Information and Certification for ISPI, joined the Century 21 organization as VP of Performance in 1995. The company trained approximately 20,000 new agents annually using more than 100 trainers in various U.S. locations. At the time, the real estate giant's only methods of evaluating this training's effectiveness and trainer performance were Level 1 smile sheets and Level 2 pre- and post-tests. When Chevalier assumed his role with the company, he was informed that a number of instructors were suspect based on Level 1 and 2 student feedback. Chevalier set out to change the system.

His team tracked graduates of each course based on the number of listings, sales and commissions generated post-training (Level 4). These numbers were then cross-referenced to the office where the agents worked and the instructor who delivered their training. What did he find? A Century 21 trainer with some of the lowest Level 1 scores was responsible for the highest performance outcomes post-training, as measured by his graduates' productivity. That trainer, who was rated in the bottom third of all trainers by his students in Level 1 satisfaction evaluations, was found to be one of the most effective in terms of how his students performed during the first three months after they graduated. "There turned out to be very little correlation between Level 1 evaluations and how well people actually did when they reached the field," says Chevalier, now an independent performance consultant in California. "The problem is not with doing Level 1 and 2 evaluations; the problem is that too many organizations make decisions without the benefit of Level 3 and 4 results."
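
For anyone who wants to run the same sanity check, here is a minimal sketch of the cross-referencing Chevalier describes, with entirely made-up numbers: per trainer, compare the average Level 1 satisfaction score against the average productivity of that trainer’s graduates. A Pearson r near zero (or negative, as in this toy data) is exactly the “very little correlation” he found:

```python
from statistics import correlation  # available in Python 3.10+

# trainer -> (mean Level 1 score, mean graduate commissions in first 3 months)
# All numbers below are hypothetical, for illustration only.
trainers = {
    "A": (4.8, 21000),
    "B": (3.1, 34000),  # low smile-sheet scores, strong field results
    "C": (4.5, 19000),
    "D": (3.9, 27000),
}

level1_scores = [score for score, _ in trainers.values()]
productivity = [result for _, result in trainers.values()]

# Pearson correlation between satisfaction ratings and field performance.
print(f"Pearson r = {correlation(level1_scores, productivity):.2f}")
```

If your organization keeps training records and performance data in separate systems, the hardest part of this analysis is usually joining the two, not the statistics.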

Industry studies appear to support his words. A 2009 ASTD Value of Evaluation research study found that 91.6% of the organizations in the study evaluated training at level 1, 80.8% at level 2, 54.6% at level 3 and 36.9% at level 4, while 4.1% did no evaluation at all. Of the 91.6% that evaluated at level 1, only 35.9% said this level had high or very high value. Yet of the 36.9% of organizations that evaluated results (level 4), 75% said this level had high or very high value.

ASTD’s findings are somewhat alarming because they suggest that barely a third of these organizations ever connect training to business results, and some don’t evaluate at all. We could assume from this data that the level 1 information gathered by these organizations’ training teams is the primary, or maybe the only, measurement used to justify their training efforts. Ratings and comments get rolled up into an overall total and used as a benchmark to measure the effectiveness of the trainers and the training programs being offered. Level 1 is used in isolation, with no knowledge or thought about how the training programs address (or don’t address) key business needs. So why do companies do this?

I agree with Boehle’s theory that it comes down to two factors. First, level 1 “smile sheets” are easy to do, while levels 2, 3, 4 and 5 may appear costly, time-consuming and potentially confusing (Where do we start? How do we do it?). Second, if stakeholders (e.g. CXOs, internal clients and business partners) don’t demand accountability, why evaluate further? Digging deeper may uncover negative results - if all appears to be working well on the surface, no one is asking questions and learners are happy, why rock the boat?

It’s been our experience that best-practice training and development teams recognize that they have a responsibility to ensure that the programs they produce and deliver are aligned with the organization’s needs - to demonstrate how training is contributing to the success of the organization. They need to show proof that training is really making a difference, clearly identifying how the organization’s bottom line is being positively impacted and how business needs and issues are being addressed. Using only level 1 data to measure training and trainer effectiveness is dangerous: it tells very little about how much of the learning is actually being applied on the job or how business results are truly being impacted. Sooner or later, this will catch up to the training providers and ultimately to the organizations they work for. Training budgets will be cut, work will be outsourced, and organizations will struggle to keep up with their competition in a tight, highly competitive economy.

Be sure to check out the other evaluation blog posts in this series.

Selling the importance of evaluation

#5 in our training evaluation blog post series:

In the not-so-distant past, evaluation of learning was an isolated activity relegated to the training team, whose responsibility didn’t extend much past gathering level 1 and level 2 data. Today the emphasis is on the bottom line and how organizations can get the best value for their money and efforts. New evaluation tools, processes and strategies are available to help companies become more strategic; the evaluation of learning has become less of an isolated activity and more of a culture and philosophy. Learning teams are now becoming drivers of change, helping to support evaluation efforts within their organizations. But what if your stakeholders and senior management don’t see or understand the importance of evaluation?

Implementing levels 3, 4 and even 5 can be a challenging and daunting task even if the training team is fully involved and committed, because commitment and participation are also required from employees, managers, supervisors, business partners, stakeholders and senior management. An organization’s executives need to be on board, as top-down messaging is critical to success - you’ll be swimming upstream trying to get managers to participate in evaluation if they don’t feel that their own bosses are behind it.

Many senior leaders already acknowledge that employee education is a critical success factor for future growth and prosperity. Use this as your ‘hook’ to sell them on the importance and value of a solid system of evaluation. Here are some tips to help you along the way:

  • Show how evaluation contributes to success: Be able to show a direct line between the organization’s strategic needs and goals, business unit operational needs, individual development needs, the training designed to address them, and the evaluation techniques that will be used to quantify improvements.
  • Share a roadmap to implementation: Create an evaluation strategy that will systematically guide the organization from the present situation to the desired level of evaluation. Be prepared to provide costs in terms of time and manpower.
  • Give confidence with examples: Gather relevant case studies of best-practice organizations that have implemented evaluation with positive results, and use them to support your position. There are a number of websites with best-practice research; check out our favourites in a previous blog post.
  • Start small to prove your case: Run a pilot and share the results. Work with a key stakeholder or business partner to address a business need through training, and use that training as the “test case” for your evaluation plan. Apply each level of evaluation, gathering testimonials and data and tracking trends along the way. Use stories and case studies based on the training results to capture attention and highlight the positives. Prove that measuring the value of learning can have a positive effect on your organization.
  • Communicate, communicate, communicate! Communication at all levels of the organization is critical to success. People need to understand the What, Where, Why, When and How of your evaluation strategy and what their role and commitment will be. Be patient. This will take time, but your efforts will be worthwhile!

Be sure to check out the other evaluation blog posts in this series.


Measuring the value of learning

#1 in our training evaluation blog post series:

Good training evaluation techniques identify and measure what learning has occurred during and after training, whether job performance improvements have been realized and, most importantly, what the cost-benefits are to the organization. When it comes to training evaluation techniques, Kirkpatrick's four-level evaluation model (level 1-reaction, level 2-learning, level 3-behaviour and level 4-results) is the most widely used and respected model for evaluating and measuring business value. Level 5 evaluation, Return on Investment (ROI), is an additional step to training evaluation developed by respected author and founder of the ROI Institute, Dr. Jack Phillips.

I found an interesting statistic in a 2009 ASTD study, The Value of Evaluation: Making Training Evaluations More Effective. 92% of respondents said they measure at least level 1 of the Kirkpatrick/Phillips model of learning evaluation, but use of the model drops off dramatically with each subsequent level. The study suggests that organizations may not fully grasp how evaluation should be used, and I have to agree with this insight. Many people I’ve talked to and worked with during my career don’t realize that good evaluation tools and techniques are available that can make their organization stronger. There are ways to implement evaluation that can have a positive impact without being overly complicated or time-consuming. During our recent Kirkpatrick certification, I met many T&D and HR practitioners who were looking for tools and processes that would work in their organizations and make evaluation “easier” to implement.

We at Limestone are passionate about the value of training evaluation and would like to share some of our thoughts, strategies and ideas with you through a series of “evaluation” blog posts. So stay tuned.