More than a smile sheet

#8 in our training evaluation blog post series:

Digging into training evaluation uncovers a lot of debate and discussion around the value of level 1 evaluation data.

In my last evaluation post in this series, A little evaluation can be a dangerous thing, I wrote about the potential dangers of using only level 1 evaluation data to determine the effectiveness of learning back in the workplace. There are many articles, blog posts and forums dedicated to discussing the merits (or lack thereof) of level 1 evaluation. I personally believe that a level 1 smile sheet has value for learners: it allows them to reflect on their learning and provides a vehicle for their thoughts and feelings. But I also believe that we need to keep in mind that it’s only one small measurement in the overall evaluation process. Much less weight should be placed on the “qualitative” data gathered from a level 1 smile sheet, and much more weight and importance should be given to level 4 evaluation results - the impact training has on business results.

Whether simple or complex, level 1 end-of-course evaluation forms (a.k.a. “smile sheets”) are used for the majority of training courses offered by organizations – over 91% of organizations use them, according to a 2009 ASTD Value of Evaluation research study. But does your level 1 end-of-course “smile sheet” go beyond the basic questions to capture data that will help your organization measure evaluation levels 2, 3 and 4?

A well-designed level 1 evaluation plan should measure not only learner satisfaction but also learners’ engagement and the relevance of the training to their jobs. The goal is to incorporate statements or questions that focus learners on the higher levels of evaluation and get them thinking about how the new learning will benefit both them and the organization after the training event is over.

There are some simple changes you can make to your level 1 evaluation form that can provide further value:

  • Consider using a 7-, 9- or 11-point rating scale to provide a richer level of feedback. Label only the two ends of the scale, rather than labeling each number on the scale (e.g., 1 = strongly disagree and 7 = strongly agree).
  • Make all evaluation statements or questions learner-centred. For example, rather than “The instructor provided debrief activities for students to demonstrate their learning”, use “The debrief activities helped me to effectively practice what I learned”.
  • Consider adding statements or questions to the course evaluation form that measure engagement and relevance. This helps to focus the learner on levels 2, 3 and 4. Some examples include:
    • I had sufficient opportunities to contribute my ideas. (level 2)
    • I estimate that I will apply the following percent of the knowledge/skills learned from this training directly to my job. (Provide a % scale from 0% to 100% in increments of 10.) (level 3)
    • This training will improve my job performance. (level 4)

You can see that just a few tweaks to a level 1 evaluation can lead to insightful information that will help you improve your training process.

Stay tuned for upcoming blog posts with tips and strategies for the other levels of evaluation, and be sure to check out our other evaluation blog post in this series:


A little evaluation can be a dangerous thing

#7 in our training evaluation blog post series:

I was recently reading an interesting article called “Are You Too Nice to Train?” by Sarah Boehle, and she included a case that I’d like to share:

Roger Chevalier, an author and former Director of Information and Certification for ISPI, joined the Century 21 organization as VP of Performance in 1995. The company trained approximately 20,000 new agents annually using more than 100 trainers in various U.S. locations. At the time, the real estate giant's only methods of evaluating this training's effectiveness and trainer performance were Level 1 smile sheets and Level 2 pre- and post-tests. When Chevalier assumed his role with the company, he was informed that a number of instructors were suspect based on Level 1 and 2 student feedback. Chevalier set out to change the system.

His team tracked graduates of each course based on number of listings, sales and commissions generated post-training (Level 4). These numbers were then cross-referenced to the office where the agents worked and the instructor who delivered their training. What did he find? A Century 21 trainer with some of the lowest Level 1 scores was responsible for the highest performance outcomes post-training, as measured by his graduates' productivity. That trainer, who was rated in the bottom third of all trainers by his students in Level 1 satisfaction evaluations, was found to be one of the most effective in terms of how his students performed during the first three months after they graduated. "There turned out to be very little correlation between Level 1 evaluations and how well people actually did when they reached the field," says Chevalier, now an independent performance consultant in California. "The problem is not with doing Level 1 and 2 evaluations; the problem is that too many organizations make decisions without the benefit of Level 3 and 4 results."

Industry studies appear to support his words. A 2009 ASTD Value of Evaluation research study found that 91.6% of the organizations in the study evaluated training at level 1, 80.8% at level 2, 54.6% at level 3 and 35.9% at level 4. 4.1% did no evaluation at all! Of the 91.6% that evaluated at level 1, only 35.9% said this level had high or very high value. Yet of the 36.9% of organizations that evaluated results (level 4), 75% said this level had high or very high value.

ASTD’s findings are somewhat alarming because they suggest that the majority of these organizations stop short of measuring business results, and some aren’t evaluating at all. We could assume from this data that the level 1 information gathered by these organizations’ training teams is the primary, or maybe the only, measurement used to justify their training efforts. Ratings and comments get rolled up into an overall total and used as a benchmark to measure the effectiveness of the trainers and the training programs being offered. Level 1 is used in isolation, with no knowledge or thought about how the training programs address (or don’t address) key business needs. So why do companies do this?

I agree with Boehle’s theory that it comes down to two factors. First, level 1 “smile sheets” are easy to do, while levels 2, 3, 4 and 5 may appear costly, time-consuming and potentially confusing (Where do we start? How do we do it?). Second, if stakeholders (e.g., CXOs, internal clients and business partners) don’t demand accountability, why evaluate further? Digging deeper may uncover negative results - if all appears to be working well on the surface, no one is asking questions and learners are happy, why rock the boat?

It’s been our experience that best-practice training and development teams recognize that they have a responsibility to ensure that the programs they produce and deliver are aligned with the organization’s needs – to demonstrate how training is contributing to the success of the organization. They need to show proof that training is really making a difference - clearly identifying how the organization’s bottom line is being positively impacted and how business needs and issues are being addressed. Using only level 1 data to measure training and trainer effectiveness is dangerous and tells very little about how much of the learning is actually being applied on the job or how business results are truly being impacted. Sooner or later, this will catch up with the training providers and ultimately with the organizations they work for. Training budgets will be cut, work will be outsourced, and organizations will struggle to keep up with their competition in a tight and highly competitive economy.

Be sure to check out our other evaluation blog post in this series: