#9 of 9 in our weekly succession planning blog post series:
Our guest blogger, Paul Riley, is a life-long learner of Organizational Leadership and Change who applies systems thinking and community development principles to help people work more effectively together within the complex human systems we create.
This week’s blog post focuses on the last of the 7 principles of successful Succession Planning: #7: Monitor and Evaluate. Succession planning and leadership development programs should be continuously monitored and evaluated to help stakeholders understand what works, why it works, and what impact these programs are having on the organization’s leadership pipeline. People often think of evaluation as an activity that’s done at the end of a program. However, it’s important that evaluation plays an integral role from the beginning, during program planning and implementation, with a focus on long-term outcomes and continuous improvement.
Program evaluation starts with the end in mind. In other words, you must identify the goals and long-term outcomes of the program to understand what you’re evaluating. I like to start by establishing an explicit program theory to describe how and why a set of activities is expected to lead to anticipated outcomes and impacts. I often use a logic model with the organizations I work with to show the chain of reasoning, connecting the program’s parts using “if...then” statements to illustrate a sequence of causes and effects. The planning process begins with a discussion among stakeholders about strategies that will contribute to the program’s desired results. In essence, this conversation is about the program’s theory.
The logic model I mentioned above is an iterative tool that provides a simple framework which is revisited throughout the program planning, implementation, and evaluation phases. The terms ‘logic model’ and ‘program theory’ are often used interchangeably because the model describes how a program works and to what extent. The W.K. Kellogg Foundation provides a very useful Logic Model Development Guide, which was developed for the non-profit sector but is particularly useful for evaluating programs designed for organizational and behavioural change, regardless of sector.
Although a logic model provides a useful framework for establishing and presenting the program’s theory, the framework doesn’t provide much detail about how to select indicators. So for this, I recommend incorporating frameworks into the evaluation program that are designed for evaluating training, succession planning, and leadership development programs. For instance, Bennett’s Hierarchy describes seven successive levels to evaluate training and development programs. The hierarchy starts with inputs and activities at the bottom, which Bennett asserts are the simplest level of evaluation that provide the least value in determining whether a program is effective. At the top of the hierarchy are social, economic, and environmental outcomes, which Bennett believes represent the highest aim for educational programs and are often the most complex to measure. Kirkpatrick also provides a model to evaluate training programs, which includes four levels: (1) participant reaction, (2) learning, (3) behaviour change, and (4) organizational results. William Rothwell, author of Effective Succession Planning, proposes an adaptation of Kirkpatrick’s four-level model to evaluate succession planning programs, which includes: (1) customer satisfaction, (2) program progress, (3) effective placements, and (4) organizational outcomes.
Combining the frameworks proposed by Bennett, Kirkpatrick, Rothwell, and others provides different lenses through which to look at the various aspects of the program’s theory. While the logic model provides a general framework to guide program planning, implementation, and evaluation, these other models offer a more targeted focus on establishing indicators to measure program outcomes and impacts. Incorporating multiple evaluation methods is likely to offset the weaknesses and complement the strengths of different models, and it allows evaluators to confirm results, which enhances the integrity of program evaluation by producing more accurate measurements. Mixed-method evaluation programs are also more likely to reflect the needs of program participants and stakeholders by looking at things from a variety of perspectives, which is likely to produce better evaluation designs and more targeted recommendations.
One of the main challenges I encounter when establishing an evaluation program is that people in the organization often feel they don’t have the time or the resources to devote to evaluation. They are too busy delivering succession planning and leadership development programs to reflect on whether what they’re doing is working. So, I recommend enlisting the help of program participants. Participative processes, such as empowerment evaluation, increase the likelihood that evaluation will happen, because users who are actively involved are more likely to understand the process and feel ownership. Furthermore, you can kill two birds with one stone by achieving program outcomes while facilitating data collection and analysis.
Creating flexibility in the evaluation process can also help to increase participation. For instance, I often work with organizations to develop a small “menu” from which users can select indicators for evaluation. This allows stakeholders to establish measurements that reflect their concerns, whereas an exhaustive list of indicators may be perceived as cumbersome and unrealistic in terms of data collection. An evaluation process that’s both flexible and participative will help to accommodate the many different contexts, goals, and outcomes within the organization, and facilitate learning.
Stakeholders must be engaged in the monitoring and evaluation process from the beginning and throughout the life of the program to ensure indicators measure what is important to the organization, rather than focusing only on what is easily measured. Without clear, timely, accurate, and visible indicators, stakeholders will struggle to work toward the program’s goals, because they won’t have a clear understanding of the impact that activities, outputs, and outcomes are having on building a leadership pipeline. Active participation ensures that assessment is rooted in the direct experiences of the organization and grounded in the organization’s vision, values, goals, and objectives.
Be sure to check out our other Succession Planning blog posts in this series: