Meaningful instructional design requires meaningful evaluation. However, evaluation, like organizational development itself, requires buy-in at many levels. This buy-in is necessary:
- To identify meaningful metrics
- To collect data
- To react to data, making appropriate improvements
- To undertake change management necessary for these improvements
If you’ve been in this field for any length of time, you’ve probably come across Kirkpatrick’s four levels of evaluation (reaction, learning, behavior/performance, and results). Kirkpatrick’s approach has come under fire for a number of reasons (e.g., its emphasis on training events, its implied linearity and causality, and more), and many articles have provided thoughtful critiques.
- From Harold Jarche: Training’s a mug’s game
- From Dan Pontefract: Dear Kirkpatrick’s: You still don’t get it
- From Jane Bozarth: Nuts and Bolts: How to evaluate e-learning
- From Donald Clark: Kirkpatrick 4-levels of evaluation: Happy sheets? Surely past its sell-by-date?
However, in this series of posts, I haven’t come to bury Kirkpatrick’s approach or to praise it. Instead, I’m going to discuss some alternatives.
One alternative: Kirkpatrick Plus
Articulated by Kaufman, Keller, and Watkins (“Kaufman”) (1995), this evaluation framework connects performance to expectations. Kaufman proposes five levels of evaluation:
Level 1: Resources and processes
Level 1 is actually divided into two levels, 1a and 1b.
- Level 1a focuses the evaluation lens on inputs, such as the availability and quality of materials needed to support a learning effort.
- Level 1b considers processes. What’s their quality? Are they efficient? Are learners satisfied with them?
Compared to Kirkpatrick’s Level 1 (Reaction), Kaufman’s Level 1 focuses not only on learner satisfaction, but on the organizational factors that can impact learner satisfaction.
Level 2: Acquisition
This level is focused on individual and small group payoffs—what Kaufman calls “micro” benefits. Are the objectives or desired outcomes of the learning intervention met? It’s pretty analogous to Kirkpatrick’s Level 2 evaluation (Learning), but Kaufman notes that the learning intervention may not necessarily be training.
Level 3: Application
This is still a micro analysis, examining individual and small group impacts. The relevant inquiry here is whether newly acquired knowledge and skills are being applied on the job. Level 3 also is quite similar to Kirkpatrick’s Level 3 (Behavior/Performance).
Level 4: Organizational payoffs
Here, the analysis examines macro benefits. What are the benefits from an organizational standpoint? Level 4 is analogous to Kirkpatrick’s Level 4 (Results).
Level 5: Societal contributions
Kaufman considers this a mega analysis. How is the organization contributing to its clients and society? Is it responsive to client/societal needs?
Issues of health, continued profits, pollution, safety, and well-being are central [in this level]. The basis for mega-level concerns is an ideal vision, which is a measurable statement of the kind of world required for the health, safety, and well-being of tomorrow’s children.
Level 5 has no analog in Kirkpatrick’s Evaluation Model.
A better model?
The “Kirkpatrick Plus” framework doesn’t stray that far from Kirkpatrick’s Evaluation Model, and so it can be subject to many of the same criticisms. Notably, while measuring organizational payoff is an important part of a meaningful evaluation, teasing apart the effects of a learning intervention from all the other variables that impact ROI is notoriously difficult. And if you think measuring organizational payoff is challenging, imagine how hard it is to measure societal impact. (This isn’t to say this evaluation aspiration isn’t a worthy one.)
I do think making the organization’s efforts part of the evaluation process (as in Kaufman’s Level 1) is an important step in the right direction. The organization’s commitment to success (e.g., by providing necessary resources, processes, and other supports) should be subject to as much scrutiny as the learner’s performance.
Still shopping for a better model? Stay tuned for the next post.
Kaufman, R., Keller, J., & Watkins, R. (1995). What works and what doesn’t: Evaluation beyond Kirkpatrick. Performance and Instruction, 35(2), 8–12. Retrieved from http://home.gwu.edu/~rwatkins/articles/whatwork.PDF