Rating your learning interventions

In a course I designed for patient safety leaders, I ask learners to design and then rate a patient safety improvement plan. They reflect on the overall grade of their design, using a Likert scale to assess the following criteria.

Is the plan…

  • Evidence-based?
  • Advantageous?
  • Simple?
  • Compatible with existing workflows?
  • Trialable (i.e., is it easy to test and pilot)?
  • Observable (i.e., are there clear ways to measure success)?

where 1=not at all and 5=hits the mark completely.

The same criteria can be used to reflect on the design of a learning intervention. I use the word “intervention” here rather than “experience” to reflect the larger context in which any learning experience is situated. An intervention can include multiple learning experiences, both formal and informal.

When looking at an overall intervention, it can be useful to step through the following questions as you consider its component learning experiences.

  • Evidence-based: Does the design rely on solid learning theories and methods?
  • Advantageous: Does taking part in the learning experience(s) and transferring the lessons of the experience(s) to the workplace meet the WIIFM or “What’s in it for me?” criterion for the learner?
  • Simple: If online, is the experience easy to access and navigate? If a facilitated workshop, are activities prefaced by clear instructions? Whether online or facilitated, does the design consider cognitive load and the limits of human memory?
  • Compatible with existing workflows: Are we ultimately asking learners to perform in ways that are contradictory to their current workflows? If yes, what organizational supports exist to create new or modified workflows? Is the learning experience itself compatible with existing workflows? For example, does it make sense to include audio in an online learning experience if clinical staff are accessing the course during “down time” in open spaces and rarely carry headphones? Do we really believe workers can and/or will reach for that performance support tool at time of need in a particular context?
  • Trialable: Have we actually developed a project timeline that creates space for piloting and then responding to pilot testers’ feedback?
  • Observable: Finally, have we defined at the outset what success looks like and do we have the organizational buy-in to conduct meaningful workplace evaluations beyond smile sheets? What have we done to try to get this buy-in?

In this exercise, the point is not to worry about any sort of overall “grade”; that’s just a way to synthesize ideas. The real objective of the exercise is to proactively anticipate and mitigate barriers. What might be done to improve? To increase our chances for success? Are we truly framing this as an intervention, or just viewing what we’ve designed as a “one-and-done” event?

If we are true organizational partners, we are accountable to our learners just as we expect our learners to be accountable to the organization.