Implementation Risk – is that a thing?

Introduction to this series:

This series of posts frames its analysis through the unusual lens of risk, rather than the conventional provider lens of potential and promises. The series is intended for a school district readership. Your actual use and results happen in your real world, which can differ considerably from the ‘lab’ conditions under which programs are studied and published. No matter a program’s ‘ideal’ potential, if there are significant risks to actually achieving that potential for your students and teachers, it’s important to become familiar with those risks and, if need be, with the risk tradeoffs and mitigations required to achieve results.

Related Posts:

  • Lens: Why the lens of “risk”?
  • Info: How risky are different sources of program information?
  • Studies: How risky is replicating results?

This third post considers “implementation risk”. Perhaps you haven’t heard that term much; in a market where a green check on a study-rigor list signals “low risk”, implementation sits deep in the shadows. Yet it’s obvious that no program – of any nature, not just edtech – can work with poor implementation, or even with average implementation that is insufficiently broad and regular. Implementation is thus the prima facie foundation for any program’s promised results, yet it is also commonly observed to be the “weakest link” in achieving them. Implementation risk must come out of the shadows of impact-study rigor.

I’m breaking “implementation risk” into three chronological phases: licensing, teacher adoption and use, and student use.

The Three Phases of Implementing a Program: how much risk?

Phase 1: Get program licenses assigned: negligible risk

This is the administrative process of obtaining access to the program. A purchase order has a 100% chance of obtaining licenses to use, and modern auto-rostering gets those licenses assigned in a few mouse clicks. All the adults in both parties, the vendor and the school, are fully aligned on accomplishing rostering. What happens after those clicks is where the variation begins.

Phase 2: Teachers Using the program: high risk

Getting results depends, of course, on using the program. Usage isn’t controlled by the computers and their databases; it is controlled by the humans, and as such it is highly variable. High variation equates to a “risk” of low use. Usage amount, breadth, and regularity carry considerable risk for achieving results.

The “amount” of use – how many minutes or lessons, per school or per student – seems like the only K.I.I. (key implementation indicator) you would need. But “amount” desperately needs unpacking. What portion of the available grade-level content was used? What portion of each student’s assigned content? Amount also needs to be disaggregated from a single overall average of all students’ use. What was the usage breadth across the entire student body? Did observed use vary materially from plans across schools, across grade levels and classrooms, across student subgroups?
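For districts with access to raw usage exports, the disaggregation above can be sketched in a few lines of analysis code. This is an illustrative sketch only: the record layout, the subgroup labels, and the 30-minute weekly target are hypothetical, not taken from any particular program.

```python
# Sketch: unpacking one overall "amount" average into disaggregated
# indicators. All data, labels, and thresholds here are hypothetical.
from statistics import mean
from collections import defaultdict

# (student_id, subgroup, weekly_minutes) -- a stand-in for a usage export
records = [
    ("s1", "school_A", 45),
    ("s2", "school_A", 50),
    ("s3", "school_B", 5),
    ("s4", "school_B", 0),
]

TARGET_MINUTES = 30  # hypothetical planned weekly dosage

# A single overall average looks adequate...
overall = mean(m for _, _, m in records)  # 25.0

# ...but disaggregating by subgroup exposes the variation,
group_minutes = defaultdict(list)
for _, group, minutes in records:
    group_minutes[group].append(minutes)
group_means = {g: mean(ms) for g, ms in group_minutes.items()}

# ...and "breadth" measures what share of students met the planned dosage.
breadth = sum(m >= TARGET_MINUTES for _, _, m in records) / len(records)

print(overall)      # 25.0
print(group_means)  # {'school_A': 47.5, 'school_B': 2.5}
print(breadth)      # 0.5
```

Here the overall average (25 minutes) masks that one school is well above target while the other barely uses the program at all, and that only half the students reach the planned dosage.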

Below are facets of teacher-driven implementation that vary and thus can add to risk of achieving results.

Teacher use risk: deciding to adopt
Resistance to adoption of new edtech by teachers is to be expected. Some challenging factors:

  • Change is disruptive and has costs. Multiple other aspects of the classroom ecosystem also change in any given year. There is such a thing as “too much” change and thus risk.
  • Technology-infused change carries a range of acceptance considerations for teachers ranging from screen time to device use challenges.
  • Precedent: long-time teachers have lived experience with “the new product”, and that experience may not have been very positive; it may not have implanted a default expectation of “new product success”. The prior “new program” may not have been adequately introduced, justified, or explained to teachers. There may not have been an appropriate amount or type of training or follow-up. Perhaps the program was started but wasn’t given enough time to take hold and deliver its results. Even if it did deliver, it might not have been sustained by decision-makers, leading to disappointment and skepticism about future initiatives.
  • New content may bring new pedagogy and new teaching aims and activities. Teachers may well be uncomfortable with, or disagree with, a new program’s “approach” and its alignment with their own educational knowledge and skills, beliefs, and values.

Classroom time risk: minutes from where?
Classroom minutes are a harsh zero-sum tradeoff each week. How much time per week is required to reach what level of the new program’s results? Is this more or less time per week than its predecessor product required? What’s the time threshold below which one shouldn’t expect outcomes? That last question is the big risk. Is there an empirically derived benchmark – what proportion of the program’s current user schools have managed to reach what amount of use? And beyond an “overall” value: are there time challenges for particular student types?

Teacher training risk:
Teacher training can vary greatly compared to a program’s ideal model. Variation in training activities could include:

  • Is there an effective program overview kickoff for teachers – purpose and applications, goals and strategies?
  • Is the selection of types and amounts of training appropriate to lay the foundation for a healthy and productive implementation?
  • Does that foundation require “too much” training compared to pragmatic training budgets?
  • How challenging is the training to access, absorb, and use?
  • How effective are the provider’s delivered training sessions in accomplishing their goals?

Teacher use risk: Difficulty of standard product use
After training, what’s the risk that many teachers still struggle to successfully incorporate the program? Here are a few factors post-training:

  • Inadequate support for making ongoing time tradeoffs
  • A too-complex on-screen teacher interface for required activities
  • Too many individual student interventions are required

Phase 3: Students using the program

Finally: How will the students themselves “adopt” the program? Will “all” students be able to, and choose to, effortfully and successfully engage with it? If there are substantial unresolved challenges in student adoption, obtaining results will be like “pushing a rope”. On the other hand, if the program successfully engages learners – motivating them to “login” regularly, invest focused time, and progress through the assigned content sequence, especially when that content is carefully aligned with learning value-add – then the desired results from consistent student usage will follow.

Student content access risk:
Programs vary greatly in their presentation of content: readings, illustrations and figures, videos, games, problems and problem sets, walk-throughs, quizzes, objects, journeys. Some of these presentation types may show up as hurdles for some students to overcome, so programs may differentiate to reduce those hurdles. For example, for students who are multiple grade levels behind, programs may be designed to assign below-grade-level content to enable access. There may be translations into other languages. There may be various hurdles, and corresponding accommodations, for students with special needs. Especially in edtech, with its tremendous variation in approaches and learning engineering, it’s necessary to look through the lens of risk at student access considerations to ensure equity of learning opportunity.

Content sequence progress risk:
Edtech programs present many content objects in a sequence, and the student progresses object after object through that sequence. The progression might be passive – allowing advancement through “seat time” without an “exit ticket” to check for an output or outcome. Paging through a digital book or a series of articles, for example, is passive through this lens. More likely in edtech, though, progression will require some sort of student interaction with the content, and some sort of gate before moving ahead. Applying the risk lens to this progression: are there types of learners who will be blocked from attaining some “exit tickets”? The risk here isn’t struggle per se; it’s too much unproductive struggle that ultimately fails. It’s important to evaluate the likely risk and amount of unproductive struggle, and to find out how the program addresses it. Will teacher intervention be recommended or required? How often does that occur, and how successful are those interventions?

Progress vs. learning:
Is making progress identical to adding value to student learning? Again, it’s possible that seat time alone could eventually lead to progress – for example, watching videos, or problem tips that escalate until they effectively solve the problem for the student, who is then moved on. Even with digital interactions and gating before progress, could the learner’s experience in earning that progress be adversely affected by intangibles along the way – confusion, boredom, or frustration that ends up counterproductive to the ultimate educational value-add for the learner?

Content applicability:
A program’s content objects are aligned to specific content standards. Which objects are completed by which students matters to the product’s learning impact. An edtech program may assign different content objects to individual students. Through the lens of risk: is a program’s individual student content assignment appropriate? To what extent are students “supplemented” with content that doesn’t add significant learning value – for example, extra practice on procedures where they are already fully fluent? Conversely, to what extent is there content that some students could benefit from materially but never get assigned at all? How exactly does the program assign content? What’s the program’s theory of value-add for different types of content, and for different specific standards-aligned content, for different types of students?

This post is intended to prompt questions you can ask providers or researchers to unpack what it takes for a specific program to be effective. In exercising this lens of risk, and your questioning process, for any one program, you will become conversant with the various types of risk. The researchers’ and providers’ responses will reveal how well the program has been designed via research, logic-modelled, and studied. You will learn risk magnitudes, risk tradeoffs, and risk mitigations. And you will learn more deeply how the program must be applied in order to achieve the real results you are seeking.

Next Post:

Studies: How risky is replicating results?