Should a risk assessment be part of a selection decision?
Education is known for its risk-averse nature, and rightfully so. The importance of achieving learning outcomes for all students can’t be overstated. When investing taxpayer resources in public educational programs, it is vital to ensure they deliver outcomes at scale. Knowingly putting outcomes at risk is a non-starter. Educators therefore tend to stick to what is familiar and well known: the results turn out to be what you expect (that is, essentially the same as last year’s). Education technology (edtech) has famously promised significantly better results than the status quo. But now that blended learning has become the norm, the status quo already includes an edtech product, so any new edtech product’s promise must become “significantly better results than your incumbent edtech.”
In the edtech market, however, my observation is that the educators who source programs act as though unaware of many significant risks. The market treats edtech programs – in my case, math software for K-8 – as commodities to choose among based on checkmarks, familiarity of approach, popular or hyped features, or lowest price. As an edtech program provider’s chief data scientist, I can tell you from over 20 years of experience: buyers and replacers don’t fully assess risk. I have never once been proactively contacted with a due-diligence inquiry about the factors below that relate to a program’s chance of success. The provider’s high-level marketing and sales pitch are accepted as sufficient “provider’s information.” This lack of depth in assessing risks doesn’t stop at pre-purchase. During initial adoption and throughout a year of use, there are similarly no proactive requests for implementation statistics and insights, and none for learning indicators. Whatever is in the app’s dashboards and standard reports is accepted without deeper dives.
And the biggest blind spot of all appears when one program is swapped out for another – which, after a decade of ubiquitous blended learning, is now the typical case: adopting a new product means dropping an incumbent product. I see no indication that “replacers” evaluate which positives could be lost with the incumbent. Those positives can range from teacher acceptance, to implementation quality and breadth, to integration with the district’s systems, to equity of student engagement, and of course to the magnitude of the program’s contributions to outcomes, including high-stakes test scores. I have not yet seen anyone grasp this fact: whatever is lost when the old product is removed must either be replaced by the new product, fully or partially, or else abandoned. Must any replacement be equal or better in every single aspect? No, of course not; that would be impossible. But how many aspects are at least as good, and how many are not, is a crucial question that simply goes unasked – a simple tally, sketched below, makes the question concrete.
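To make that tally concrete, here is a minimal sketch of an aspect-by-aspect comparison between an incumbent and a candidate replacement. The aspect names and scores are hypothetical illustrations, not data from any real evaluation; a district would supply its own evidence-based ratings.

```python
# Hypothetical aspect-by-aspect comparison of an incumbent edtech product
# and a candidate replacement. Scores (0-10) are illustrative placeholders.
ASPECTS = {
    #  aspect name                   (incumbent, candidate)
    "teacher acceptance":            (8, 5),
    "implementation quality":        (7, 6),
    "district systems integration":  (9, 4),
    "equity of student engagement":  (6, 7),
    "contribution to outcomes":      (7, 7),
}

def tally(aspects):
    """Count aspects where the candidate is at least as good as the incumbent."""
    at_least_as_good = [a for a, (inc, cand) in aspects.items() if cand >= inc]
    worse = [a for a, (inc, cand) in aspects.items() if cand < inc]
    return at_least_as_good, worse

good, worse = tally(ASPECTS)
print(f"Candidate at least as good on {len(good)}/{len(ASPECTS)} aspects: {good}")
print(f"Candidate worse on {len(worse)} aspects (replace or abandon each?): {worse}")
```

Even this toy version forces the right question into the open: every aspect in the “worse” list must either be made up by the new product or consciously abandoned.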
This series of posts provides a starter guide to identifying, understanding, and minimizing the risks associated with adopting or replacing edtech programs. It covers various facets of the ecosystem, from provider marketing hype to implementation challenges to what to expect from efficacy research studies.
But the future is not a choice between blind risk and paralysis by analysis! The future can be very bright once this market’s incentives shift. In the final post I provide some upside guidance on research studies specifically (spoiler alert: studies are your main information source). It’s a short list of viable program evaluation strategies: how to run studies, what questions to ask researchers, how to minimize study-application risk, how to make better-informed decisions, and how to prepare a smart path to robustly achieving student learning outcomes.
So how do you evaluate program risk? What types of things are “at risk”? How risky are they? Let’s define risk for this context up front: risk is the chance that any given aspect of a program won’t turn out as well as the marketing implies.
Next post:
Info: How risky are different sources of program information?
Related Posts:
- Implementation: Implementation Risk – is that a thing?
- Studies: How risky is replicating results?