Wednesday, November 1, 2023

Can We Trust "Evidence-Based"?

We love to talk about "evidence-based" practices in education. We've even enshrined the idea in federal education law--the Elementary and Secondary Education Act, as currently amended by the Every Student Succeeds Act (ESSA). The idea is that education is supposed to use techniques, materials, etc., that are evidence-based and not just whims-based or best-guess-based.

But what does "evidence-based" actually mean? Not nearly as much as you probably think it means.

There are four tiers of evidence--four flavors of proof that something works. And they aren't all necessarily all that evidency.

Tier 1: This is "strong" evidence. It requires "studies that have had a positive, statistically significant impact on student outcomes without any negative findings from well-designed, well-implemented experimental or quasi-experimental studies examining the same interventions and outcomes." In short, it's what most think of as "actual evidence." 

Tier 2: "Moderate evidence." One federal definition of this is--well, it's exactly the same as Tier 1. The What Works Clearinghouse site (the federal internet spot that is supposed to be a collection of effective, "evidence-based" materials for education use) distinguishes it by saying that the research might come "with reservations," meaning there may be problems with the studies being used to back it up (not well implemented, questions about subject selection, or just generally "issues that require caution"). What a layperson might call "shaky" or "questionable" evidence.

Tier 3: "Promising evidence." Instead of a statistically significant positive finding, we'll settle for some correlation with controls for selection bias. There's no requirement for minimum subjects or a particular setting. So, what a layperson would call "hardly any actual evidence at all, but if you squint hard you can make something out of this."

Tier 4: "Demonstrates a rationale." This one doesn't come up as often, probably because it boils down to "We have a good idea for a practice, and our idea makes sense, and we did a tiny little study that seemed to get a tiny positive effect, but mostly we're going to have to create another study to really test this stuff." 

All of them require the absence of negative findings from other "high-quality causal studies," which means, I guess, that studies from tiers 2, 3 and 4 can just kind of duke it out amongst themselves. 

These distinctions are worth making. But I worry that entirely too many non-academic-research laypeople, including classroom teachers, hear the term "evidence-based practices" and think, "Oh, there's proof that this practice works," when that's not necessarily true at all. Evidence-based is not the same as proven effective, and teachers should not throw out the evidence of their own eyeballs and experience because a practice has been declared evidence-based.

1 comment:

  1. Does everything in education require evidence-based reports? What if there are no such research reports? Can't educators think without research reports?

    In 2010, when I first disagreed with the theory--one that had existed for more than 35 years--that phonological awareness deficit is the cause of dyslexia, many so-called educators asked me for evidence. There were no research reports to support my finding.

    Many researchers did not agree with me, because they had long said that phonological awareness deficit was the cause of dyslexia. This was despite my giving them logical explanations based on my students who were 'dyslexic' in English but not in Malay and Hanyu Pinyin.