But what does "evidence-based" actually mean? Not nearly as much as you probably think it means.
There are four tiers of evidence, four flavors of proof that something works. And they aren't all necessarily all that evidency.
Tier 1: This is "strong" evidence. It requires "studies that have had a positive, statistically significant impact on student outcomes without any negative findings from well-designed, well-implemented experimental or quasi-experimental studies examining the same interventions and outcomes." In short, it's what most people think of as "actual evidence."
Tier 2: "Moderate evidence." One federal definition of this is--well, it's exactly the same as Tier 1. The What Works Clearinghouse site (the federal internet spot that is supposed to be a collection of effective and "evidence-based" materials for education use) distinguishes it by saying that the research study might come "with reservations," meaning there might be some issues with the studies being used to back it up (not well implemented, questions about subject selection, just generally "issues that require caution"). It's what a layperson might call "shaky" or "questionable" evidence.
Tier 3: "Promising evidence." Instead of a statistically significant positive finding, we'll settle for some correlation with controls for selection bias. There's no requirement for minimum subjects or a particular setting. So, what a layperson would call "hardly any actual evidence at all, but if you squint hard you can make something out of this."
Tier 4: "Demonstrates a rationale." This one doesn't come up as often, probably because it boils down to "We have a good idea for a practice, and our idea makes sense, and we did a tiny little study that seemed to get a tiny positive effect, but mostly we're going to have to create another study to really test this stuff."
All of them require the absence of contrary evidence from other "high-quality causal studies," which means, I guess, that studies from tiers 2, 3, and 4 can just kind of duke it out amongst themselves.
These distinctions are worth making. But I worry that entirely too many non-academic-research laypeople, including classroom teachers, hear the term "evidence-based practices" and think, "Oh, there's proof that this practice works," when that's not necessarily true at all. Evidence-based is not the same as proven effective, and teachers should not throw out the evidence of their own eyeballs and experience because a practice has been declared evidence-based.