I've always taken a skeptical view of education research. I was in college in the seventies, and I have memories of repeatedly discovering that the Gortshwingle study of How Students Learn was actually a study of how twenty male college sophomores at a small Midwestern college performed a particular task. Educational research seemed to suffer from an experimental-subject-of-opportunity problem. And like much research involving human beings and the psychological and intellectual intangibles that drive them, educational research also seemed prone to bias. "I was completely surprised by what the data revealed" seemed not to come up very often. On top of that, designing an experiment that really captures life as it happens in an actual classroom (or, in some cases, on planet Earth) is devilishly hard. Put it all together and I've always found plenty of reasons to view educational research with a very critical eye.
A recently released study suggests that educational research has another huge problem. In "Facts Are More Important Than Novelty: Replication in the Educational Sciences," Matthew A. Makel (Duke University) and Jonathan A. Plucker (University of Connecticut) suggest that there is a gaping hole in educational research through which one could drive a fleet of school buses.*
The authors open with a Carl Sagan quote, and then get straight to the central problem:
The desire to differentiate "truth from nonsense" has been a constant struggle within science, and the education sciences are no exception.
Makel and Plucker show us the newly raised stakes-- the US DOE's Institute of Education Sciences (IES) has been set up as a central clearinghouse for "real" scientific education research, disseminated through avenues such as the What Works Clearinghouse and the Doing What Works website. But is that research truth or nonsense?
Makel and Plucker walk us through the various ways in which the Randomized Controlled Trials and Meta-Analyses that make up much of this research can be less than solid. Bias, bad design, dumb ideas, poor execution, stupid bosses-- they have a whole list of sourced Ways Things Can Go Wrong in research. The authors are working us around to the major way in which Real Science corrects for those problems.
Since the first primitive lab assistant said, "Whoa, that was cool! Can you make it do that again? Can I make it do that again?" or the first proto-scientist said, "Hey, take a look at this and tell me what you see," the backbone of science has been replicable results. Turns out that educational science is more of a spineless jellyfish.
The authors pored over the complete publication histories of the top 100 education journals, ranked by five-year impact factor, to see how often replication had actually happened (they explain their technique in the paper; feel free to check it out yourself). The results were not stunning.
The present study analyzed the complete publication history of the current top 100 education journals ranked by 5-year impact factor and found that only 0.13% of education articles were replications.
It gets worse. The majority of replications did in fact confirm the original research. Well, at least, they did if the replication involved one or more of the researchers who did the original work. If the replication was done by actual third parties who had no stake in proving the original research correct, successful replication was "significantly less likely." The success rate for the original authors in the original journal was 87%. For completely different authors in a new journal, the success rate was 54%.
Not that I'd pay too much attention to that portion, because the sample is small. The authors looked at a total of 164,589 articles published in the journals. Of those, 461 claimed to be replications, but the authors determined that only 221 actually were.
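And just to show that the 0.13% figure in the quote above isn't a typo, here's a quick back-of-the-envelope check (a sketch in Python; the variable names are mine, and it assumes the 221 confirmed replications are the numerator behind the study's percentage):

```python
# Back-of-the-envelope check of the replication rate reported in the study.
total_articles = 164_589  # all articles published in the top 100 education journals
replications = 221        # articles Makel and Plucker confirmed were actual replications

rate = replications / total_articles * 100
print(f"{rate:.2f}% of articles were replications")  # prints: 0.13% of articles were replications
```

That's roughly one replication for every 745 published articles.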
So what does this mean? It means that very likely a great deal of what's passed off as research-based knowledge is information that has never been checked, the result of just one piece of research. Imagine if you were seriously ill and your doctor said, "Well, there's this one treatment that one guy tried one time, and he thought it turned out well." Would you consider that a hopworthy bandwagon?
The authors maintain a scientific tone as they say, "Well, we guess the good news is, hey-- lots of room for improvement." There are lots of ways to address "the rampant problem of underpowered studies in the social sciences that allow large, but imprecise, effect sizes to be reported." So this is Not Good, but it is also Not That Hard To Fix.
In the meantime, when confronted with education research, remember to ask a few simple questions. In addition to my own personal favorites ("If this involved studying live humans, what live humans were used? What was the research design?"), we should also add "Has anyone ever replicated this research, and can we get a look at that, please?"
In short, just because someone flings the words "science" and "research" at you, don't assume that you're about to be hit with The Truth.
*Hat tip to Joy Resmovits from HuffPo for pulling this obscure little piece of wonkery into the cold light of Twitter.