
Thursday, August 14, 2014

A Big Problem with Ed Research

I've always taken a skeptical view of education research. I was in college in the seventies, and I have memories of repeatedly discovering that the Gortshwingle study of How Students Learn was actually a study of how twenty male college sophomores at a small Midwestern college performed a particular task. Educational research seemed to suffer from an experimental-subjects-of-opportunity problem. And like much research involving human beings and the psychological and intellectual intangibles that drive them, educational research also seemed prone to bias. "I was completely surprised by what the data revealed" seemed not to come up very often. On top of that, designing an experiment that really captures life as it happens in an actual classroom (or, in some cases, on planet earth) is next to impossible. Put it all together and I've always found plenty of reasons to view educational research with a very critical eye.

A recently released study suggests that educational research has another huge problem. In "Facts Are More Important Than Novelty: Replication in the Education Sciences," Matthew C. Makel (Duke University) and Jonathan A. Plucker (University of Connecticut) argue that there is a gaping hole in educational research through which one could drive a fleet of school buses.*

The authors open with a Carl Sagan quote, and then get straight to the central problem:

The desire to differentiate "truth from nonsense" has been a constant struggle within science, and the education sciences are no exception.

Makel and Plucker show us the newly raised stakes-- the US DOE's Institute of Education Sciences (IES) has been set up as a central clearinghouse for "real" scientific education research, disseminated through avenues such as the What Works Clearinghouse and the Doing What Works website. But is that research truth or nonsense?

Makel and Plucker walk us through the various ways in which the Randomized Controlled Trials and Meta-Analyses that make up much of this research can be less-than-solid. Bias, bad design, dumb ideas, poor execution, stupid bosses-- they have a whole list of sourced Ways Things Can Go Wrong in research. The authors are working us around to the major manner in which Real Science corrects for those problems.

Replication.

Since the first primitive lab assistant said, "Whoa, that was cool! Can you make it do that again? Can I make it do that again?" or the first proto-scientist said, "Hey, take a look at this and tell me what you see," the backbone of science has been replicable results. Turns out that educational science is more of a spineless jellyfish.

The authors pored over the complete publication histories of the current top 100 education journals, ranked by five-year impact factor, to see how often replication had actually happened (they explain their technique in the paper; feel free to check it out yourself). The results were not stunning.

The present study analyzed the complete publication history of the current top 100 education journals ranked by 5-year impact factor and found that only 0.13% of education articles were replications. 

It gets worse. The majority of replications did in fact confirm the original research. Well, at least, they did if the replication involved one or more of the researchers who did the original work. If the replication was done by actual third parties who had no stake in proving the original research correct, successful replication was "significantly less likely." The success rate for the original authors in the original journal was 87%. For completely different authors in a new journal, the success rate was 54%.

Not that I'd pay too much attention to that portion, because the sampling is small. The authors looked at a total of 164,589 articles published in the journals. Of those, 461 claimed to be replications, but the authors determined that only 221 actually were.
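Incidentally, that's where the paper's 0.13% figure comes from: 221 verified replications out of 164,589 articles. Here's a quick back-of-the-envelope check (a few lines of Python, just to show the work; the variable names are mine, not the authors'):

    # Figures reported by Makel and Plucker
    total_articles = 164589  # articles in the top 100 education journals
    replications = 221       # articles verified as actual replications

    print(f"{replications / total_articles:.2%}")  # prints 0.13%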

So what does this mean? It means that very likely a great deal of what's passed off as research-based knowledge is information that has never been checked, the result of just one piece of research. Imagine if you were seriously ill and your doctor said, "Well, there's this one treatment that only one guy did only this one time, and he thought it turned out well." Would you consider that a hopworthy bandwagon?

The authors maintain a scientific tone as they say "Well, we guess the good news is, hey-- lots of room for improvement." There are lots of ways to address "the rampant problem of underpowered studies in the social sciences that allow large, but imprecise, effect sizes to be reported." So this is Not Good, but it is also Not That Hard To Fix.

In the meantime, when confronted with education research, remember to ask a few simple questions. In addition to my own personal favorites ("If this involved studying live humans, what live humans were used? What was the research design?") we should also add "Has anyone ever replicated this research, and can we get a look at that, please?"

In short, just because someone flings the words "science" and "research" at you, don't assume that you're about to be hit with The Truth.


*Hat tip to Joy Resmovits from HuffPo for pulling this obscure little piece of wonkery into the cold light of twitter.

2 comments:

  1. It's certainly not scientific if you don't replicate the studies. I was not aware that educational research is routinely done so unscientifically. I know from proofreading my daughter's graduate papers on scientific studies - she's a physical therapist - how important it is not to draw conclusions beyond the sample group you work with.

     When I was in college in the early 70's, I found Jerome Bruner's work in my ed psych course very helpful. My other daughter is a social studies teacher and she has also found Bruner's work helpful; it seems his work is still valid - or maybe it just means there haven't been enough other theorists. It seems like the education departments are trying to do a marginally better job than when I was in college, but she still had to completely reorganize the information from her methods class for it to make sense, because it was presented in a very random, disorganized way.

     Except for the ed psych class, all my other education classes were the most useless classes I had, taught by the worst teachers. My field is foreign language - French and Spanish - and the only other helpful class I had was called Spanish Linguistics for Teachers (which was not in the ed dept), where we analyzed and compared Spanish and English to figure out what problems the students would have, and where and why, and how to address them. I was able to apply the skills I learned there to teaching French and E.S.L. as well.

     I helped my son study for his cognitive psychology course, and I found it fascinating, especially the chapter on learning theory. I've had some drive-by PD sessions on cooperative learning or on random memory things like how many words the average person can learn at a time and memory "chunking," but again, very random. In the 80's something called TPR became popular for teaching foreign language, but I got absolutely nothing out of a day session on it; I only understood it once I read the book by the person who developed it, and then I understood the theory well enough to apply it. I had a very good workshop on multiple intelligences and learning preferences that was helpful. And I think you mentioned a neuroscience study in one of your recent articles.

     It just seems like someone needs to separate the wheat from the chaff in these studies, or maybe cognitive science is better than "educational" research. And we have to watch people who cite things out of context or misrepresent studies: that awful, awful article by Joanne Lipman, written to shill for her book by jumping on the reformster bandwagon about "grit," cited Anders Ericsson. I googled him, and the way she quoted him out of context actually contradicted the main thrust of his research.

     I don't know. Teaching will always be an art (though if you say that, people will think it's a matter of aesthetics and opinion, instead of "skill in performance acquired by experience, study, or observation" or "a special ready capacity that is hard to analyze or teach"). But the more of a science we can make it, the better we could refute stupid ideas and fads, and the faster young teachers would become good.

  2. If the science behind ed research is lacking, what can be said of the underpinnings of all the pedagogy training that makes up the advantage people with a four-year ed degree have over the TFA kids? I feel that the way teaching appears to be taught is lacking in practical application, leading to new teachers unprepared for the rigors they will face in their first year.
