Sunday, September 14, 2014

Education Next Plugs Research Proving Not Much of Anything

This week Education Next ran an article entitled "The First Hard Evidence on Virtual Education." It turns out that the only word in that title which comes close to being accurate is "first" (more about that shortly). What actually runs in the article is a remarkable stretch by anybody's standards.

The study is a "working paper" by Guido Schwerdt of the University of Konstanz (it's German, and legit) and Matt Chingos of Brookings (motto: "Just Because We're Economists, That Doesn't Mean We Can't Act Like Education Experts"). It looks at students in the Florida Virtual School, the largest cyber-school system in Florida (how it got to be that way, and whether or not it's good, is a question for another day, because it has nothing to do with the matter at hand). What we're really interested in here is how far we can lower the bar for what deserves to be reported.

The researchers report two findings. The first is that when students can take online AP courses that aren't offered at their brick-and-mortar schools, some of them will do so. I know. Quelle surprise! But wait-- we can lower the bar further!

Second finding? The researchers checked out English and Algebra I test scores for the cyber-schoolers and determined that their tenth-grade results in those subjects were about the same as those of brick-and-mortar students. Author Martin West adds "or perhaps a bit better," but come on-- if you could say "better," you would have. This is just damning with faint praise-by-weasel-words.

West also characterizes this finding as "the first credible evidence on the effects of online courses on student achievement in K-12 schools," and you know what? It's not. First, you're talking about testing a thin slice of tenth graders. Second, and more hugely, the study did not look at student achievement. It looked at student standardized test scores in two subjects.

I know I've said this before. I'm going to keep saying this just as often as reformsters keep trying to peddle the false assertion used to launch a thousand reformy dinghies.

"Standardized test scores" are not the same thing as "student achievement."

"Standardized test scores" are not the same thing as "student achievement."

When you write "the mugwump program clearly increases student achievement" when you mean "the mugwump program raised some test scores in year X," you are deliberately obscuring the truth. When you write "teachers should be judged by their ability to improve student achievement" when you mean "teachers should be judged by students' standardized test scores," you are saying something that is at best disingenuous, and perhaps a bit of a flat out lie.

But wait-- there's less. In fact, there's so much less that even West has to admit it, though he shares that only with diligent readers who stick around to the next-to-last paragraph.

The study is based on data from 2008-2009. Yes, I typed that correctly. West acknowledges that there may be a bit of an "early adopter syndrome" in play here, and that things might have changed a tad over the past five years, so that the conditions under which this perhaps-a-bit-useless data was generated are completely unlike those currently in play. (Quick-- what operating system were you using in 2008? And what did your smartphone look like?)

Could we possibly reveal this research to be less useful? Why, yes-- yes, we could. In the last sentence of that penultimate graf, West admits "And, of course, the study is also not a randomized experiment, the gold standard in education research." By "gold standard," of course, we mean "valid in any meaningful way."

So there you have it. Education Next has rocked the world with an account of research on six-year-old data that, if it proves anything at all, proves that you can do passable test prep on a computer. And that is how we lower the bar all the way to the floor.
