Tuesday, November 24, 2020

The False God Data Fails Again

The dream of the Cult of Data is that for any issue, we simply design an instrument for collecting the data, analyze the Data, and select a solution suggested by the Data. Phrases like "data driven" or "data informed" are used to express the assumption that decisions backed by Data are inherently smarter, better, stronger, and wiser than Those Other Kinds of decisions. 

But what if they aren't? What if Data doesn't actually solve anything?

I've made this point before while writing about the NAEP, the gold standard, America's report card, the test that is supposed to give us data that is clear and clean and objective and allows us to make wise decisions. Except that it doesn't. The Data come out, the arguments follow, and the hard data from the NAEP test settles exactly nothing.

Now here we go again. 

Matt Barnum, my personal favorite Chalkbeat reporter, reported on a recent gathering of education experts who tackled a simple question: are the gaps in test scores between the wealthy and the not-so-wealthy closing? Seems straightforward, and yet...

No one could answer the question. Or, more precisely, no one could agree on the answer. One researcher claimed the gap was growing, another said it was shrinking, and a third argued that it hadn’t changed much in decades.

It depends. It depends on which Data you look at, which data you trust, which data represents what you think it represents. And as Barnum points out, this is a particularly remarkable question to be stumped by, because the test score gap (aka "the achievement gap") has been on the research radar for decades. Guys like Eric Hanushek and Tom Kane (both part of this confab) have made careers out of playing with this question. By this point, they ought to be pretty good at capturing, collecting, and crunching these particular Data.

And yet, they aren't. There is still no agreement about what the data shows about the test score gap. Nor do we have any data showing that closing the test score gap would have real life-improving effects for those on the bottom side of the gap.

Let me repeat-- we have a whole lot of Data, especially testing Data, and yet we still don't know the answer to that question. Data has not solved the problem, or even clarified exactly what the problem looks like. 

Mind you, the term "data" as used by members of the Data Clan has a specific meaning--standardized numerical items collected through "objective" means. So even though a classroom teacher collects tens of thousands of data points every day, those don't count because they aren't Data. It is only magical Data that can save us, can provide clarity and certainty about what is happening. Except that it's not working. Just as it didn't work for things like data-driven staffing decisions, where we would just use VAM-soaked data to distinguish the good teachers from the bad.

There's much to discuss here, but I want to keep my point short and clear--

Keep in mind how the False God Data keeps failing the next time somebody insists that we must test students this year, maybe even right now, because testing will get us the Data we need to know how students are doing, what learning they've "lost," and where the new gaps are. It's not going to work. It's going to waste time and money, and in the end all we'll have are some experts staring at strings of numbers and shrugging their shoulders.


2 comments:

  1. Test score data driven instruction was destined to fail because even an item analysis cannot ever tell a teacher WHY a student got an item wrong, making it impossible to improve instruction. The list of possible reasons is nearly endless, and "ineffective" teaching is the last one on it. More likely culprits: absenteeism, not paying attention, apathy, lack of motivation, a bad standard, or a bad test item. Nor will it ever explain why the same 25 students sitting in the same class with the same teacher and the exact same instructional experience can produce test scores that run the gamut.

    1. It always amazed me that edu-meddlers ignored the traits of successful test takers. We have reams of data that tell us what students (not teachers) need to do to improve achievement, be it test scores or GPA. The silver bullet was never cast of standards, curricula, pedagogy or software.