Having a data-driven school has been all the rage for a while now, because when you express your ideas, thoughts, and biases in numbers, they qualify as "facts," whereas judgment expressed in words obviously lacks data-rich factiness, and so should be ignored. Yes, the fact that I am 100% an English teacher may make me about 62% bitter about the implied valuing of numbers over words; I'd say I'm at about 7 on the 11-point Bitterness Scale, and that's a fact.
Pretty sure the rest of the vehicle is around here somewhere.
Being data-driven (which usually means test-result-driven) is a bad idea for several reasons.
Data vs. Standards
Mind you, I am not and never was a fan of nationalized standards like the Common Core [Insert New Name Here] Standards. But at some point lots of folks quietly switched from standards-aligned to data-driven curriculum management, and that matters a great deal. Almost an 8 on the 10-point Great Deal scale.
It matters because tests ignore many of the standards, starting with non-starters like the speaking and listening standards. No standardized test will address the cooperative standards, nor can writing or research be measured in any meaningful way on a standardized multiple-choice test. And no-- critical thinking cannot be measured on a standardized test any more than creativity can be measured by a multiple-choice question.
In other words, the moment we switch from standards-aligned to data-driven, we significantly and dramatically narrow the curriculum to a handful of math and reading standards that can be most easily addressed with a narrow standardized test. The Curriculum Breadth Index moves from a 20 down to a 3.
Remember GIGO
Because the instrument we use for gathering our data is a single standardized test that, in many states, carries no significant stakes for the students, we are essentially trying to gather jello with a pitchfork.
The very first hurdle we have to clear is that students mostly don't care how they do on the test. In some cases, states have tried to clear that hurdle by installing moronically disproportionate stakes, such as the states where third graders who are A students can still find themselves failing for the year because of a single test. But if you imagine my juniors approach the Big Standardized Test thinking, "Golly, I must try to do my very best because researchers and policy makers are really depending on this data to make informed decisions, and my own school district really needs me to do my very, very bestest work so that the data will help the school leaders"-- well, if that's what you imagine, then you must rank around 98% on the Never Met An Actual Human Teenager scale.
That's before we even address the question of whether the test does a good job of measuring what it claims to measure-- and there is no reason to believe that it does. Of course, it's "unethical" for teachers to actually look at the test, but apparently many of my colleagues and I are ethically impaired, so we've peeked. As it turns out, many of the questions are junk. I would talk about some specific examples, but the last time other bloggers and I tried that, we got cranky letters and our hosting platforms put our posts in Time Out. Seriously. I have a post that discusses specific PARCC questions in fairly general ways, but Blogger took it down. So you will simply have to accept my word when I say that, in my professional opinion, BS Test questions are about 65% bunk.
For a testing instrument to gather good data, the questions have to be valid, reliable, and accurate measures of the skill or knowledge area they purport to measure. Then the students have to make a sincere, honest, full-effort attempt to do their best.
The tests being used to generate data fail both measures. Letting this data drive your school is like letting your very drunk uncle drive your car.
Inside the Black Box
When I collect my own data to drive my own instruction, I create an instrument based on what I've been teaching, I give it to students, and I look at the results. I look for patterns, like many students flubbing the same task, and then I look at the question or task itself so I can figure out exactly what they don't get.
The BS Test is backwards. First, it was designed with no knowledge of or attention to what I taught. So what is required here is not testing what we teach, but teaching to the test.
Except that we all know that teaching to the standardized test is Bad and Wrong, so we have to pretend not to do that. On top of that, we have installed a system that puts the proprietary rights and fiscal interests of test manufacturers ahead of the educational needs of our students, with the end result that teachers are not allowed to look at the test.
So to be data-driven, we must first be data-inventors, trying to figure out exactly where our students went wrong on the BS Test. We may eventually be given result breakdowns that tell us a student got Score X on Some Number of Questions that were collectively meant to assess This Batch of standards. But as far as a neat, simple "here's the list and text of the questions this student answered incorrectly"-- no such animal exists. This is particularly frustrating in the case of a multiple-choice test, since to really track where our students are going wrong, we need to see the wrong answers they selected, which are our only clues to the hitch in their thinking about the standard. In short, we have 32% of the actual information needed to inform instruction.
We are supposed to teach to the test with our eyes blindfolded and our fingers duct-taped together.
Put Them All Together
Consider all of these factors, and I have to conclude that data-driven instruction is a snare and a delusion. Or, rather, 87% snare and 92% delusion, with a score of 8 on the ten-point Not Really Helping scale. And I think the weeds measure about 6'7".