The sheer volume of purported purposes makes it appear that BS Tests are almost magical. And yet, when we start working our way down the list and look at each purpose by itself...
The notion that test results can be used to determine how much value a teacher added to an individual student (which is itself a creepy concept) has been debunked, disproven, and rejected by so many knowledgeable people it's hard to believe that anyone could still defend it. At this point, Arne Duncan would look wiser insisting that the earth is a giant flat disc on the back of a turtle. There's a whole argument to be had about what to do with teacher evaluations once we have them, but if we decide that we do want to evaluate teachers for whatever purpose, evaluations based on BS Tests do not even make the Top 100 list.
Inform Instruction: Micro Division
Can I use BS Tests to help me decide how to shape, direct, and fine-tune my classroom practices this year? Can I use the results from the BS Test given in March and sent back to us over the summer to better teach the students who won't be in my class by the time I can see their individual scores? Are you kidding me?
BS Tests are useless for tuning and tweaking the instruction of particular students in the same year. And we don't need a tool for that anyway, because that's what teachers do every single day. I do dozens of micro-assessments on a daily basis, formal and informal, to determine just where my students stand on whatever I'm teaching. The notion that a BS Test can help with this is just bizarre.
Inform Instruction: Macro Division
Okay, so will year-to-year testing allow a school to say, "We need to tweak our program in this direction"? The answer is yes, kind of. Many, many schools do this kind of study, and it boils down to coming together to say, "We've gotten as far as we can by actually teaching the subject matter. But test study shows that students are messing up this particular type of question, so we need more test prep--I mean, instructional focus--on answering these kinds of test questions."
But is giving every single student a BS Test every single year the best way to do this? Well, no. If we're just evaluating the program, a sampling would be sufficient. And as Catherine Gewertz pointed out at EdWeek, this is one of many test functions that could already be handled by NAEP.
Measuring Quality for Accountability
It seems reasonable to ask the question, "How well are our schools doing, really?" It also seems reasonable to ask, "How good is my marriage, really?" or "How well do I kiss, really?" But if you imagine a standardized test is going to tell you, you're ready to buy swampland in Florida.
Here's a great article that addresses the issue back in 1998, before it was so politically freighted. That's the more technical answer. The less technical answer is to ask: when people wonder about how good a school is, or ask about schools, or brag about schools, or complain about schools, how often is that directly related to BS Test results? When someone says, "I want to send my kids to a great school," does that have anything to do with how well their kid will be prepped to take a narrow bubble test?
BS Tests don't measure school quality.
Competition Among Schools
"If we don't give the BS Test," opine some advocates, "how will we be able to stack rank all the schools of this country." (I'm paraphrasing for them).
The most obvious question here is, why do we need to? What educational benefit do I get in my 11th grade English classroom out of knowing how my students compare to students in Iowa? In what parallel universe would we find me saying either, "Well, I wasn't actually going to try to teach you anything, but now that I see how well they're doing in Iowa, I'm going to actually try," or "Well, we were going to do some really cool stuff this week, but I don't want to get too far ahead of the Iowans"?
But even if I were to accept the value of competition among schools, why would I use this tool, and why would I use it every year for every student? Again, the NAEP is already a better tool. The current crop of BS Tests covers a narrow slice of what schools do. Using them to compare schools is like making every single musician in the orchestra audition by playing a selection on the oboe.
The Achievement Gap
We used to talk about the notion of making the pig fatter by repeatedly measuring it. Now we have the argument that if we repeatedly weigh two pigs, they will get closer to weighing the same.
The data are pretty clear-- in our more-than-a-decade of test-based accountability, the achievement gap has not closed. In fact, in some areas, it has gotten wider. It seems not-particularly-radical to point out that doubling down on what has not worked is unlikely to, well, work.
The "achievement gap" is, in fact, a standardized test score gap. Of all the gaps we can find related to social justice and equity in our nation-- the income gap, the mortality gap, the getting-sent-to-prison gap, the housing gap, the health care gap, the being-on-the-receiving-end-of-violence gap-- of all these gaps, we really want to throw all our weight behind how well people score on the BS Tests?
Finding the Failures
Civil rights groups that back testing seem to feel that the BS Test and the reporting requirements of NCLB (regularly hailed as many people's favorite part of the law) made it impossible for schools and school districts to hide their failures. By disaggregating test results, we can quickly and easily see which schools are failing and address the issue. But what information have we really collected, and what are we actually doing about it?
We already know that BS Test results correspond to family income. We haven't found out anything with BS Tests that we couldn't have predicted by looking at family income. And how have we responded? Certainly not by saying, "This school is woefully underfunded, lacking both the resources and the infrastructure to really educate these students." No, we can't do that. Instead we encourage students to show grit, or we offer up "failing" schools as turnaround/investment opportunities for privatizers. Remember-- you don't fix schools by throwing money at them. To fix schools, you have to throw money at charter operators.
For me, this is the closest we come to a legit reason for BS Tests. Essentially, the civil rights argument is that test results provide a smoking gun that can be used to indict districts so steeped in racism that they routinely deny even the most rudimentary features of decent schooling.
But once again, it doesn't seem to work that way. First, we don't learn anything we didn't already know. Second, districts don't respond by trying to actually fix the problem, but simply by complying with whatever federal regulation demands, and that just turns into more investment opportunities. Name a school district that in the last decade of BS Testing has notably improved its service to minority and poor students because of test results. No, instead, we have districts where the influx of charter operations to fix "failing" schools has brought gentrification and renewed segregation.
BS Testing also replicates the worst side effect of snake oil cures-- it creates the illusion that you're actually working on the problem and keeps you from investing your resources in a search for real solutions.
Raising Expectations
One of the dumbest supports of BS Testing is the idea, beloved by Arne Duncan, that expectations are the magical key to everything. Students with special needs don't perform well in school because nobody expects them to. So we must have BS Tests, and we must give them to everyone the same way. Also, in order to dominate the high jump in the next Olympics, schools will now require all students to clear a high jump bar set at 6' before they may eat lunch. That includes children who are wheelchair-bound, because expectations.
Informing Parents
Yes, somehow BS Test advocates imagine that parents have no idea how their children are doing in school unless they can see the results of a federally mandated BS Test. The student's grades, the student's daily tests and quizzes and writing assignments and practice papers provide no information. Nor could a parent actually speak to a teacher, face to face or through e-mail, to ask about their child's progress.
Somehow BS Test advocates imagine a world where teachers are incompetent and parents are clueless. Even if that is true in one corner or another, how, exactly, would a BS Test score help? How would a terrible teacher or a dopey parent use that single set of scores to do... anything? I can imagine there are places where parents want more transparency from their schools, but even so-- how do BS Tests, which measure so little and measure it so poorly, give them that?
Federal Oversight
Without BS Testing, ask advocates, how will the federal government know how schools are doing? I have two questions in response.
1) What makes you think BS Tests will tell you that? Why not use the older, better NAEP instead?
2) Why do the feds need to know?
Many of the arguments for BS Testing depend on a non sequitur construction: "Nutrition is a problem in some countries, so I should buy a hat." Advocates start with a legitimate issue, like the problems of poverty in schools, and suggest BS Testing as a solution, even though it offers none.
In fact, there's little that BS Tests can help with, because they are limited and poorly made tools. "I need to nail this home together," say test advocates. "So hand me that banana." The tests simply can't deliver as advertised.
The arguments for testing are also backwards-manufactured. Instead of asking, "Of all the possible solutions in the world, how could we help a teacher steer instruction during the year?" testing advocates start with the end ("We are going to give these tests") and then struggle to somehow connect that tool to the goal.
If you were going to address the problems of poverty and equity in this country, how would you do it? If you were going to figure out whether someone was a good teacher, how would you tell? How would you tell good schools from bad ones, and how would you fix the bad ones?
The first answer that pops into your mind for any of those questions is not, "Give a big computer-based bubble test on reading and math."
Nor can we say, "Just give it a shot-- it might help, and what does it really hurt?" BS Tests come with tremendous costs, from the price of the tests themselves, to the tech needed to administer them, to a shortened school year, to the human costs in stress and misery for the small humans forced to take them. And we have yet to see the long-term costs of raising a generation to think that a well-educated person is one who can do a good job of bubbling in answers on a BS Test.
The federal BS Test mandate needs to go away, because BS Testing does not deliver any of the outcomes it promises, and it comes at far too great a cost.