
Thursday, April 24, 2014

In Pursuit of Failure

Let's say I'm devoted to finding the Loch Ness Monster, and I am determined to find scientific proof. So I order up a host of sciency devices to search the loch, and I set out to test them. My test-- any device that finds the monster is certified accurate, and any device that does not is rejected as faulty.

I will judge each device's scientific accuracy by measuring it against my pre-existing belief. This is a type of science called Not Actually Science, and it is an integral part of much Reformy Teacher Evaluation.

James Shuls, Director of Education Policy at the Show-Me Institute (a market-based solution group out of Missouri and not, sadly, a school for strippers), appeared this week in Jay P. Greene's blog (no relation afaik) reminding us of TNTP's "report" on the Lake Wobegon effect (so we've got the intersection here of three Reformy flavors).

Shuls follows a familiar path. We know that there are a bunch of sucky teachers out there. We just do. Everybody has a sucky teacher story, and Shuls also says that there is objective data to prove it, though he doesn't say what that data is. But we know it's accurate and scientific data because it confirms what we already know in our gut. So, science.

We know these teachers exist. Therefore any evaluation system that does not find heaps of bad teachers cluttering up the landscape must be a bad system. This line of reasoning was echoed this week by She Who Must Not Be Named on Twitter, where a conversation with Jack Schneider spilled over. Feel free to skip the following rant.

(Because, for some reason, EdWeek has launched a new feature called Beyond the Rhetoric, a dialogue between Schneider and the Kim Kardashian of Education Reformy Stuff, and while I actually welcome the concept of the column, I am sad to see That Woman getting yet another platform from which to make word noises. Could they not have found a legitimate voice for the Reformy Status Quo? I mean, I wish the woman no ill will. I know there are people who would like to see her flesh gnawed off by angry weasels, but I'm basically a kind-hearted person. But I am baffled at how this woman can be repeatedly treated like a legitimate voice in the ed world when the only successful thing she has done is start a highly lucrative astroturf business. Sigh.)

Anyway, She tosses in the factoid that half of the studied school districts didn't dismiss any teachers during the pre-tenure period. This, again, is offered as proof that the system is broken because it didn't find the Loch Ness Monster.

Now let me be clear-- I think bad teachers are undoubtedly more plentiful than Loch Ness Monsters (and smaller). I've even offered my own revised eval system. I agree that the traditional teacher eval system could have used some work (the new systems, by contrast, are generally more useless than evaluation by tea leaves).

What I don't understand is this emphasis on Badness and Failure. This is the same focus that got us Jack Welch and stack ranking, widely considered "the worst thing about working at Microsoft" until Microsoft management decided they agreed and, like everyone else in the private sector, stopped doing it. This type of evaluation starts, even before a manager has met his team, with the assumption of a bell-ish curve-- at MS, out of every ten employees, two were presumed great, seven okay, and one fire-ably sub-par.

Imagine doing that with a classroom of students. Imagine saying, "Whoever gets the lowest score on this gets an F, even if the score is a 98%."

Oh, wait. We do that, as in John White announcing, before the New York test was even given, that 70% of students would fail it. And then-- voila-- they did!

It's a little scary that the Reformy Status Quo model is built around an absolute gut-based certainty that The Trouble With Education is that schools are full of terrible teachers who are lying to their gritless idiot pupils, and what we really need to do is shake up public schools by rooting out all these slackers and dopes, just drag them out into the light and publicly shame them for their inadequacy.

It's a lot scary that some of us seem to already know, based on our scientific guts, just how much failure we should be finding, and we're just going to keep tweaking systems until they show us the level of failure we expect to find.

For Shuls and free-market types, that means giving eval systems real teeth:

"If school leaders actually had the authority and proper incentives to make positive pay or firing decisions based on teacher performance, we might start seeing some teacher evaluation systems that reflect reality."

Note again the assumption that we already know the "real" failure level-- we just need to get the evaluation system to reflect that. Shuls thinks the problem might be wimpy admins and weak consequences. If we threatened teachers with real damage, then we'd get somewhere.

For Education's Sarah Palin, the problem is people. VAM and other methods of including Test scores appeal to folks like her because the test score won't be distracted by things like the teacher's personality or style or, you know, humanny stuff. The Test, these folks are sure, will reveal the students and teachers who are stinking up the joint, and it will be there in cold, hard numbers that can't be changed or softened or escaped. And they are numbers, so you know they're True.

The pursuit of the Loch Ness Failure Monster is a win-win for Purveyors of Reformy Nonsense. If a school appears to be staffed with good, capable teachers, that's proof that they are actually failing because if they had a real eval system, it would reveal all the failing teachers. And if the eval system does reveal failing teachers, well, hey, look at all the failing teachers. Not only is failure an option; it's a requirement.
