
Thursday, January 18, 2018

AEI and Lessons Almost Learned

The American Enterprise Institute (AEI) recently held a little mini-symposium about lessons learned from the Bush and Obama years of education reform. There's a lot to peruse, but Rick Hess briefly examines some takeaways in his regular EdWeek blog, and while he's struck by a few things, I'm struck by how the path to lessons learned leads right up to an important insight, but doesn't seem to seal the deal.

First, this:

I was struck how comfortable most in attendance seemed to be with the notion that "Bush-Obama school reform" was something of a unified whole. McShane and I had wondered if it might go the other way, with participants or attendees insisting that Bush and Obama reflected two very different approaches to school reform. After all, when thinking back on these administrations, they're generally regarded as being pretty far apart on a range of issues. But there seemed to be a broad consensus, for good and ill, that much of what Bush did in the NCLB era set the table for Obama's efforts, and that Obama's efforts built pretty seamlessly on what Bush had previously done.


I think an awful lot of us were hollering for eight years that the Obama education plan was simply more Bush education plan, except for the parts that were actually worse. Hess is correct that some folks have drawn distinctions between the two administrations. Betsy DeVos, speaking at the same conference, characterized the difference as carrot and stick-- Bush policies threatened everyone with punishment, while Obama policies tried to bribe states into compliance. That's not unfair, although Obama's Ed Department never really put down the stick-- during and after the rounds of Race to the Top bribery, states always faced the threat that without waivers, they would be subject to all the punishments that NCLB put in place.

We could talk about details and specific policies and the kind of titanic bureaucratic chair shuffling that only seems significant if you're stuck inside the beltway, but University of Oklahoma's Deven Carlson gets us closer to the important question of what, exactly, the two sets of failed policies had in common.

In surveying what we've learned about school accountability, which seems to be remembered as the defining legacy of the Bush-Obama era, Oklahoma's Carlson offered up a series of takeaways. First off, he argued that accountability clearly increased test scores in reading and math, and that "no fair reading of the literature" can deny that. That said, due to test prep and other kinds of manipulation, "achievement increases may not correspond to actual learning gains" and "reading and math gains came at the expense of instruction in other subjects." At the policy design level, he said that schools responded to accountability in unintended and unproductive ways, frequently focusing on proficiency thresholds and "bubble" kids rather than system improvement. Carlson suggested that all of this was fueled by unrealistic expectations and goals.

Accountability is a common thread, with both policy sets focused on the idea of holding states and schools and teachers accountable, and not just accountable to parents and local taxpayers, but to the federal government (this is where Andy Smarick and I will tell you to go read Seeing Like a State).

Carlson dances right up to the point here. Policies increased test scores, though it's unclear if the test scores mean anything, and likely that elements of actual education were sacrificed for the scores.

The common thread, the failure point for both Bush and Obama ed policy, is not accountability.

It's the damn test.

The entire reform system, the entire house of policy built by both administrations, is built on the foundation of one single narrowly-focused standardized test, the results of which are supposed to measure student achievement, teacher effectiveness, and school quality. The entire policy structure is held together and activated by data, and that data is being generated by means no more reliable than a gerbil tossing dice onto shag carpet in the dark. It's not a house built on sand-- it's a house built on sand that's been mixed with cat poop and laced with exploding eels. Both administrations built policy machines designed to run on high-test super-pure data fuel, and then filled the tanks with toxic Kool-Aid mixed with mashed potatoes. It's no wonder that machine would not carry us to the promised land.

Sure, there has been a surfeit of baloney. Attempts to create national standards. Theories about how schools can be turned around without spending money. Disregard and disrespect for professionals who work in schools. Opportunistic attempts to make sure that "reform" would include "increased market openings for entrepreneurs." Unrealistic goals and destructive penalties. These all helped ensure that Bush and Obama's ed policies would fail.

But all of them rested upon the lie that we have a means of generating real, useful data. We can only talk about how to punish or reward teachers and schools if we think we have a valid way to evaluate them. We can only talk about standards if we think we have a way to determine that the standards are being met. We can only talk about accountability if we believe we have a valid and reliable measure of what's getting done.

And we don't.

I know that makes some folks crazy. I know that some folks believe there must be a yardstick we can use to measure schools, and they want to believe that so badly that they have convinced themselves that a piece of twisted wire they found in a junk yard is just as good as a yardstick.

They are wrong. They've been wrong for well over a decade now, and the damage continues to pile up (and that suits some people just fine-- hooray disruption and entrepreneurial opportunity!).

It's important to understand that the Big Standardized Tests are the rotten core of these failed enterprises, because ESSA encourages us to keep repeating that mistake. Our ed policy "leaders" are like people who tried to make a self-driving car by setting a brick on the gas pedal and lashing the steering wheel in one position, and every time the car rams into a tree, they say, "Well, we must need a different sized brick, and we'll tie the steering wheel in a different position." They are like people who have gathered toenail clippings from elephants and declared, "With the right analytical model, we can determine the height and health of trees in jungles 200 miles from where these elephants live."

No, your approach is fundamentally, fatally flawed.

There is no program, no policy, no design, no model that will allow you to turn bad data into good data. There is no model that will let you turn a small sliver of bad data into a rich, full, accurate picture of reality.

Okay, smartypants, you say. If the BS Tests won't give us the data we need, then what else will?

The real answer is that I don't know-- and neither do you and neither does anybody else. And frankly, if you want to propose a data-gathering system, the burden of proof is on you to establish that the system is any good (a burden that has never, ever been met by the purveyors of the BS Tests). There are smart people who have written whole books about the matter, but solving the problem will require a very large conversation, the likes of which we still haven't had. Having that conversation, for real, for serious, would be a good start.

But meanwhile, the critical lesson from the past two administrations remains unabsorbed. It is the generation of narrow, bad data, the placement of a bad standardized test at the center of education, that doomed all of the previous ideas, both the good ones and the bad ones, to failure (in fact, it is the unfounded belief in BS Tests as fonts of good data that has allowed some bad ideas to even exist).

Until we address that, ed policies will continue to fail.

1 comment:

  1. Three rather disconnected thoughts...

    First: "After all, when thinking back on these administrations, they're generally regarded as being pretty far apart on a range of issues."

    The only people who think Bush and Obama were far apart on a range of issues were those who believed the promises of candidate Obama and didn't pay attention to the actions of President Obama.

    Second, regarding sticks and carrots, the actual image is that of a carrot tied to a stick held forever in front of the donkey to get it to pull the cart. The stick and the carrot are the same thing. The donkey will never get the carrot because of the stick.

    Third, you talk about solving the "problem" of measuring schools. I guess I'd take a step back even from there and challenge the framing as a problem. The only reason we'd need to "measure" (sic) schools is if we want to compare them, but why do we need to? If my school offers what I need for my child and my child is getting a good education and is being well treated, what difference does it make how my child's school compares to your school or that other school down the road, much less that school five towns or three states away? I think we really need to fight back on the whole notion that schools need to be either "measured" or compared and that it's a problem that we don't have a good way to do that.
