Thursday, February 8, 2018

CAP Still Plugging Takeovers and Turnarounds

The Center for American Progress was, under John Podesta, a holding tank for Clinton politicians and bureaucrats who were biding their time, cooking up policy advocacy, while waiting for Hillary to take her rightful place in DC. As you may have heard, that didn't quite work out.

CAP often took point for Democratic support of ed reform policies, and like DFER, they were often indistinguishable from conservative GOP ed reform groups. They were particularly relentless in their love of the Common Core (here, here, here, here, here, here, and here, to name a few). But now that nobody's going to land a government job any time soon, and CAP is run by Neera Tanden with its board chaired by-- really?-- lobbyist and tax dodger Tom Daschle, they've been a bit quieter about the Core. Are there other reform items they'd like to plug?

Well, they've joined the cottage industry of ESSA plan backseat drivers, and in a recent post, argued for a particular strategy for fixing schools. Which one? Here's a clue-- it's a strategy that has already been tested and failed.

A previous report by the Center for American Progress identified the ability for districts and states to intervene in low-performing schools as a critical school turnaround policy. States should initially provide flexibility for districts to replace staff, reallocate resources, or make changes to instructional time. If schools continue to struggle, states have the authority to take more rigorous action under ESSA. One approach states are considering is to implement alternative governance structures that change the turnaround agents or systems responsible for school operations and leading the path forward.

That "report" was not any sort of research- or evidence-based paper, but the result of a "conversation" among several federal, state, and local leaders "with expertise in school turnaround" gathered to talk about how best to do it. So this belief in state intervention was not a result of evidence, but the premise of a discussion among people who are professionally invested in this approach.

And what example do folks who support takeovers and turnarounds like to cite? Of course, it's New Orleans. Do we really have to get into all the ways that the privatization of the New Orleans school system is less than a resounding success? Or let's discuss the Tennessee experiment in a recovery school district, in which the state promised to turn the bottom five percent into the top schools in the state, and they utterly failed. As in, the guy charged with making it happen gave up and admitted that it was way harder than he thought it would be.

The whole premise of a state takeover is that somebody in the state capital somehow knows more about how to make a school work than the people who work there (or, in most cases, can hire some guy who knows because he graduated from an ivy league school and spent two years in a classroom once). The takeover model still holds onto a premise that many reformsters, to their credit, have moved past:  that trained professional educators who have devoted their adult lives to working in schools-- those people are the whole problem. It's insulting, it's stupid, and it's a great way to let some folks off the hook, like, say, the policy makers who consistently underfund some schools.

Most importantly, at this point, there isn't a lick of evidence that it works.

We have the results of the School Improvement Grants used by the Obama administration to "fix" schools, and the results were that SIG didn't accomplish anything (other than, I suppose, keeping a bunch of consultants well-paid). SIG also did damage because it allowed the current administration and their ilk to say, "See? Throwing money at schools doesn't help." But the real lesson of SIG, which came with very specific Fix Your School instructions attached, was that when the state or federal government tries to tell a local school district exactly how things should be fixed, instead of listening to the people who live and work there, nothing gets better. That same fundamental flaw is part of the DNA of the takeover/turnaround approach.

But CAP is excited about ESSA because some states have included this model in their plan. So, yay.

They acknowledge limitations to the approach, including pushback from district and community members, noting that Georgia voted down the attempt at a recovery school district (called an "opportunity" district in an attempt to avoid the damage done to the brand). The state just went ahead and created a turnaround chief anyway, and CAP doesn't ask why pushback occurred. CAP's advice is to engage the community and get buy-in from stakeholders, but they don't really suggest how (pro tip: it involves listening).

CAP also says that it is "critical that states set the right parameters for measuring student progress" which would be a great thing to say if it were followed by the observation that the Big Standardized Test results soaked in VAM sauce are a lousy measure of school effectiveness, but they don't. Instead they just mean, "make sure everyone understands what the cut score is," which is actually better than the old favorite "bottom five percent" measure (a boon to charter developers, since there will always be a bottom five percent).

CAP does NOT note the problem with takeover/turnarounds that involve silencing local voice entirely and removing the duly-elected school board from power to be replaced, in some cases, by charter operators who are unaccountable to local stakeholders.

But CAP is happy about this trend because they think this "lever for change" is "promising." I think CAP continues to kid itself. Here's the last sentence of the article:

States that use this authority must do so strategically and with clear guidelines to work with the communities they serve, as well as capitalize on lessons learned from other states doing similar work.

The link is to a lousy new paper from Chiefs for Change, another part of the reform axis, which is unfortunate, because the lesson learned about state takeover of "failing" schools (aka "schools with low scores on a single poorly-written, narrowly focused standardized test") is that it rarely works, and often does more harm than good. But never let it be said that the folks at CAP let the little people who actually work in education distract them from the big picture of grand reform ideas.


  1. Thank you for the summary of failed policies.

  2. "CAP also says that it is "critical that states set the right parameters for measuring student progress" which would be a great thing to say if it were followed by the observation that the Big Standardized Test results soaked in VAM sauce are a lousy measure of school effectiveness, but they don't."

    The "real" problem, onto-epistemologically speaking (that is, from a basic foundational conceptual standpoint), is that THERE IS NO MEASURING IN THE TEACHING AND LEARNING PROCESS. There is assessing, judging, evaluating, and even pseudo-measuring (meaning fake measurements).

    The TESTS MEASURE NOTHING, quite literally when you realize what is actually happening with them. Richard Phelps, a staunch standardized test proponent (he has written at least two books defending the standardized testing malpractices) in the introduction to “Correcting Fallacies About Educational and Psychological Testing” unwittingly lets the cat out of the bag with this statement:

    “Physical tests, such as those conducted by engineers, can be standardized, of course [why of course of course], but in this volume, we focus on the measurement of latent (i.e., nonobservable) mental, and not physical, traits.” [my addition] (notice how he is trying to assert by proximity that educational standardized testing and the testing done by engineers are basically the same, in other words a “truly scientific endeavor”)

    Now since there is no agreement on a standard unit of learning, there is no exemplar of that standard unit, and there is no measuring device calibrated against said non-existent standard unit, how is it possible to “measure the nonobservable”?

    THE TESTS MEASURE NOTHING, for how is it possible to “measure” the nonobservable with a non-existent measuring device that is not calibrated against a non-existent standard unit of learning?