Saturday, November 5, 2016

Did Race To The Top Work?

Not only is this a real question, but the Department of Education, hand in hand with Mathematica Policy Research and American Institutes for Research, just released a 267-page answer of sorts. Race to the Top: Implementation and Relationship to Student Outcomes is a monstrous creature, and while this is usually the part where I say I've read it so you don't have to, I must confess that I've only kind of skimmed it. But what better way to spend a Saturday morning than reviewing this spirited inquiry into whether or not a multi-billion-dollar government program was successful in hitting the wrong target (aka getting higher scores on narrow, poorly-designed standardized reading and math tests).

Before We Begin

So let's check a couple of our pre-reading biases before we walk through this door. I've already shown you one of mine-- my belief that Big Standardized Test scores are not a useful, effective, or accurate measure of student achievement or school effectiveness, so this is all much ado, not so much about nothing as about the wrong thing.

We should also note the players involved. The USED, through its subsidiary group, the Institute of Education Sciences, is setting out to answer a highly loaded question: "Did we just waste almost a decade and a giant mountain of taxpayer money on a program that we created and backed, or were we right all along?" The department has a huge stake in the answer to its own question.

So that's why they used independent research groups to help, right? Wellll..... Mathematica has been around for years and works in many fields researching policy and programs; they have been a go-to group for reformsters with policies to peddle. AIR sounds like a policy research group, but in fact they are in the test manufacturing business, managing the SBA (the BS Test that isn't PARCC). Both have gotten their share of Gates money, and AIR in particular has a vested interest in test-based policies.

So nobody working on this report is exactly free from bias or vested interestedness.

Oh, and as we'll repeatedly see, most of the findings here are over three years old. So that's super helpful, too.

Defining "Success" for RTTT and the Executive Summary

The study set out to examine six elements, and we want to be sure to look at that list, because it constitutes the definition of "success" for Race to the Top.

1) Improving state capacity to support school improvement efforts
2) Adopting college and career-ready standards
3) Building state data systems that measure student growth and inform instruction
4) Recruiting, retaining, rewarding and developing swell teachers and principals
5) Turning around low-performing schools
6) Encouraging conditions in which charter schools can succeed

Numbers two through five are recognizable as the four conditions that were extorted out of states in order to get their waivers and escape the penalties of No Child Left Behind. Number one is just a call to actually support the other items with more than prayers and best wishes. Six is-- well, that's as blunt as the feds get about saying that they want to replace public schools with charters as a matter of policy.

The study breaks states into several groups. "Early RTT states" means states that got on the gravy train in rounds one or two; "late RTT states" are those that didn't jump on till round three. "Other states" or "non-RTT states" are those that, well, didn't get RTT grant money. Grant-getters were compared to non-grant-getters, and I'm going to keep my eyes peeled to see if, at some point in the meatier parts of the paper, we look at the notion that non-RTT states were still scrambling for waivers under threat of NCLB penalties-- waivers that had requirements remarkably similar to RTT grant requirements. Frankly, this data set seems ripe for a study of whether the feds get more better compliance with bribery or with threats, but I'm not sure we're going to go there. We're still in the roman numeral pages.

The answer appears to be that there's not much difference between bribery and threats: "When we examined changes over time in states' use of RTT-promoted practices, we found no significant differences between RTT and other states."

And right up front, the study lets us know some of the hardest truth it has to deliver. Well, hard if you're an RTT-loving reformster. For some of us, the truth may not be so much "hard" as "obvious years ago."

The relationship between RTT and student outcomes was not clear. Trends in student outcomes could be interpreted as providing evidence of a positive effect of RTT, a negative effect of RTT, or no effect of RTT.

Bottom line: the folks who created the study-- who were, as I noted above, motivated to find "success"-- didn't find that Race to the Top accomplished much of anything. Again, from the executive summary:

In sum, it is not clear whether the RTT grants influenced the policies and practices used by states or whether they improved student outcomes. RTT states differed from other states prior to receiving the grants, and other changes taking place at the same time as RTT reforms may also have affected student outcomes. Therefore, differences between RTT states and other states may be due to these other factors and not to RTT. Furthermore, readers should use caution when interpreting the results because the findings are based on self-reported use of policies and practices. 

Hmm. Well, that doesn't bode well for the upcoming 200 pages.

Fun Side Note

To determine whether or not RTTT stuff influenced "student achievement," the study relied on test results from the NAEP.

Let me repeat that. When the USED and a team of researchers, looking at the efficacy of a major education policy program over the past many years, wanted to know if US students were learning more, achieving more, and testing better, they skipped right over the PARCC and the SBA and the various other BS Tests currently being used for all manner of accountability and went straight to the NAEP.

Tell me again why all students need to take the BS Tests every year?

Also, the study would like us to remember that any differences that occurred in test results could have come from influences other than Race to the Top.

The Most and the Least (Troubling)

Across all states (RTT and non), the most widely and commonly adopted practice was the creation of the big data systems for tracking all the student data. So your state, RTT or Non, may not have gotten all the rest of these things taken care of, but when it comes to data mining and general Big Brothering, they were on point. Feel better yet?

The widest non-adoption came with the RTT policies regarding teacher and principal preparation. In general, adoption of the feds' clever ideas was low, bottoming out with the idea of evaluating teacher and principal prep programs and giving the "good" ones more money-- this policy was adopted by absolutely nobody. I'm wondering if states mostly left the teacher pipeline alone because they knew it was falling apart and they didn't want to bust it entirely. In some cases states did not so much beef up teacher prep as simply abandon it, implementing programs where humans with qualifications like "certificate from another state" or "any college degree at all" or "a pulse accompanied by respiratory activity" could be put directly into a classroom.

History Lesson

No study like this is complete without a history lesson, and this study delivers a few pages of RTTT history. It was part of the giant American Recovery and Reinvestment Act of 2009, with a whopping $4.35 billion-with-a-B spent to try to get states to adopt policies that suits in DC believed would make education more better, though their beliefs were based pretty much on "This seems like a good idea to me."

There are charts showing who got what when for how many districts. My own state of Pennsylvania landed a whopping $41 million; the chart doesn't show how many local districts signed off on the application, because in Round Three state education departments were allowed to gloss over just how many of their local districts had told them to go pound sand over this whole "We'll give you a million-dollar grant to help you implement a ten-million-dollar program" business.

Also, there have been some RTTT studies attempted before. They found that implementing all this stuff was difficult. So there's that.

How We Did It

We also get a whole section about how data was collected and crunched. For a massive study of this depth and breadth, the methods are kind of.... well, tame? Unimpressive?

Data came from three places. The NAEP results. The Common Core of Data, which is different from that other Common Core you may have heard about-- the CCD is just all the public info about schools and ed departments, etc. And then, to get each state's particulars, the researchers called up representatives from state education agencies. So, some test scores, some public data, and phone interviews with "somebody" in the state office.

Those phone interviews were conducted in 2012-2013, aka right after Round Three states had gotten their money. Hence the separation of RTT states into two groups-- those that had had a while to get things running and those that were still on their way back from depositing the check at the bank.

There's also an explanation here of how they tried to connect test results to program implementation and basically gave up because they were getting noise and junk for results.

Now for some more specific results.

State Capacity for Edureformy Stuff

This really breaks down into three aspects (which break down into ten, because government work, sigh)-- three "success factors." The third one was significantly raising achievement and closing the achievement gap, and "no state interview questions aligned to the third subtopic, so it was excluded from the analysis." Which-- wait, what? We didn't ask about this, so we didn't include it in the study??

The other two were articulating the state's reform agenda, and building state ability to scale up and implement reformy stuff. The study found that, as of spring 2013, there was no difference between RTT states and non-RTT states. So, as of three and a half years ago. Well. That's sure helpful.

The biggest area of difference came in strategies for turning around failing schools and for spreading the practices of super-duper schools. RTT states did this more than non-RTT states. No comment on whether any of those strategies actually did anybody any good.

Oddly enough, all types of states were pretty tightly aligned on one feature-- allowing for very little input from all stakeholders in defining priorities. No, that's not me being snarky-- that's an actual finding of the study.

Standards and Assessments

RTT states used more standards and assessment practices than non-RTT states. Virtually all states were on the Common Core bus, but it turns out that non-RTT states were less likely to have spent a bunch of money helping school districts with the implementation.

Data Systems

No significant differences here. All states adopted these practices. The only distinguishing feature was that RTT states were more likely to be doing data collection with early childhood programs as well. Interesting, and creepy.

Teacher and Principal Certification and Evaluation Practices

RTT states were doing more of these practices, including "high-quality pathways" to the jobs as well as using test results as part of the evaluation process. Again, the assumption that these are actually a good idea is not addressed.

Nearly all states were reporting on teacher shortage areas. Perhaps that's because tracking your teacher shortage areas is an actually useful practice.

Also noteworthy-- RTT states were far more likely to be embracing "alternative certification pathways," as well as allowing more of them to be set up. This is a policy outcome that directly contradicts all the pretty talk about supporting and improving the teaching profession, because you don't support the profession by throwing your weight behind programs that de-professionalize it by suggesting that anybody with some interest and a pulse can be sent into the classroom with minimal training. And all of that goes double for principalships.

RTT states were far out in front on using test scores for evaluation; no word on whether that was primarily through the widely debunked Value-Added measures, or if some other data masseuse was being used. However, hardly anybody was using test results to make compensation or professional advancement decisions.

Oh, and all that baloney about how states were supposed to find the best teachers and shuffle them all around for maximum impact and equitable distribution of teachery swellness? If you think that ridiculous policy idea can't actually be implemented in any way, shape, or form, it turns out almost all states agree with you.

Turnarounds

RTT states did more of this than non-RTT states. But instead of reading this part of the report, let's pull up any of the reporting about how the School Improvement Grants, intended to fund the turnaround revolution, turned out to be an utter failure. I'm starting to realize that this study has no interest in whether or not any of these policies are actually bunk.

Charter Schools

Early RTT states did a "better" job of implementing the RTT practices aimed at increasing the reach and market of charters. So, to repeat, the US Department of Education is actively involved in helping charter schools sweep aside public education. This is really not what I want my tax dollars to be doing, thanks. But the report reminds us that the RTT application process favored those states that would let charters grow unhindered and uncapped, free to glom up as much real estate and as many students as they could advertise their way into.

So that's our point-by-point breakdown. Let's now talk about another concern of this study.

English Language Learners-- How Did Race To The Top Work Out For Them?

ELL students were more likely to be targeted by policies in the RTT states, though within the three subgroups there was no difference between states, even where there were demographic differences in the ELL population.

Discussion of Findings

Still with me? God bless you. We're about 100 pages into the report and they are now going to "discuss" their findings. Some of this is not so much "discussion" as it is "redundant restatement" of findings. But there are some interesting moments, particularly in the list of Questions Raised By These Findings.

Why did RTT states show more adoption of RTT policies than non-RTT states? I'd be inclined to go with "They were being paid a pile of money to adopt them," but the study suggests that the policies could be the result of differences between the states before the RTT competition. Or maybe states implemented a bunch of this stuff as a way to compete for the RTT money.

Why don't our 2012 and 2013 data match? One of the oddities of the report is that the areas where some states seemed more RTT-soaked in 2012 were not the same in 2013. The authors don't know why, though it certainly points at the limits of self-reporting.

Why isn't it possible to find a connection between RTT implementation and student test results? They lean toward two possibilities-- we can't really figure out what the pattern of achievement was before RTT happened, and we can't really separate out all the other possible factors over and above RTT that could have changed test scores in that time period.

The Rest

Then come about seven pages of end notes, and then we're into the appendices, which are a big lumpy festival of data and graphs and the numbers they squeezed out of the interviews. Dear reader, I love you, but I am not going to dig through these 150-ish pages for you.

My Findings about Their Findings

So what are my takeaways from this piece of something?

1) They spent three years and change turning their data into a report. About what was happening three years ago.

2) They put "relationship to student outcomes" in the title, then noted immediately and repeatedly that they had absolutely nothing to say about Race to the Top's relationship to student outcomes.

3) I was not entirely fair in reducing the question to "did RTTT work?" because that's not exactly what they asked. What they mostly really asked was "Did RTTT get states to implement the policies we wanted them to implement?" At the end of the day, this study carefully dodges the far-more-important question-- are any of the policies linked to RTTT any actual damn good for educating students? What we have here is a report that carefully asks if we hit the target while carefully ignoring whether the target we're looking at was actually the right target.

Put another way, the eleven (eleven!!) members of the study group have spent 200-plus pages talking about Race To The Top as if the point of the program was not to improve education, but just to spur the adoption of certain education policies on the state level.

It's like the feds said, "Go build this apartment building according to these blueprints." And then later, after the construction period was over, the feds sent an inspector and told them, "We don't care who's living in it and if they're happy living in it and if it's a safe and comfortable place to live. Just check to see how closely the builders followed the blueprints."

Maybe this is just how government functionaries work. Maybe when you've pushed a program that has shown zero educational benefits and quite a few destructive tendencies, all you can do is evaluate it by just saying, "Well, yes, we sure built that, we did."

Race To The Top (and waiveriffic RTTT Lite) was a disastrous extension of the worst of NCLB policies that brutalized the teaching profession and demanded that states turn schools into test-centric soul-mashing data centers, all while making a tiny toy god out of bad data badly used. The best thing you can say about it is that it was so bad, so disastrous, so universally indefensible that it did what no issue in the last ten years could do-- it created bipartisan Congressional will to actually do their job. It is the rotten center of Obama's shameful education legacy. And nobody really needs 267 pages to say so.

4 comments:

  1. ha, ha, haw! It took them 267 pages to say what any text-savvy student could say in 3 letters: IDK.

  2. The very premise of a federal program that pitted states against each other in order to win valuable prizes makes RTTT the most unethical program ever implemented by the USDOE. Public education rests on a foundation of equitable opportunity for all. The very idea of creating a contest with cash prizes and winners and losers should never have entered the brain of anyone who cares about the true responsibility of America’s public school system.

    NYS won the $700 million top prize in the RTTT contest. When the cost of following the rules of a contest far exceeds the amount of money awarded, it would suggest NY school districts were in fact the Biggest Losers. Heck of a job Andy!

    Replies
    1. I'd say that it was illegal. It's called taxation without representation. It is illegal for the government to hold tax dollars hostage. The whole reason behind the inception of the USDoE was to make sure that those tax dollars were distributed fairly and evenly throughout the country.

  3. And now we have #FutureReady

    "In June of 2013, the President launched the ConnectED Initiative to provide 99% of students in the nation with access to high-speed Internet connectivity at the classroom level. Coupled with two billion dollars from the federal E-Rate program
    ( https://www.fcc.gov/consumers/guides/universal-service-program-schools-and-libraries-e-rate )
    increased flexibility in the use of federal funds, (http://tech.ed.gov/federal-funding-dear-colleague-letter/ )
    and billions of dollars in additional commitments from the private sector,
    ( https://www.whitehouse.gov/the-press-office/2014/02/04/fact-sheet-opportunity-all-answering-president-s-call-enrich-american-ed )
    progress towards improving the nation’s physical infrastructure has already been dramatically accelerated.”

    "However, in order for these resources to leverage their maximum impact on student learning, schools and districts must develop the human capacity, digital materials, and device access to use the new bandwidth wisely and effectively. The Future Ready District Pledge establishes a framework for achieving those goals and will be followed by providing district leaders with additional implementation guidance, online resources, and other support they need to transition to effective digital learning and achieve tangible outcomes for the students they serve."

    This kind of reminds me of the way Common Core was brought into the states through Race to The Top.

    Enter your state in this link to find out if your Superintendent has signed the pledge. http://futureready.org/about-the-effort/take-the-pledge/?search=&field_56d9bc8f9f5a0=
