Saturday, September 24, 2016

Essential Reading for Education Activists (and Wonks)

Corporate, privatized, market-driven education reform hasn't worked-- and now there's a book chock full of research to prove it.

The National Education Policy Center is based at the University of Colorado (Boulder) School of Education. They look kind of like what I always imagined when I thought of an actual think tank-- one that was interested in real inquiry and research, and not just put together to lobby for a particular set of ideas. They've created a network composed of many of the top researchers in the education policy world (just look at this list of fellows) and they are a regular source of actual education policy research (as well as doing solid analyses of other research that is out there, even when it's just "research").

William J. Mathis and Tina M. Trujillo have put together an important (and huge) look at what's been going on in education. Learning from the Federal Market-Based Reforms is a collection of twenty-eight articles from a rather amazing array of top scholars in the field, looking at what has been tried, what hasn't worked, and what the research says will work.

The preface, foreword, and introduction lay out the vision pretty clearly and forcefully. In the second paragraph of the preface, Mathis and Trujillo summarize where we are pretty succinctly:

Unfortunately, our review also confirmed that, despite decades of solid research evidence demonstrating the limited and contradictory effects of the market model on school reform, it is still the model that dominates education in this country, particularly in schools that serve low-income families and children of color.

In her foreword, Jeannie Oakes argues that cultural values have dominated the arena while pushing aside actual research-based approaches, and that the dominant value is a sort of behaviorism. Reformsters have "normalized the idea that school quality and equity will improve" as families shop in an unequal "competitive" marketplace. Oakes raises an idea that I confess I hadn't really considered-- that a market-based approach doesn't just fail to erase differences, but actually cements a marketplace of schools of varying quality. The implication is clear-- in a free market there must always be "bad" schools, and some students will be stuck attending them.

Instead of a system promoting equity and education as a common good, market-based, test-driven reforms have only reinforced the weak notion that a high-quality education is a scarce commodity that few schools provide and that families must compete for good opportunities for their children.

This despite forty years of research that provided an enormous body of knowledge about the causes and consequences of educational inequality.

If the subheading of the book ("Lessons for ESSA") concerns you, the introduction makes it clear that NEPC does not have rose-colored glasses on about the new education policy.

Unfortunately, research (such as in this book) plainly tells us that ESSA preserves most of the unproductive structures and reforms that NCLB prescribed... at its core,  ESSA is still a primarily test-based educational regime.

The introduction points at a culprit: "The faith in test-driven accountability and punitive techniques for fixing schools is the dominant operational philosophy." And the writers also summarize the bulk of the research in the book as pointing to "one unambiguous conclusion-- heavy-handed accountability policies do not produce the kinds of schools envisioned under the original ESEA."

And all that is while we're still in the part of the book where the pages are numbered with roman numerals.

There is plenty to chew on here (the book is, after all, almost 700 pages). But it is worth the chewing, and I expect that I will visit several of the chapters by themselves in blogs in the weeks ahead.

The book is built in five main sections:

Section 1: The Foundations of Market Based Reform

These four chapters examine what got us here, tracing the growth and change of policy all the way back to the New Deal. In particular, Harvey Kantor and Robert Lowe offer an interesting idea by characterizing policy change as "educationalizing the welfare state and privatizing education." Heinrich Mintrop and Gail Sunderman consider how the failure of sanctions-driven accountability was completely predictable (but that doesn't mean we won't stay stuck with it).

Section 2: Test-based Sanctions: What the Evidence Says

Four really important chapters here, looking at what the research actually says about school turnaround strategies (spoiler alert: not much good), the effect of school choice on achievement, and the real costs of school closures.

Section 3: False Promises

This large section contains eleven articles that each address one of ed reform's beloved bright ideas. Paul Thomas writes about "miracle schools," and the American Statistical Association's statement on the use of VAM for evaluation is here. Stan Karp effectively beats the dead horse that is Common Core. Several articles look at the civil rights angles of plugging reform, and Anne Gregory, Russell Skiba and Pedro Noguera look at how the achievement gap and discipline gap are related. Private contracting, school choice--there's even a look at virtual education.

Section 4: Effective and Equitable Reforms

Here are nine articles delineating what actually does work, considering everything from poverty to adequate funding to class size. There's some talk about T-PREP as a model for evaluating education programs as well as a look at some community organizing programs that have been successful.

Section 5: Bottom Lining It

At the end, Mathis and Trujillo return to the stage to make some final observations and recommendations for moving forward under ESSA.

Those recommendations include addressing the opportunity gap, admitting that high-stakes, test-based accountability doesn't help students learn, and accepting that privatizing schools hasn't worked very well, either.

Digesting

There is a lot to read and digest here (did I mention it was almost 700 pages?) and my first pass through has been fairly cursory. Some of it is very, very wonky. And the odds are good that somewhere in those pages, you may find some ideas that you disagree with. If you are virulently anti-ESSA and anti-any-government-involvement-at-all, you may disagree with a lot.

However, it's well worth the time and effort to read work that is based on actual research as opposed to the kind of substance-free PR puffery that comes from reformsterland. Heck, it's even a good idea to take the occasional break from the ranting of various bloggers and absorb some actual scholarship.

The Gates Plan for College

Some days I feel kind of Rip Van Winklesque, as if I went to sleep and when I woke up the world had changed. Apparently while I was sleeping, the electorate rose up and elected Bill Gates the Grand Uber Head of Education. "Please," a bunch of you non-sleeping people said. "Redesign our entire education system. Redefine what it means to be an educated person, and redefine how a person gets an education. Please do that for us, and now that we've asked you to do this, please never ask us for any input on the subject ever again."



And so we got Common Core and high-stakes testing and Big Data Systems and a whole giant network of astro-turf groups pushing these policy ideas and a decade of corporate dismantling of public education, funded in astonishingly substantial ways by Bill and Melinda Gates.

But apparently while I was sleeping, y'all asked him to do something about redesigning colleges, too.

I'm looking at the most current version of Gates' Postsecondary Success Advocacy Priorities, which is kind of a non-meaning word salad of a title, but I'm thinking what we have here is what The Gates considers the priorities to advocate for in the process of redefining post-secondary success. Yes, I've read it so you don't have to, but if this is the kind of thing you let happen while I'm asleep, we've really got to talk.

The Overview

Higher education is the bridge to success. Well, it used to be, but now it's a narrow twisty high-priced toll bridge, and that's a problem. Mind you, the cost of that problem is not to the human beings who wanted to cross the bridge:

Rising costs and debt, stubbornly high dropout rates, and persistent attainment gaps threaten higher education’s ability to meet societal and workforce needs. Recent estimates show that the nation will need 11 million more workers with some form of high-quality post-high school education by 2025 than our system is currently on course to produce.

The Gates strategy is "dedicated to building human capital" by leveraging solutions, networks and incentives. So, yeah-- apparently the whole point of post-secondary education is to provide additional vocational training so that young widget-wannabes can grow into useful human capital. That human capital would be mostly poor and first-time post-sec education folks.

The rest of this is going to sound familiar to those of you who have been paying attention to personalized competency based education, credentials, and the cradle-to-career data pipeline.

The paper lays out three areas of emphasis and planning for The Gates.

Data and Information

The stated goal here is "a comprehensive national data infrastructure that enables the secure and consistent collection and reporting of key performance metrics for all students in all institutions." So once again we also have an implied goal of standardization across all institutions (otherwise the key performance metrics won't match) as well as a far-reaching and markedly creepy data system.

The Gates sees this as critical in answering questions like "whether and which colleges offer value." The system they envision mandates the linkage of every single private and public entity that collects or holds data about individual students. The paper is talking about this mainly as a way to measure the value provided by post-secondary school, but that really doesn't make it seem any less creepy, and it doesn't take an even-slightly-paranoid person to imagine how such a database would be useful primarily for corporate employers, who could just order up exactly what they wanted from the Giant Database of Human Capital. Kind of like the creepiest match.com ever.

The Gates highlights some of the steps that have been taken to further this creepy dream (but not the steps that have been thwarted, like inBloom). All that has to happen is the giant data storage structure has to be built and everyone has to be told exactly what data points are to be collected and by what instruments and in what format. Oh, and every college and university has to agree to use the same metrics and system as every other college and university. That should be easy because colleges and universities love giving up their autonomy.

Finance and Financial Aid

Have you heard? Simply everyone in the country is talking about college affordability. Good thing you all asked The Gates to fix that while I was napping.

The Gates says the feds should make getting aid easier (they think the FAFSA is too hard, complicated and slow). The feds should also make more "resources" (aka "money") available to students to pass on to schools. Also, the aid programs should add incentives for sticking with it, getting the degree, and landing a good job. Which makes me wonder-- don't those things come with built-in incentives? And if they don't, is there a different problem that we should be looking at?

But The Gates wants some outcome-based incentives, and honestly it's a little fuzzy-- it appears we want these for both the schools and the students. For the schools, there's a real problem with such incentives: if I'm incentivized to graduate students, then I am also incentivized not to accept students who are iffy in the probably-graduate department, which would actually make college harder to get into for exactly the kind of first-generation, poverty-background students that The Gates says it's especially concerned about.
Meanwhile, The Gates is having its buddies at Research For Action look into the various implications of outcome-based funding. Outcome-based funding is always a bit of a red flag because the natural extension of the idea is the kind of system where students are rewarded for each badge or credential they achieve-- this gets us a system where students are rats in a maze and education is reduced to a series of over-simplified hoop-jumping. But there are plenty of people working on developing just such a system.

Student-Centered Pathways

Well, now we're back to the world of upside-down reformster language. The problem as The Gates sees it is that college, with its "cafeteria model" of course offerings, creates confusion and leaves students without a clear path. So to get rid of that confusion and provide clarity, why not tell them exactly what they have to do? See? Less choice is more happiness. Making students adhere to a pre-chosen path is student-centered. Also, freedom is slavery.

This glorious future will be ushered in by Integrated Student Planning and Advising for Student Success (iPASS-- seriously, I didn't make that up). This software will use predictive analytics to help students stay on the right path to the right credential (which is what we keep talking about-- credentials and not degrees).

Also, "many low-income and first-generation students face the hurdle of passing introductory general education courses offered in large lecture halls with hundreds of students." So maybe it would be better if they just took their courses on the computer.

And standardization out the wazoo. Lots of students change schools, so all their credits and courses should be transferable. Also, remediation would go more smoothly if all the high school and college standards were aligned to each other, all across the board.

In summary, only by forcing every future widget onto the same one-size-fits-all pathway can we hope to provide a "student-centered" education experience that will best prepare them to be of use to their corporate overlords.

Why, it's a brave new world!

I'll remind you that these special Certificates of Human Capital Usefulness are being directed at first-time, low-income post-secondary students. If you are a hopeful person, you'll conclude that's because The Gates wants to lead an action of social justice and economic uplift. If you are somewhat more cynical, you might conclude that folks from the Higher Classes would never allow their children to be subjected to a system that treats them like easily-shaped widgets while devaluing higher education as nothing more than advanced job training run to benefit corporations rather than human beings. It is yet another redesign of an education sector into one more tool of the Betters class, a tool to shape the worker class into More Useful (and More Easily Used) corporate tools.

The whole thing is enough to make me very tired, but I swear, I'm not going to take another nap until everyone promises not to elect Gates to fix anything else.

The Word Charters Leave Out

The sales pitch, in various versions, pops up every time charter cheerleaders are pushing charters as the Big Solution in education.


"We know how to educate poor minority students."

The implication, of course, is that public schools don't know how to get the job done. The use of civil rights rhetoric further pushes the idea that charters can rescue non-wealthy, non-white students from a public school system that either can't or won't provide them with the education they need and deserve.

The problem with this assertion, however, is a word that charter fans invariably omit from the pitch.

The word is "some."

As in, "We know how to educate some poor minority students."

And that's a problem. That single word is the difference between a pitch that makes compelling sense and one that is simply a pack of weasel words. Let me tell you why.

First, some stipulations:

I'm going to skip for the moment my usual objections that the measures being used to determine whether a school is successful or not are grade-A useless baloney. Let's just pretend for the moment that we know how to measure student success.

And we can also insert my usual disclaimer here that not all charters are problematic, and particularly back before the rise of the modern investment-driven hedge-fundie charters, there have been charters that have truly added to the public education landscape. So I don't automatically hate charter schools.

I'm also going to acknowledge right up front that we have many schools and school districts that are not doing right by non-wealthy non-white students. That problem is real, and I am not going to pretend for a moment that if we just make modern charter schools go away, things will automatically be both hunky and dory.

So, what is the problem with--

We know how to educate some poor minority students

Problem #1: That is not the gig.

The public education gig is to educate all students. All. Students. Not some, not a few, but all. One of my objections to the rise of the modern charter is that it's a quiet re-write of the public education mission-- let's stop trying to educate everyone and just focus on the chosen few, and put Those Children in the underfunded holding pen that we'll call public school.

Some charter fans are open and honest about this; Mike Petrilli has noted that a charter mission should be to give "strivers" a place to get away from Those Other Students. But other charter fans deliberately obscure their omission of "some" and tout their ability to get good results with a few students as a sign that they know something that public schools do not.

Problem #2: This is not news.

I think this is one of the things about modern charters that absolutely drives public school teachers nuts. Charters want to claim that because they can achieve success with a small, select sample of students, they Know Something About Education. Dude, those of us in public education have known since forever that if we were free to pick and choose our students and could just get rid of the ones who don't want to learn the way we want to teach, we would look like education rock stars. Everyone knows that.

So Boston charters talk about their awesome results without also talking about their awesome attrition (and non-backfill) rates. And when your charter system can point to a grand total of fifteen black males who went on to graduate from college, you are not showing us anything that public schools couldn't quickly and easily replicate-- if we were allowed to change the nature of the gig (see Problem #1).

Bottom line

Some charters cream, deliberately, as a matter of policy (like these charters in California that got caught). Some cream more organically by targeting particular parts of the market with their advertising, and of course all charters self-select for families that are more involved in their child's education (and ask any public school teacher how schools would change if we had only the students of families that cared about education).

And we don't talk enough about the importance of the no-backfill rules in operation in many charter markets, guaranteeing that no new students ever come in in the middle of a multi-year program. Again-- we already know that no-backfill would work, but that's not the public education gig.

There are charter fans who know better. Chris Barbic left the Tennessee Achievement School District noting that it's hard to raise the success rate of schools when you have to keep all the students that live in that school's community.

Anybody can do a good job of educating some students. Modern charter advocates should stop pretending they have invented the wheel. And if they really want to be honest, they can start using that one simple word-- some.

Friday, September 23, 2016

CA: Court Rejects Test-based Teacher Eval

While astro-turf group Students Matter, a front for the reformster activism of Very Rich Man David Welch, is most famous for concocting and then losing the Vergara case, they have been trying to skin the reformy cat with other knife-like lawsuits as well.



With Doe v. Antioch, Welch's group set out to compel thirteen California districts to include Big Standardized Test results in teacher evaluations. To do so, they dragged out the Stull Act (a law old enough to have been signed by Governor Ronald Reagan). The law (also amended in 1999) was supposed to require districts to base teacher evaluations on student test scores-- but it contains the words "reasonably relate," which are, depending on your point of view, either a necessary bit of slack to allow schools to handle the problem of alllllll those teachers who don't teach tested subjects (how exactly do you tie the evaluation of your phys ed teacher to the results of a math and reading test?) or an escape hatch from the law's intent.

School districts have made use of that wiggle room, and reformsters have periodically waxed cranky over the wiggling.

We have actually been down this Via del Lawsuit before-- back in 2010 Doe v. Deasy was filed in Los Angeles by EdVoice, the group used as a front by Eli Broad, Reed Hastings and Richard Merkin. The case dragged on for a while and ended in a sort of draw, with reformsters and the teachers union each getting a little bit of what they wanted-- the district could include test scores, but would have to negotiate with the union about how much the tests would count.

So Welch and his crew went back to the court to see if they couldn't do better with some Bay Area districts.

The answer was no, no they couldn't. 

You can read the forty-page decision here, but Contra Costa County Superior Court Judge Barry Goode essentially determined that the law does not clearly say what Welch's group says it clearly says. While it says that districts must do some assessy things with students and some evaluaty things with teachers, the twain are not clearly required to meet.

The statutory language is not crystalline. It does not say (as Petitioners might prefer) “each school district shall assess each teacher, in part, based on the scores his or her pupils achieve on state adopted criterion referenced assessments.” Nor does it say (as Respondents might prefer) “each school district shall assess each teacher, in part, based on how he or she uses the scores of his or her pupils on state adopted criterion referenced assessments.”

Goode also digs through the history of the act and its four different sets of amendments (1975, 1983, 1995, 1999) to see what legislative discussion might shed light on the law's intent, and he again finds nothing to indicate that testing and teacher evals were meant to be inextricably linked.

In other words, he considers what the law doesn't say as important as what it does say. Goode rather charmingly puts it this way:

That is something of a “dog that did not bark.” If the Legislature were to have changed, so dramatically, the rules for the evaluation of teachers (as Petitioners argue), then the committee or floor analyses would likely have apprised members of that. Indeed, given the controversy over standardized tests, one would expect there to have been considerable debate and public discussion of such a change.

Goode hears no barking dog, and the barking of Welch's legal team is not enough to convince him, though bark they do:

Marcellus McRae and Joshua S. Lipshutz, the lead attorneys for the Doe v. Antioch petitioners, which included both California teachers and parents, issued a statement blasting Goode’s ruling. “A teacher evaluation that ignores student learning is a farce that serves neither students nor teachers,” they declared. “The decision ignores this basic and indisputable logic and renders the Stull Act meaningless.”

The odds that the decision will be appealed seem high.

Their complaint is, of course, bogus. Well, no, that's not quite right-- a teacher evaluation that ignores student learning is a farce that serves nobody, and when your teacher evaluation is based on bad data gleaned from bad tests, that is exactly what you have. Their presumption that BS Test scores are valid measures of student learning-- that's the bogus, unsupportable baloney part. That, however, is beside the point in this case.

But in the meantime, the lawsuit demonstrates once again the danger of filing a lawsuit in hopes of "clarifying" a law-- sometimes it turns out to clearly mean something different than what you hoped for. For the moment, no school district in California is required to make student scores in the Big Standardized Test a major part of teacher evaluations.

Thursday, September 22, 2016

Wells Fargo and Making Your Numbers

As you've probably heard by now, Wells Fargo got caught building its financial strength by a technique known variously as lying, fraud, or just making shit up. Low level employees created a bunch of fake accounts linked to actual humans who had no idea their financial matters were becoming messier by the minute. At the end of the day, 5,300 low-level employees were fired, and nobody who was responsible for those employees suffered the slightest penalty.

So what does this have to do with education?



Well, first, it's one more occasion to invite those who insist that teachers should have to face accountability measures "just like they do in the real world" to just shut up. The accountability faced by executives like CEO John Stumpf was exactly zero. As Elizabeth Warren, in her highest dudgeon, dragged out of him, the entire sorry episode didn't cost him a penny and didn't cost a single high-level executive their job. This is the same "real world" accountability faced by the guys who messed up Enron and tanked the world economy in 2008-- absolutely none at all.

But mostly it's a pretty stark example of what can happen when an organization becomes focused on making its numbers.

Considerable pressure was put on front line employees to make their numbers, and employees who tried to call attention to how the business was being warped by its perverse incentives were soundly spanked and, in some cases, ruined.  

When you install shortcut proxies for actual success, and you create very high stakes around those numbers, you completely change the nature and purpose of the institution. Wells Fargo didn't just cheat and defraud customers as a matter of policy; they transformed the very nature of the company from a business built on providing customer service to a business built on making numbers. Customers were no longer people to be served, but resources to be tricked and defrauded into providing the company what it needed to keep its stock rising and its executives fat and happy.

This is Campbell's law in action-- when you use a too-simple number as a measure of a complicated network of relationships and goals in an organization, you completely twist and ultimately warp the very nature of the things you are trying to measure. When you create do-or-die goals that are impossible to meet legitimately, you corrupt the organization and make cheating the preferred culture of the institution. Those who won't cheat, who won't do whatever it takes to make their numbers, are driven out. What did the data on new accounts and instruments tell executives about how employees were doing their jobs? Nothing. Nothing at all.

Bad data plus incentivization with high stakes equals disastrous mess.

It is easy to think, "Well, even if the data numbers don't exactly precisely show us what we want to see, at least they're something, and something is better than nothing, right?" But the Wells Fargo fiasco is a reminder that chasing bad data results is not harm-neutral. And scores or simple numbers are always bad data, because to simplify complex information to a simple number or two always means losing the full picture. It's trying to judge the health of the jungle by weighing elephant toenail clippings-- you'll always be looking at a warped and limited picture, and so trying to make your numbers will always screw up your whole system.
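For what it's worth, here's a toy sketch of that dynamic, with entirely made-up numbers (not Wells Fargo data, not education data): once effort is finite and the stakes reward the proxy, the tracked number can climb while the thing it supposedly measures falls apart.

```python
# Toy illustration of Campbell's law. All of the numbers below are invented.
# Assume a branch (or a school) has one unit of effort to split between
# genuinely serving people and gaming whatever number is being tracked.

def proxy_score(real_service, gaming):
    # The tracked number can't tell the difference between real work and
    # manufactured results -- that's what makes it a bad proxy.
    return 10 * real_service + 15 * gaming

def true_outcome(real_service, gaming):
    # The people being served only benefit from real service, and the
    # gaming (fake accounts, pure test prep, creaming) actively harms them.
    return 10 * real_service - 5 * gaming

for label, gaming in [("low stakes", 0.1), ("do-or-die quotas", 0.7)]:
    real_service = 1.0 - gaming  # finite effort: gaming crowds out real work
    print(f"{label:18s} proxy metric = {proxy_score(real_service, gaming):5.1f}"
          f"   true outcome = {true_outcome(real_service, gaming):5.1f}")

# As the stakes rise, the proxy metric goes up (10.5 -> 13.5) while the
# true outcome collapses (8.5 -> -0.5): the dashboard looks better and
# better right up until the scandal breaks.
```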

You can argue that Wells Fargo executives are money-hungry greed-hounds, and that's what created the problems. It certainly greased the skids, but the mechanism that caused the entire organization to lose its way is the pursuit of bad numbers.

Using this kind of bad data is a lie. Saying "If our employees are selling more of this product, we are fulfilling our service mission as a bank" is a lie. Saying "If students are getting higher scores on the Big Standardized Test, then they are getting a great education and our schools are thriving" is a lie. And when you base your mission and your critical relationships on a lie, destruction follows.

Wednesday, September 21, 2016

Grade Inflation?

Mike Petrilli (Fordham) is concerned about grade inflation.

His concern, as expressed in a recent piece at Education Next, is hung on the hook of a recent-ish survey by Learning Heroes, a new group sponsored by the same old folks (Gates Foundation, Bloomberg Philanthropies, Helmsley) that has partnered with some other outfits funded by the same people, like Great Schools (funded by Gates, Bloomberg, Helmsley, Walton) to help sell the notion that Big Standardized Testing is Really Important and we should care about it.


The Learning Heroes survey found that 90 percent of parents believe their child is performing at grade level or better. As you might expect, I don't put a lot of stock in what Learning Heroes have to say, but I can believe that their finding on this point is not far off the mark. Setting aside the construct of "grade level" (as Petrilli also does), I'm not sure that this finding doesn't say more about parental love than parental academic acumen. Sometimes we lose sight of how a poll actually works, but I ask you to imagine for a minute-- a stranger calls you on the phone and asks you to say how smart and accomplished your child is. What do you say? "Yeah, my kid's kind of slow and behind," probably isn't it.

But for Petrilli, this feeds into a narrative that reformsters have been pushing for over a decade-- the public schools are lying to parents about what is being accomplished.

Providing a more honest assessment of student performance was one of the goals of the Common Core initiative and the new tests created by states that are meant to align to the new, higher standards.

That's Petrilli's polite way of putting it. Arne Duncan, you will recall, said that white suburban moms were going to be upset to find out their kids weren't as smart as they thought. At one point reformsters were trying to sell us the Honesty Gap, a method of crunching numbers to determine just how much your state education system was lying to you. This has been the recurring narrative-- your teachers, your schools, even your state, has been lying to you about how well your kids are doing, and only federally crafted standards backed up by Big Standardized Tests can tell you the truth.

Petrilli is no dope; he understands the challenge here for testing industry salespersons:

Conscientious parents are constantly getting feedback about the academic performance of their children, almost all of it from teachers. We see worksheets and papers marked up on a daily or weekly basis; we receive report cards every quarter; and of course there’s the annual (or, if we’re lucky, semiannual) parent-teacher conference. If the message from most of these data points is “your kid is doing fine!” then it’s going to be tough for a single “score report” from a distant state test administered months earlier to convince us otherwise. After all, who knows my kid better: his or her teacher, or a faceless test provider?

He dismisses the old test reports as impenetrably complex, and touts instead the new, improved PARCC reports, which are transparent in the sense that one can clearly see they provide next to zero useful information. But Petrilli argues that these reports soft-pedal the real results, and that we should look sixth graders in the eye and give them the cold hard truth. Nobody, he says, wants to incite a riot and "tell parents to grab a pitchfork and march down to their school demanding an explanation for lofty-yet-false grades their kids have gotten for years on end," but on the other hand, he says, "maybe they should."

This is the new pitch. Grade inflation. Petrilli has been asking folks to chime in with a possible solution to the grade inflation problem. My response, when asked, is that first I have to be convinced it exists.

Petrilli's basic argument is that grades are high and BS Test scores are low, therefore the grades must be inflated. There are several problems with his assumptions here.

1) He assumes that the Big Standardized Test is an accurate instrument for measuring student achievement. There is virtually no reason at all to believe that is true, and many, many reasons to believe it isn't, from the narrow focus to the multiple-choice approach to the just-plain-lousy questions.

2) Even if we assume that, say, the PARCC is a good, solid, reliable, valid test-- which is a huge ungrounded assumption, but let's play along-- we still have to face the first hurdle, the problem of getting students to take the BS Tests seriously. We repeatedly discover that they do not.

3) Even if we assume that the BS Test is a "good" test and that the students tried their hardest on it, we still don't have any evidence that a good score on the BS Test is an indicator that the student is headed for college success and a good life thereafter. Petrilli touts the "predictive analytics" of Ohio, but all that boils down to is a way to use previous performance on a standardized test to predict future performance on a standardized test. Big whoop. When Taking Standardized Tests becomes a lucrative career option, then we may have something here.

Petrilli wants parents to understand that their kids need to "step it up" and hopes that we see a day when As and Bs are only handed to those who are on track for success. But that opens a whole other question-- are grades supposed to be a predictor of future success or a measure of current achievement?

Well, let's set all that aside for the moment and consider his main issue-- is there grade inflation happening?

My completely unscientific answer is, "Probably maybe in some places." There may well be grade inflation on the lower end of the scale, where teachers may feel pressure to make sure that too many students don't flunk their class (particularly if dealing with the kind of learning support department that demands that students with special needs be passed No Matter What). Schools that practice the business of allowing students to just keep redoing work until they pass might fall under this category, as might schools that use social promotion to move on elementary students for reasons other than academic achievement. The problem in addressing all of these cases is that we have no objective yardstick by which to measure what a student should "really" be receiving as a grade (and as much as reformsters would like PARCC, SBA, etc to be that yardstick, they fail miserably at the task).

It's a complicated problem with no easy answers, made more tricky by the fact that "grade inflation" often occurs through the mechanism of overruling the judgment of the classroom teacher. The whole topic is worthy of discussion.

Meanwhile, there is supreme irony in Petrilli's raising of the issue. We know where the most extreme and notorious grade inflation has occurred over the past few decades-- in colleges and universities. And it has occurred, arguably, because of the unleashing of market forces. College students and their families have come to see themselves as customers and are comfortable declaring, "I didn't give this school $100 grand of my money just to see Junior end up with Cs and Ds in classes that I pay for. Fix it!"

This is, of course, precisely the sort of market force that Petrilli and other charter fans want to unleash in the K-12 world, transforming families into "customers" who must be kept happy if the charter wants to avoid losing revenue. Giving families the ability to "vote with their feet" unleashes the very forces that contribute to and push for grade inflation in schools. Let's add that to the discussion. 

Tuesday, September 20, 2016

USED, Pay for Success and Stupid Pre-K Plans

The United States Department of (Privatizing) Education is touting another boneheaded idea, this time aimed at preschool and using yet one more unproven approach-- pay for success.

What is that, exactly? Here's the explanation from the USED FAQ page:

Pay for success (PFS) is an innovative contracting and financing model that aims to test and advance promising and proven interventions while paying only for successful outcomes or impacts for families, individuals, and communities. Through a PFS project, a government (or other) entity enters into a contract with an Investor to pay for the achievement of concrete, measurable outcomes for specific people or communities. Service providers deliver interventions to achieve these outcomes. Payments, known as Outcomes Payments, are made only if the intervention achieves those outcomes agreed upon in advance. The government (or other) entity makes Outcomes Payments to repay Investors for the costs of services (and sometimes other projects costs) plus a modest return. Ideally, Outcomes Payments amount to a fraction of the short- and long-term cost savings to the government (or other) entity resulting from the successful outcomes.  



Pay for Success is the zippier nom de guerre of Social Impact Bonds. If you want my "for dummies" explanation, you can look here. If you want a grown-up's fully detailed explanation, complete with sad history, I recommend this piece by Tim Scott.

The basic idea is this. The government gets an investor to foot the bill for what's supposed to be a government program. Then, if the task is completed successfully for less than the government had set aside to do it, the government reimburses the investor for the program costs and, as a bonus, the taxpayers' "savings" are magically transformed into the private investor's "earnings." It is the big-time version of telling the babysitter, "Here's ten bucks to get supper. You can keep whatever change there is."
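Since the FAQ language is abstract, here is a minimal sketch of how the outcomes-payment arithmetic could work under the babysitter framing. The dollar figures, the 10% "modest return," and the outcomes_payment function are all hypothetical, invented just to make the money flow visible; real contracts and metrics are far messier.

```python
# Hypothetical sketch of the Pay for Success money flow described above.
# All figures (the $10 budget, the $7 cost, the 10% return) are invented
# for illustration only.

def outcomes_payment(program_cost, agreed_return_rate, target_met):
    """What the government pays the investor once the program ends.

    program_cost: what the investor fronted to the service provider
    agreed_return_rate: the 'modest return' promised if outcomes are hit
    target_met: did the pre-agreed metric (say, a test-score cutoff) clear?
    """
    if not target_met:
        return 0.0  # investor eats the loss; government pays nothing
    return program_cost * (1 + agreed_return_rate)

# The babysitter version: the government budgets $10 for supper,
# the investor fronts $7, and the agreed return is 10% on top of costs.
budget = 10.00
fronted = 7.00
payment = outcomes_payment(fronted, agreed_return_rate=0.10, target_met=True)

print(f"Government pays the investor: ${payment:.2f}")           # $7.70
print(f"Nominal taxpayer 'savings':   ${budget - payment:.2f}")  # $2.30
# The 70-cent return is the slice of the taxpayers' 'savings' that becomes
# the investor's 'earnings' -- and every dollar the provider doesn't spend
# on kids widens that margin, which is the incentive problem listed below.
```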

Let me rattle off just a quick list of why this is a dumb way to do business in the-- well, it's actually a dumb way to do just about any sort of business, but let's stick to why it's a dumb way to do business in the education sector.

1) It literally sets the interests of the contractor against the interests of the children. Every dollar that the contractor spends on children is a dollar the contractor doesn't get to keep.

2) It builds a system around doing the absolute least we can get away with. "Spend the least you can get away with," say Social Impact Bonds, "to get the lowest acceptable results." Nobody tells their children's school, "I want to know that you are spending the least money you can get away with to get the minimum acceptable education for my child."

3) If it remains in place, it guarantees that somebody is going to get screwed. Go back to the babysitter example. I learn that the babysitter has successfully (or at least acceptably) fed my children for seven bucks. Why would I continue to hand her ten? Once we have established the cost for which the job can be done, all future negotiations will be about how much profit for her I build into my suppertime financing. If a Social Impact Bond program were ever to succeed well enough to last longer than a year, that would put the government in the position of deciding how many taxpayer dollars the contractors would be handed as profit, and either the taxpayers or the contractor gets screwed.

4) It not only encourages, but actually requires metrics for success that are simple and simplistic and completely inadequate for measuring actual success in a complex system like education.

5) It adds a not-very-helpful extra layer of bureaucracy. The investor deals with the government, and the contractor of the service deals with the investor. This creates a nice layer of plausible deniability for the government when the programs violate any rules-- kind of like when famous celebrity Chatty Talksalot hires a McCorporation to make her branded clothing, and McCorporation in turn hires subcontractors in the Third World to run a sweatshop, and then Chatty can say, "What?! I had no idea!" 

USED would like to graft this Pay for Success idea onto its terrible ideas about preschool, as captured in just one paragraph from their press release:

We should have a greater focus on evidenced-based practices, on measuring and improving outcomes for our youngest learners, and more incentives for promoting innovative approaches that promise to further improve child outcomes.

As we've seen, "evidence-based" is a meaningless weasel phrase. And as soon as we start talking about "measuring and improving outcomes" for four-year-olds, we are just plain full of it. Four-year-olds do not need to sit down and take a test so that their outcomes can be measured. They do not need to be run through academics-based programs. They need to play. They need to explore. And they need to do it in an environment in which they are not required to demonstrate "outcomes" to officious adults.

PFS is not a substitute for government funding, but a different way of providing government funding – one based on rigorous evidence of impact once positive outcomes have been achieved.

Baloney. There is no "rigorous evidence of impact once positive outcomes have been achieved" with four-year-olds (probably not with sixteen-year-olds, either, but let's set that aside for another day). There is no evidence to indicate that the USED has a clue what rigorous evidence of preschool success would look like, and of course for a PFS program, it would have to look like something simple and easy to measure.

So if we tell McCorporation "We'll give you a hundred bucks for every kid who scores better than 75% on this reading test," what do you suppose the preschool program is going to look like? Not like anything that a small child actually needs to experience. This is a terrible idea for taxpayers, small children, and their families. But it's an awesome idea for investors who want to hoover up some of those sweet, sweet education tax dollars.