Chad Aldeman took to the Bellwether blog to make his case for summative school ratings (grades) under the loaded headline "Summative Ratings Are All Around Us. Why Are We Afraid of Them in K-12 Education?"
Of course, plenty of us, maybe even most of us, are not "afraid" of slapping a grade on schools. There just don't appear to be many benefits, and there's plenty of harm done. Aldeman provides a list of his positives. Let's see how they stack up.
1. Summative ratings are all around us.
Perhaps Aldeman somehow skipped that part of childhood where some adult authority figure said, "If everyone else jumped off a cliff, would you do it, too?" He correctly notes that ratings are all the rage, from Amazon to Rotten Tomatoes. But he also notes that customers who are interested in purchases will read the reviews, and reading through all the reviews on Amazon or Yelp is pretty much the opposite of a summative rating.
Of course, this sort of system doesn't always work out well. TripAdvisor, an app and service that collects reviews (and makes summative ratings) of hotels and motels, ironically itself gets a one-star rating from Consumer Affairs, backed up by hundreds of tales of the rating service being skewed in any number of ways, often because of one sort of relationship or another with those being rated.
Aldeman might also have noted the long-standing summative rating used in the investment world, where investments are rated A or AAA or some lesser letter. If you think back to 2008 and all the people who lost their shirts, pensions, or homes because a whole lot of highly summatively rated investments turned out to be the result of big fat lies-- well, that summative rating system failed as well.
So there are lots of summative rating systems out there-- and many of them kind of suck. And yes, some folks on my side of the debate table sometimes trot these summative ratings out-- I don't like it any better then.
2. Summative ratings are popular
No doubt about it. When it comes to some low-stakes decisions, people just like a simple up-or-down rating system. But the higher the stakes, the less satisfactory a simple summative rating system becomes (I'm just going to start calling this SRS because I'm a lazy typist). Lots of people would say, "Let's just go to a five star restaurant, whatever it is." Hardly anybody says, "I would sign up for a romantic match site that just rated all the people with stars, and I would marry any five-star person, sight unseen."
Summative ratings are popular because people don't like to agonize over low-stakes decisions. But when it comes to high stakes decisions, they want as much information as they can get, not a quick summary. Schools, for most parents, are not low-stakes decisions.
3. Summative ratings are simple and easy to understand, but they’re not one-dimensional.
Here Aldeman and I disagree. Of course summative ratings are one-dimensional. That's the whole point-- to take a whole bunch of dimensions and simplify them to one quick, easy rating. Now, here's where we agree:
Inevitably, there’s no one “best” car for everyone, and there will never be one “best” school for all kids, but that doesn’t mean we should throw up our hands and give up in trying to help families weigh their options.
True enough. I just don't believe that an SRS is a useful tool in this circumstance. If you are reading through the Amazon reviews or the Consumer Reports descriptions or the college guide narrative paragraphs of description, that is not a summative rating.
4. If states don’t rate their schools, someone else will.
Oh, Chad Aldeman. I've read lots of your stuff, and you are definitely better than this argument, which can also be used to establish the State Department of Graffiti, the State Meth Production Lab, and the State Office for the Production of Bad Fan Fiction. These are all things that someone else will do anyway.
For that matter, shouldn't a right-tilted thinker like Aldeman prefer that someone else do it? After all, do we look to McDonald's for a rating of their own menu, or depend on car reviews from car manufacturers? I'm not an advocate of SRS for schools at all, but I would think that the same folks who think most education functions should be run by private enterprise would not also suggest that private enterprise run the rating system. After all-- whether we're talking public schools or charter schools, the state is certainly not a disinterested party.
No, this idea fails twice.
5. ESSA’s authors clearly envisioned states creating summative ratings.
Absolutely agree. ESSA clearly calls for an SRS. Of course, ESSA clearly respects the right of parents to opt out of the Big Standardized Test while also clearly demanding that states force at least 95% of all students to take that test. And that's before we get to the spirited arguments between Congress and the USED about what ESSA says. So I think it's safe to say that ESSA is still working out some of its issues.
6. Summative ratings force schools to improve.
If there’s one thing that’s clear from 13 years of No Child Left Behind, it’s that schools respond to external accountability pressures. They sometimes respond in unhelpful ways, of course, so the challenge is to design accountability systems that encourage schools to focus on measures that truly matter (which is all the more reason states should be involved).
Let me edit that down a bit: If there's one thing that's clear from 13 years of No Child Left Behind, it's that schools pretty much always respond to bad external accountability pressures in terrible ways.
NCLB, RTTT, and RTTT Lite (Waiver edition) all made test-and-punish the centerpiece of accountability, and that predictably narrowed the focus of every school in the country to "whatever is on the test." Schools did not improve. Some schools got their test scores to go up, but there isn't an iota of evidence that a test score increase has anything in the world to do with a school getting "better." All of this guaranteed that SRS would be a joke, a letter grade based on how students did on a bad, narrow test.
Furthermore, this point falls back on the old notion that if we want schools to improve, we will have to force them to do it with threats and coercion. Reformsters really need to catch on that approaching schools and teachers with an attitude of, "You guys suck, so we are going to beat you into shape," really isn't helping.
A Few Points of My Own
There's a whole discussion to be had about whether SRS actually tell us anything or are just one more way to say "This school is rich, and this school is poor"-- in other words, data that's already readily available. And as always, I'd feel better about "finding" these troubled schools if the response were more "Let's get this school the support and resources it needs" and less "Let's turn this school over to a charter operator or a turnaround expert."
But let me skip those issues and just list three objections to the idea of summative school ratings.
1. Campbell's Law Again
The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.
If this law hadn't already existed, ed reform would have sparked someone to invent it. If high stakes are attached to these kinds of school measures, they transform a school from an institution whose primary purpose is to educate students into an institution whose primary purpose is to keep the numbers of its measure up.
2. Reductive Measurement Further Warps That Which It Measures
If we decided that the summative measure of buildings was going to be based only on height and width, we would end up with really cool buildings not deep enough to actually step into. If we decided the summative measure of food would be based strictly on how many colors were displayed, we would end up with garish food that tasted terrible.
When your SRS is based on an over-simplified measurement that ignores several dimensions of whatever's being rated, you end up with useless ratings and screwed-up things being measured. And when you are trying to come up with a simple summative rating for something as complicated as a school, it's absolutely guaranteed that your rating will ignore a huge number of critical dimensions. We've already seen this in thirteen years of schools being rated on math and English scores, and consequently cutting everything from recess to science to the arts.
A school is a hugely complicated system of live human beings, each one of which is also highly complex. There is no way to come up with a summative rating that is not reductive well past the point of usefulness and well into the realm of destructiveness.
3. School Turned Upside Down
One of the side-effects of the past decade-plus of accountability has been to turn schools upside down. Because the school must keep its numbers up and meet its accountability requirements, the school is no longer there to serve the student-- the students are there to serve the school. Is Chris dragging us down with lousy test scores? We'd better pull Chris out of a bunch of classes and park Chris in the land of remedial test prep every day until that score comes up.
Charters have always recognized this-- let a bad student in to ruin your numbers and there's hell to pay. Better to counsel them out or keep suspending them till they give up. And public schools are not immune. In Upper Darby, PA, where the district is discussing moving the school attendance boundaries, parents are objecting because Those Students will hurt our school rating.
When the primary objective of a school is to make its numbers so that its summative rating doesn't take a hit, it's very easy to start seeing students as obstacles or problems-- not the whole purpose of the school.
Ultimately my objection to summative ratings for schools is that instead of giving schools one more tool for helping students, they get in the way of doing that job-- the most important job we have in schools.