Recently Chad Aldeman (Bellwether) ran an interview with a school ratings "expert," Christy Hovanetz. Hovanetz is from the Jeb Bush wing of the reformster world, having cut her teeth on Florida Ed policy in 1999, graduating to Foundation for Excellence in Education, the reform advocacy group that was supposed to help Jeb! use education to help boost his Presidential prospects. Boy, life just throws some crazy curve balls, doesn't it?
Florida was a leader in the rush to slap unsubstantiated letter grades on schools, and Hovanetz has since taken her show on the road. Aldeman wanted to ask her about what she'd learned, and the resulting interview tells us a lot about the fundamental flaws underlying most school rating systems.
Hovanetz starts right out with some classics. Parents need to know if a school will be a good fit. Taxpayers deserve to know that they are getting their money's worth. And my favorite-- "what gets measured gets done." In other words, school rating is really a backdoor method of taking control of schools.
As the interview proceeds, other purposes for school ratings also emerge. For instance, there should be "a transparent way to report information that people understand and can use to improve student outcomes." And "The whole goal of accountability systems is to make sure that students are learning."
So now we're up to at least five goals-- marketing information for the "customers," quality assurance reports for the taxpayers, actionable data to drive instructional choices, evaluation of student progress, and a tool for allowing whoever's in charge of ratings to inject their own agenda into schools. That is a huge, huge order for any evaluation system, particularly one from which the only outcome is a single letter grade.
But wait-- there's more. And it's arguably the worst of all.
As we work with states, we want to make sure they are providing information to parents and the public as to whether or not students will be successful once they leave the K-12 system.
There are certain policy ideas that signal to me that a person simply isn't serious about education or any of the things they say about it. This is one of them. You cannot know, ensure, predict or otherwise provide information about how successful students will be after they leave the system, and if you claim such a thing is possible, either you are a fool or you think I'm one. It cannot be done.
I mean, let's imagine a family, highly successful, with all the advantages. Two of their sons (let's call them Jed and, um, Blorge) look like they're on two entirely different paths. Blorge is a bad student, a party boy, and repeatedly needs to be bailed out or propped up by help from the family's contacts. Meanwhile, Jed works hard, makes all the right marks, shows himself to have all the right stuff. Early on, the family would have predicted that Jed was destined for Presidential dreams, while Blorge would probably just have to draw a salary as a figurehead at whatever business someone could set him up with. Nobody at the end of the K-12 years would have predicted that someday Blorge would be comfortably retired from the highest halls of power while Jed would have to repeatedly slink home after suffering a series of campaign swirlies from a well-heeled jerk.
So do not-- do not-- claim that any system at all can tell parents whether or not Generic Area Schools will aim their child at success.
Hovanetz has more to explain. Turns out that the single letter grade is a sort of attention grabber, and once the overall impression has been made, a good system lets you dig down deep, into, you know, stuff.
Being able to draw in parents, the public, policymakers, and others who are interested in education, we need something to be able to say, “This particular school is high-performing or not a high-performing school,” and then provide additional information that supports that letter grade.
So now we're up to seven purposes.
Is there a way for states to check their work? Hovanetz suggests checking your work against the NAEP, which brings us back to the same old question-- if NAEP is the benchmark against which you judge effective ratings, why not just use the NAEP as the rating instrument? The answer is "because the NAEP isn't a very good benchmark," but that of course means it isn't a good measure of your measurement system, either. Also, Hovanetz says to check against your graduates' college completion rates and how much they make later in life, though a ton of research says that we can predict those things while the students are in kindergarten just by looking at their collective socio-economic information.
Hovanetz does avoid the classic "multiple measures" dodge and goes ahead and argues for the narrowing of education.
Some states might be inclined to try to accommodate every single wish or desire of all stakeholders in a state, including things that may not be as important as whether or not kids are learning to do math or learning to read. Including those extra measures can dilute those really important things that students need to learn in school.
Remember, part of the reason we rate schools is because what gets measured is what matters. So use your rating system to focus in on only the important parts of school (because, of course, we all know and agree on exactly which parts of school are important and which are not) and get the educational program narrowed down to just that stuff. If the message from the state is, "Teach only math and reading," then that should be fine.
There is a strong desire to expand beyond just academic indicators—including a measure of growth is very important—but including things that are not direct learning outcomes and focus more on environment and other input measures blurs the vision on what we want students to know and be able to do. All of those things support a strong learning environment, and will indirectly lead to success, but do not in themselves measure success. It’s trying to balance what’s important and what we want from student outcomes versus what it takes to put those conditions in place. Including too many things in the system complicates it and reduces the importance of student outcomes that we’re really looking for.
Who is this "we," and what are the outcomes that "we" have decided are the only important ones? And does Hovanetz really think that parents only care about reading and math scores when they ask the question, "Is this school a good fit for my child?" Does she really think that taxpayers mean, "Just tell me about the reading and math scores-- nothing else matters" when they ask if they're getting their money's worth? For that matter, does she think taxpayers are saying, "I don't care what you do with my money as long as there are good reading and math scores"?
She does offer a surprising new idea-- never mind breaking out the subgroups, and just focus on the low achievers. This seems like an unusual approach even for reformsters. We know that, when it comes to future success after K-12, SES has a huge impact. The highest-achieving poor kids still fall behind the lowest-achieving rich kids. Hovanetz wants us to ignore all data except the test scores, but if she also wants to predict future success prospects for students, she can't ignore other data. And when it comes to being accountable to taxpayers, test scores are not enough of a measure of what has been accomplished-- not all students are equally cheap and easy to coach across the finish line.
But she's concerned that ESSA opens the door to including too many factors and giving states too many choices.
And by the end of the interview, she is still not done putting requirements on the many magical goals that a school rating system must accomplish.
They should make sure to create a system that is equitable and levels the playing field across all schools. They should not create a situation where some schools are accountable for 25 things and other schools are only accountable for five things.
So the system should truly be one size fits all.
But her final flourish is-- well, special. Because Hovanetz wants states to be sure that they don't put "perverse incentives" into their rating systems (for instance, a rating that covers number of expulsions might lead the school to keep many Bad Actors in the school and damage the learning environment). I don't disagree, but I would point out that a system which leans heavily on the test results of very few subject areas to define the success of the entire school is one of the most perverse incentives of all, leading to brutally narrowed curriculum and instruction at the expense of many other elements needed to help young students grow into fully rounded and functional adults.
Final note. Some reformsters are not fans of my tone (this is not a reformster thing-- some actually are fans of my tone), but I am not arbitrarily snarky or sarcastic. There are reformster arguments out there with which I absolutely and fully disagree, but which are constructed out of a serious, thoughtful approach to real issues in education. However, there are some arguments which are simply talking points stitched together without any thought to intellectual honesty or serious consideration of the issues. Hovanetz is probably a lovely person and a decent human being, and if she ever shows up in my neighborhood, I'll buy her a cup of coffee, but her arguments are ill-considered marketing copy for selling bad policy ideas and advocacy to political operatives. And that I just can't take seriously.