Look, it's not that I want everyone to stop any discussion of Big Standardized Test scores at all forever (okay, I might, but I recognize that I'm a radical on this issue and I also recognize that reasonable people may disagree with me). But what I really want is for everyone to stop pretending that the BS Test scores are an acceptable proxy for other factors.
But here comes a new "data tool" from Stanford, and watch how EdWeek opens its piece about the new tool:
An interactive data tool from the Educational Opportunity Project at Stanford University creates the first database that attempts to measure the performance of every elementary and middle school in the country.
The data set not only provides academic achievement for schools, districts, and states around the country, but it also allows those entities to be compared to one another, even though they don't all use the same state tests.
No.
No no no no no NO no no no hell no. The tool does not measure the performance of every school in the country, and it does not provide academic achievement either. It allows folks to compare the math and reading scores on the BS Tests across state lines. That's it. That's all. It's a clever method of comparing apples to oranges, but that's all. Academic achievement? It covers two academic areas, and not very well at that.
(And while I'm ranting, let me also point out that schools do not perform. Students, teachers, staff, other human beings-- they perform. Schools sit there. If we start talking about performing schools, before you know it we start spouting dumb things like "low-achieving schools have a large number of low-scoring students" as if that's an analysis and not a definition.)
But maybe this is a press and reporting problem. What does the tool claim to actually do?
We’re measuring educational opportunity in every community in America.
The Educational Opportunity Project at Stanford University has built the first national database of academic performance.
That's not promising.
The tool actually provides three "measures of educational opportunity" and those are 1) average test scores, 2) learning rates, and 3) trends in test scores. And the explanations are-- are we sure this thing came from Stanford?
You may, for instance, wonder what the heck average test scores have to do with educational opportunity. Here is the short explanation on the front page of the website:
The educational opportunities available in a community, both in and out of school, are reflected in students’ average test scores. They are influenced by opportunities to learn at home, in neighborhoods, in child-care, preschool, and after-school programs, from peers and friends, and at school.
"Are reflected" is weasel language, meaning "probably has some sort of connection to." It might be useful if the project looked at what provides, for instance, opportunities to learn at home. But we're just going to go with "every single thing in the child's environment has some effect on her test scores, probably."
Learning rates are, of course, growth scores, which the report says "are a better indicator of school quality than average test scores." The notion that student growth is at least as important as raw scores is not new, but I'm going to once again get on my high horse about this indicating school quality. That is only true if you think the mission of a school is to get students to do well on a poorly designed standardized test of reading and math. Is that the sum total of school quality? Nothing else you want to consider, like other non-math and non-reading programs, or a safe environment, or caring teachers, or even good facilities?
And trends in test scores?
Tracking average test scores over time shows growth or decline in educational opportunity. These trends reflect shifts in school quality as well as changes in family and community characteristics.
And this goes back to my point above-- if you change the population of a school, you change the "school performance" because "school performance" is really "student test scores."
These explanations of how these three methods of massaging test score data tell us anything about educational opportunity or academic achievement or school effectiveness-- they may seem perfunctory and thin, but that's all we get. We are just meant to accept the notion that a score on a standardized math and reading test gives us both a full picture of how well a school is doing and a measure of the educational opportunities available to students. Just how magical are these Big Standardized Tests supposed to be?
All of that said, there is a ton of data here, available in interactive map form. That data is just about standardized test scores, but there are still some interesting things to see. For instance, Florida absolutely sucks at growth in student scores, which is ironic considering Florida's huge BS Test fetish. Arkansas is also pretty lousy, as are Kansas, North Carolina, Alabama, and Wisconsin. They also thought to run average test scores against SES for districts and lo and behold, there's the same result that we've confirmed over and over-- the strong correlation between poverty and test scores.
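(If you want to see that relationship for yourself rather than take the map's word for it, the district-level data can be downloaded and checked with a few lines of code. Here's a minimal sketch in Python, assuming a hypothetical CSV export with an SES composite column and a mean score column-- the actual SEDA download uses its own filenames and column names, so adjust accordingly.)

```python
# Minimal sketch: checking the poverty/test-score correlation in district data.
# Assumes a hypothetical CSV with columns "ses_composite" and "mean_test_score";
# the real download uses its own file and column names.
import pandas as pd

districts = pd.read_csv("seda_district_scores.csv")  # hypothetical filename

# Keep only districts with both values so the correlation uses complete pairs.
complete = districts.dropna(subset=["ses_composite", "mean_test_score"])

# Pearson correlation between socioeconomic status and average test scores.
r = complete["ses_composite"].corr(complete["mean_test_score"])
print(f"Correlation between district SES and average test score: r = {r:.2f}")
```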
Is some of that test data stuff worth discussing? Maybe, but not if we're going to insist that those scores are somehow proxies for much larger, broader concepts like school effectiveness or for nebulous concepts like educational opportunity. If we are going to have useful, meaningful discussions about education we have got to-- GOT TO-- stop pretending that we have data that tells us things that it absolutely does not tell us.
" . . . the first database that attempts to measure the performance of every elementary and middle school in the country."
You'd think that the folks at Stanford and EdWeek would be smart enough to understand that schools do not "perform." Schools are providers: providers of opportunity, support, high-quality instruction, safety, comfort, motivation, structure, care, feeding, transportation, enrichment, socialization, etc.
Smart enough to realize that ELA scores in particular reflect a lifetime of language acquisition that occurs mostly far beyond the confines of the 6-hour day, 40-week school year that starts at age 5.
"The data set not only provides academic achievement for schools, districts, and states around the country, but it also allows those entities to be compared to one another, even though they don't all use the same state tests."
Flat-out embarrassing statement! This alone disqualifies them from meddling in education.