It has been almost a month since the NAEP scores dropped, and some folks are still trying to torture some sort of useful insight from the numbers (here's Mike Petrilli at Fordham writing a piece that could be titled "What to learn about getting better at hitting the wrong target").
The world of education is a fuzzy one, with some declaring that teaching is more art than science. But then the National Assessment of Educational Progress is issued. “The Nation’s Report Card” is greeted as a source of hard data about the educational achievement of fourth and eighth graders (and in some years, high school students), theoretically neither biased nor tweaked as state tests might be.
NAEP scores were released three weeks ago, and they have been percolating down through pundits, ed writers, ed bureaucrats, and ordinary ed kibitzers. So now that we have had weeks to absorb and process, what have some folks offered as important lessons, and what’s the only lesson that really counts?
Some have offered lessons that are simply misreadings of the data. The three NAEP levels (basic, proficient, and advanced) do not necessarily mean what folks think they mean, which is why Secretary of Education Betsy DeVos was incorrect when she claimed that NAEP showed two thirds of students don’t read at grade level. NAEP’s “proficient” is set considerably higher than grade level, as noted on the NAEP site. (This is a lesson that has to be relearned as often as NAEP scores are released.)
It’s worth noting that there is some debate about whether or not NAEP data says what it claims to say. There are arguments about how levels are set, with some arguing that the levels are too high. An NCES report back in 2007 showed that while NAEP considers “basic” students not college ready, 50% of those basic students had gone on to earn a degree. A 2009 report from the Buros Institute at the University of Nebraska also found issues with NAEP results. It’s possible that those issues have been tweaked away in the decade since, but that would have implications for any attempts to trace trends over all that time.
NAEP is extraordinarily clear that folks should not try to suggest a causal relationship between scores and anything else. Everyone ignores that advice, but NAEP clearly acknowledges that there are too many factors at play here to focus on any single one.
Betsy DeVos argues that the NAEP scores show that the U.S. needs more school choice. Jeanne Allen of the Center for Education Reform, which has long supported charter schools over public schools, argues that the NAEP scores are evidence that the U.S. public education system is failing. Former Secretary of Education Arne Duncan argues that the scores are proof that the country must courageously pursue more of the reform initiatives that he launched while in office. Mike Petrilli of the Fordham Institute called the poor results “predictable” as he blamed them on the Great Recession, and pointed to a few small data points as proof that the kinds of reforms backed by Fordham work. The National Council on Teacher Quality claims that the static scores are the result of college teacher education programs that don’t teach teachers the proper ways to teach reading and math. It’s clear that when your only tool is a hammer, the NAEP looks just like a nail.
Critics of education reform like Diane Ravitch note that the NAEP scores show that a “generation of disruptive reform” has produced no gains, that the NAEP trend line stays flat. DeVos singled out Detroit as an example of failed policies, yet the policies that have failed in Detroit are largely those reform policies that she herself pushed when she was an education reform activist in Michigan. And some policies may improve scores without actually helping students; Mississippi in 2015 joined the states that held back students who could not pass a third grade reading test, meaning those low-scoring students would not be in fourth grade to take the NAEP test. It would be like holding back all the shorter third graders and then announcing that the average height of fourth graders has increased.
In all discussions, it’s useful to remember that the increases or decreases being discussed are small: a difference of just a few points up or down. NAEP scores have shown neither a dramatic increase nor a dramatic decrease, but a sort of dramatic stagnation. That is arguably worse news for education reformers, who have been promising dramatic improvements in student achievement since No Child Left Behind became law almost twenty years ago.
So what’s the one actual lesson of NAEP? One continuing belief for some students of education policy is that if we just had some cold, hard data, we could really get some stuff done. We could settle arguments about curriculum and pedagogy and policy, and by making data-driven decisions, we could steer education into a new golden age.
Well, here’s our regular dose of cold hard data. It hasn’t settled a thing.
That’s the one actual lesson of NAEP: the dream of data-informed, data-driven decision making as a cure for everything that ails us is just a dream. Data can be useful for those who actually want to look at it. But data is not magical, and in education, it’s fruitless to imagine that data will settle our issues.
Originally posted at Forbes.com