Tuesday, March 19, 2024

BS Test Blows A Billion

It's hard to track each and every sad side effect of the unexamined, unsupported assumption behind the Big Standardized Test, our annual adventure of administering a mediocre math and reading test and then pretending that we have somehow measured how well the whole Education Thing is going. But here's one more bad example.

The feds had a whole grant program called Investing In Innovation (i3). It ran from 2010 to 2016 and doled out $1.4 billion to universities, school districts, and private outfits in a total of 172 grants to develop, validate, or scale up shiny reformy ideas. According to the US Department of Education, that innovation is "important to help improve student learning and close equity gaps nationwide," and the goal of this program was "to build high-quality evidence about effective educational strategies and to expand implementation of these strategies."

And all of that pretty language about how to "improve student learning" and find "effective educational strategies" just means "raise scores on the BS Test."

The Institute of Education Sciences (IES) is the arm of the feds that serves as a sort of test lab for education stuff, and they've done a study of just how well i3 grantees worked out by sifting through the research done about those various programs. So how well did the programs work?

The short answer is, "Not great."

The long answer is, "Nobody is even asking the right questions."

Of the 172 grants, only 148 had completed research that could be reviewed. Of those, only 26% showed a program that actually had a positive effect. A small number had a negative effect, and the rest, roughly three-quarters, showed no effect at all.

Grants were grouped by the different sorts of effects they aimed at. One small group of grants targeted student attendance and completion, outcomes that can actually be measured in a reasonably accurate manner.

The rest were aimed at "student performance" in academic areas, plus a small group aimed at social emotional learning. The biggest number of programs wanted to improve classroom instruction, which in most cases meant either more teacher PD or developing and instituting curriculum and materials. For all of these student performance areas, the most important question to ask is "How do you think you measured that?"

Improving teaching and materials in the classroom is a worthy goal. But this review is a reminder that using the BS Test to see how we're doing is self-defeating. It's like looking for your lost car keys under a streetlamp 100 yards from where you dropped them because the light is better there.

It is amazing to me that after all these years, so many folks are still talking about BS Test scores as if they are not just a true and accurate measure of educational effectiveness, but THE true and accurate measure. 

They aren't. They never have been. They remain an effort to gauge height, weight and health of an elephant by examining its toenail clippings. Their effect on education is the most prolonged, debilitating example of Campbell's and Goodhart's Laws in action, a situation in which some have so mistaken the measure for the Thing Itself that they are wasting time and so very much money. 

What the IES report tells us is that in a billion-dollar game of darts, a whole lot of people missed the wrong target. I don't know what that information is worth, but it sure isn't worth $1.4 billion. 

3 comments:

  1. Craptastic! What a waste of tax payer dollars.

  2. With $1.4 billion they could have supported lots of community schools that address the needs of the whole student which, in turn, would have improved attendance and graduation rates. Testing is simply rating and ranking; it is not a program that makes a difference in the lives of students.

    1. And we could save SO much time and money ranking and sorting if we just line up and "grade" schools by SES. That's all the BS Tests show.
