Monday, June 4, 2018

EdNext and the Beanstalk

In the Fall 2018 issue of Education Next, Daniel Hamlin and Paul Peterson ask the question "Have States Maintained High Expectations for Student Performance?" The correct answer, it turns out, is "Ask a different question."

Magic? Or just tasty?
Hamlin and Peterson note that ESSA gave states license to dump the Common Core, either in its actual form or under whatever assumed name they hid it behind. For accountability hawks, this raises the concern that we'll have a Race to the Bottom, as states make it easier for schools to clear the performance bar (yes, for the six millionth time, this blurs the barely-existing line between the standards and the tests used to account for them). Will the political expediency of being able to say, "All our kids are Proficient (as we currently define it)!" be too much for politicians to resist?

So, has the starting gun been fired on a race to the bottom? Have the bars for reaching academic proficiency fallen as many states have loosened their commitment to Common Core? And, is there any evidence that the states that have raised their proficiency bars since 2009 have seen greater growth in student learning?

In a nutshell, the answers to these three questions are no, no, and, so far, none.

So nobody has loosened up requirements to-- hey, wait a minute. Did they just say that raising proficiency bars hasn't actually increased student learning?

Even though states have raised their standards, they have not found a way to translate these new benchmarks into higher levels of student test performance. We find no correlation at all between a lift in state standards and a rise in student performance, which is the central objective of higher proficiency bars.

Yup. Higher standards have not moved the needle. I see three issues with what they've written here.

1) "Greater growth in learning" is yet one more reformy phrase that suggests that student learning or student achievement is subject to quantitative measurement. Measuring learning is like checking to see how full a glass of water is. The assumption is necessary because it makes learning easy to measure-- just hold a ruler up to it and you know how much of the learning the child has packed into their head.

But does that really work? Has a student who has learned to play bassoon achieved "more" than a student who has learned how to identify different types of rock, or a student who has learned the major causes of the Great European War, or a student who has learned how to cook a soufflé? Reformers have gotten us talking about quantity of learning when most of the differences that matter are qualitative rather than quantitative. From that foundational error, many of the problems of reform follow.

2) Student test performance is still unproven as a measure of anything except a student's ability to take a test, or their socio-economic background. Student test scores are only slightly more useful than collecting student shoe sizes. It's bad data, and it does not measure the things that reformsters say they want to measure.

3) Raising student test scores should not be the "central objective" of any piece of education policy ever. I give them points here for honesty. The line used to be that by making students smarter, test scores would go up. Here Hamlin and Peterson drop even the pretense that test scores are proxies for anything else. This is exactly what any student of Campbell's Law would have predicted-- we have gone from trying to move the thing that is supposed to be measured to simply trying to move the measurement itself (read Daniel Koretz's The Testing Charade for an in-depth examination of this point).

We are now only one third of the way through the article, and yet the next sentence is not "Therefore, there really is no purpose in continuing to fret about how high state standards are, because they have nothing to do with student achievement." But instead, the next sentence is "While higher proficiency standards may still serve to boost academic performance, our evidence suggests that day has not yet arrived." And sure, I understand the reluctance to abandon a favorite theory, but at some point you have to stop saying, "Well, we've now planted 267 magic beans in the yard and nothing has happened-- yet. But tomorrow could be the day; keep that beanstalk ladder ready."

Hamlin and Peterson next recap the post-2002 history of state standards and the raising thereof (or not). They also refer to Common Core as "content standards," which-- well, I would call at least the ELA portion of the Core anti-content standards, but we can save that discussion for another day.

They also spend some time talking about how states have been closing a gap in "proficiency" measurement between the Big Standardized Test and NAEP. We should apparently be excited that more states now report proficiency rates that line up with their NAEP results (the authors give states letter grades based on the size of that gap), but they don't explain why we should care. And given the results covered earlier, it would seem that we shouldn't care at all.
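
Roughly speaking, the gap being graded compares the share of students a state's own test calls proficient with the share NAEP calls proficient. Here's a back-of-the-envelope sketch of that arithmetic-- the state names, percentages, and letter-grade cutoffs below are all invented for illustration, not EdNext's actual thresholds:

```python
# Hypothetical sketch of the state-test-vs-NAEP "proficiency gap."
# All names, numbers, and grade cutoffs are made up for illustration.

def proficiency_gap(state_pct: float, naep_pct: float) -> float:
    """Percentage-point difference between the share of students a state
    calls proficient on its own test and the share NAEP calls proficient.
    A large positive gap suggests the state's bar sits well below NAEP's."""
    return state_pct - naep_pct

def letter_grade(gap: float) -> str:
    """Turn a gap into a letter grade, using invented cutoffs."""
    if gap <= 5:
        return "A"
    if gap <= 15:
        return "B"
    if gap <= 25:
        return "C"
    if gap <= 35:
        return "D"
    return "F"

# (state test % proficient, NAEP % proficient) -- fictional numbers
states = {
    "State X": (72.0, 38.0),
    "State Y": (44.0, 40.0),
}

for name, (state_pct, naep_pct) in states.items():
    gap = proficiency_gap(state_pct, naep_pct)
    print(f"{name}: gap = {gap:+.0f} points, grade = {letter_grade(gap)}")
```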

That's underlined by a graph that turns up further down the page.

So despite all the fun number crunching, they come up with this conclusion:

Even so, the primary driving force behind raising the bar for academic proficiency is to increase academic achievement, and it appears that education leaders have not figured out how to translate high expectations into greater student learning.

Sigh. This is like one more iteration of the "It's the implementation that's screwing everything up" talking point. The high standards movement has always suffered from one other seriously flawed premise-- the notion that teachers and students could do better, but are just holding out on policy leaders, and they need to be prodded so that educational greatness can be achieved. This is both insulting and untrue. It is long past time for reformsters to look-- really look-- at their own data and finally conclude that their magic beans are never going to yield giant beanstalks.


2 comments:

  1. From the article: "Even so, the primary driving force behind raising the bar for academic proficiency is to increase academic achievement, and it appears that education leaders have not figured out how to translate high expectations into greater student learning."

    All very true. Yet consulting groups and education "gurus" are all the rage. And they are also the people who are soaking up education dollars.

    Look no further than Marzano's gold mine in teacher evaluation, which is all based on numerical scoring and data in up to 60 categories.

    For grading purposes, look no further than the standards-based grading movement headed by guys like Rick Wormeli, who work hard to sell an intoxicating theory and will happily charge those who attend their seminars 30 pieces of silver. (Considering that we are continually discovering that standards make no difference, it's amazing these River City fellas can still make money.)

    Look even at things like Reader Apprenticeship, which is designed to be test prep in disguise. Lots of money for their workshops as well.

    Man, these reformers have been very adept at profiting from a system they created that has failed to move the needle. And they are still selling the snake oil to the victims.

  2. If those third graders just trained harder we know they could all bench press 100 pounds! Why can't those phys ed teachers just keep their expectations high and their training programs rigorous?
