
Saturday, June 9, 2018

How Can Proficiency Vary Between States?

EdSurge this week asked the magical question with Jenny Abamu's piece, "How Can a Student Be 'Proficient' in One State But Not Another? Here Are the Graphs."

Spoiler Alert: Abamu doesn't give the real answer.

When No Child Left Behind passed back in 2002, Congress enthusiastically proclaimed that 100 percent of American students would be proficient in reading and math by 2014. What they didn’t expect was that some states would significantly lower the bar for proficiency to avoid being marked as failing or losing special funding from the federal government.

Not really. First of all, since "proficiency" was going to be measured with normed tests, Congress effectively declared that 100% of students would be above average. Those who understood that this goal was mathematically impossible figured that when Congress revisited ESEA in 2007, the rewrite would modify that unattainable goal. In other words, pass something that sounds impressive and let someone else fix it later, before the chickens come home to roost. But Congress couldn't get its act together in 2007, or 2008, or any other year; by the time the teens rolled around, the fact that every single state was violating the law just gave the Obama administration leverage for pushing its own programs. In the meantime, anyone with half a brain knew that states would game the system, because by 2014 there would be only two kinds of school districts-- those that were failing and those that were cheating.

Abamu shares some other history, including the not-often-noted fact that Bill Clinton tried to establish a National Education Standards and Improvement Council, which would have given the federal government oversight of all state standards.

She hints that the different levels of "proficient" exist because states all wimped out and lowered the bar. And part of her point seems to be that, thanks to PARCC and SBA and the Common Core and NAEP, we're closer to having state-to-state aligned standards than ever before. And she runs the old PARCC/SBA-to-NAEP comparison routine, showing how different state standards map onto the NAEP standards.

But she doesn't really answer the question.

How can students be "proficient" in one state but not another? Because "proficient" doesn't mean anything, and whatever meaning it does have is arbitrarily assigned by a wide variety of people.

The NAEP sets "proficient" as the grade equivalent of an A, but a study of NAEP results found that about 50% of students judged "Basic" attended and graduated from college. And at least nine studies have shown no connection between higher test scores and better outcomes later in life.

In real life, we might judge someone's proficiency in a particular area (say, jazz trombone playing) by first deciding what skills and knowledge we would expect someone who was "proficient" to have (knows certain songs, can play in certain keys, knows who Jack Teagarden is and can imitate him). In fact, in the real world, we never talk about being proficient without talking about being proficient AT something. But here in Abamu's article we have yet another testocrat (NCES Associate Commissioner Peggy Carr) talking about a "proficient student." What does that even mean? We never talk about proficient humans, because proficiency is always applied in reference to a certain skill set.

But in the testing world, everything is backward.

First, instead of saying "This is what proficiency will look like" before we design our tasks or set our cut-off scores, we give the students the Big Standardized Test, score the Big Standardized Test, and only then decide where the cut score will be set.

Second, we don't talk much about what the student is proficient AT, because we're really only checking one thing-- is the student proficient at taking a single BS Test focused on math and reading? It would give away the whole game to say, "These students have been found to be proficient standardized test takers," because when people think of the very best students, "great at taking standardized tests" is not one of the major criteria.

We've never, ever had a national conversation in the math or reading teaching community on the subject of "a really good reading student would be able to do the following things..." and then designed a test that could actually measure those things. The test manufacturers hijacked that entire conversation.

At the end of the piece, Carr asks states to consider, "Are your definitions of what is proficient reasonable?" The answer is no, they aren't, because the state definition of "proficient" is "scored higher than the cut score we set on the BS Test," which is not a definition of proficiency at all. A definition of proficiency would be "Can solve complex math problems using the quadratic formula" or "Can read a major novel and produce a theme paper about it that is thoughtful and insightful" or "Can play Honeysuckle Rose, including the bridge, in eight different keys." As long as testocrats are setting the definition of proficient, it will never matter which state the student is in.



