Tuesday, December 2, 2014

Common Core Testing Ignores Common Core

Some commentators applauded me for giving it to the Common Core writing standards the other day. But in all fairness to the Core, the standardized testing that is being used to beat students, teachers and schools into submission often completely fails to test the Common Core at all. One of the gigantic Jabba-the-Hutt-sized fantasies pushed by reformsters is the one where they say, "See these standards over here? This Standardized Test will totally tell us whether we're meeting those standards or not."

There are two failure points between the anchor standards and the tests themselves.

Anchor standards? Those are the broader, more global final destination of the standards, the Stuff of which College and Career Ready Dreams are made. The first failure point comes in the translation from those anchor standards to the grade-specific standards within the Core, and let's just say that often something is terribly, terribly lost in that translation. But that's a whole other post for a whole other day.

The second failure point is the one at which the grade-specific standard is somehow "measured" by a bubble test-- excuse me! we're totally past bubble testing-- a point-and-click question. Reformsters blithely assume/insist/pretend that nothing is lost, and that the standardized tests accurately measure whether students are in line with the anchor standards or not.

But let's perform a little thought experiment. Let's look at the ten anchor standards for writing, and let's imagine how we would assess those standards, and see if we imagine anything that looks at all like the mass-produced standardized tests currently serving as the pointy stick in the eye of education.

Write arguments to support claims in an analysis of substantive topics or texts using valid reasoning and relevant and sufficient evidence. 

First we'll need a substantive topic or text, so, a text of some length and complexity. Any analysis and claim-making will require either some prior knowledge about the topic, or the opportunity to acquire that knowledge. So from the beginning of the topic/text intro to the end of handing in the essay, I'd expect to spend at least a week. "Valid," "relevant" and "sufficient" are all subjective judgments and therefore would have to be made by somebody very familiar with the topic/text. After all-- how do you know whether an observation about existential angst in Moby Dick is valid unless you're familiar with both existential angst and Moby Dick?

Write informative/explanatory texts to examine and convey complex ideas and information clearly and accurately through the effective selection, organization, and analysis of content.

Again, how can you evaluate this skill without having the student do this very thing? The "accurately" as well as the "effective" again require expertise on the reader's part in order to assess.


Write narratives to develop real or imagined experiences or events using effective technique, well-chosen details and well-structured event sequences.

Big-wig lingo for "tell a good story." "Effective," "well-chosen," and "well-structured" are all subjective calls. Would you rather read Hemingway, Dickens, Studs Terkel, or Carl Sagan?

Produce clear and coherent writing in which the development, organization, and style are appropriate to task, purpose, and audience.

A difficult artificial task, as we are either writing for an imaginary audience, in which case we'd better hope we imagine it the same way the test-makers do, or we are writing for our real audience: either a minimum-wage test-scorer in an assessment sweatshop, or, God help us, a computer. What if the development or organization that's most appropriate is many, many pages?


Develop and strengthen writing as needed by planning, revising, editing, rewriting, or trying a new approach.

A great nod to the process writing approach, which I actually believe in. To properly assess this will again take at least a week. Editing and revising thirty seconds after writing is really just a more involved first-draft technique.


Use technology, including the Internet, to produce and publish writing and to interact and collaborate with others.

I'm stumped. I don't even know how I would imaginarily assess this. You could, I suppose, use the popular on-line course technique of requiring the student to start X discussion threads and respond in Y others, because that always leads to scintillating authentic conversation. Again, there's a time frame here that I find daunting for standardized assessment.

Conduct short as well as more sustained research projects based on focused questions, demonstrating understanding of the subject under investigation.

Once again, how could you possibly reduce this to a mass-produced, mass-taken, mass-scored assessment? I suppose you could tell every single student to get out her netbook and research ferrets right now, but I'm afraid the infrastructure demands alone would make this a no-win.

Gather relevant information from multiple print and digital sources, assess the credibility and accuracy of each source, and integrate the information while avoiding plagiarism.

Also, lead a large angel square dance on the head of a pin. This will be an assessment that takes several days and is held in a library? It surely won't be assessed by listing several resource excerpts and requiring students to select the "correct" information from each of the mini-sources. It's an admirable standard, but it is completely unassessable in a standardized test.


Draw evidence from literary or informational texts to support analysis, reflection, and research.

At this point, I believe the full assessment will take roughly three months.


Write routinely over extended time frames (time for research, reflection, and revision) and shorter time frames (a single sitting or a day or two) for a range of tasks, purposes, and audiences.

No, never mind. It will take all year.


It's the same problem, over and over and over again-- the standards have to be assessed by someone whose professional judgment is equal to the task of dealing with highly subjective measures, while the activities involved are time-consuming and very open-ended. If I look at the writing strand of the CCSS, and I look at any of the High Stakes Standardized Tests out there, I can confidently state that those tests measure exactly NONE of these standards. Those tests have nothing at all to do with these standards. These standards might as well say "Student will spin straw into gold and use the gold to knit flipper mittens for the Loch Ness Monster," because the high stakes standardized tests are testing other things entirely.

Yes, I could lead a spirited argument about the standards themselves, but that's another post. Today, I want to underline one simple idea-- when reformsters say that test results tell us how students are doing on these standards, they are big lying liars who lie large lies.

2 comments:

  1. I had wondered at the seemingly arbitrary way they split up the research task, but in re-reading this in the context of your post, I notice that the "Gather information..." standard, taken in isolation, is (technically) testable in a short standardized task, even if actually doing research is not. They were very conscious of which standards they were actually trying to set up to assess and which ones are throwaways (un-testable).

  2. Exactly! It will take all year, which is why the TEACHER should assess students' progress and communicate it to students, parents, department chairs, principals, community members, and the school board. Honestly, we could save so much wasted time and money by dumping these pointless non-tests of the standards.
