Anthony Cody's recent blog about the effect of robo-grading on instruction includes an eye-opening glimpse of how much worse things can get. A sample from the Smarter Balanced test reveals a writing test in which the students are given the content for their essay and simply asked to rewrite it. "Here's a list of points for each side of this question. Select a couple and put them in paragraphs."
It is, in fact, testing exactly the sort of plagiarism skills that we have been trying to purge for decades.
Not that the teaching of bad writing is a new issue. Evaluating writing is hard, and it's subjective. Virtually every revered writer has been the subject of the argument, "Is this person a genius, or does this person actually suck?" If a writer in the canon can provoke wildly divergent views among actual professional literati (and fake ones like David Coleman), then it can be no surprise that a writer in my fifth period class can provoke similar subjectivity.
Teachers have long tried to reduce the assessment of writing to something more manageable. I myself brought home the Oregon version of the six traits model from a conference years ago, and like many other teachers, I've since modified it to better suit my own biases about writing.
The quest for a simple, clear system of writing assessment is eternal. It's eternal because nobody has found a good, solid, simple, clear, objective way to assess writing that does not require pummeling writing with a stick, hacking off its limbs, and stuffing the bloody corpse into a tiny, cramped box. If Heisenberg says you can't observe a phenomenon without affecting it, Greene says that you can't assess writing without mangling and killing it.
The solution to "How do I master the difficult task of assessing writing?" is rarely "Build a better assessment." More often it's "Make students write something that's easier to assess." Assess them not on their ability to express themselves, to manage prose, to use language to organize and capture concepts-- instead, assess them on their ability to follow a formula.
We have some classic studies of the bad formula essay. Paul Roberts' "How To Say Nothing in 500 Words" should be required reading in all ed programs. Way back in 2007, Inside Higher Ed ran this article about how an essay that included, among other beauties, a reference to President "Franklin Denelor Roosevelt" was an SAT writing test winner. I couldn't find a link to the article, but in 2007 writing instructor Andy Jones took a recommendation letter, replaced every "the" with "chimpanzee," and scored a 6 out of 6 from the Criterion essay-scoring software at ETS. You can read the actual essay here.
At my school, we've learned how to beat the old state writing test. It's not hard:
1) Recycle the prompt. Get the key words of the prompt into your first paragraph. If you aren't sure which words are key, just grab them all.
2) Fill as much paper as possible. Be redundant. Babble. But fill up space.
3) Use some big words. "Plethora" has historically been a favorite.
4) Write neatly. Indent clearly.
Jessica Lussenhop's classic article shows how badly the live scorer system works. But the new information about the CCSS-related prompts shows just how much the tail has begun to wag the dog.
Bad test design has a certain sort of logic. Every English teacher is familiar with the Bad Context Clue question. This is the question where a word is used in one of its least common meanings, such as "Bob's faculties were very strong." Students are instructed to depend only on context, but many are suckered into using the knowledge they already have. Teachers despair of training students to recognize those times when they are supposed to ignore what they already know.
But suppose you wanted to test a student's sense of smell, so you put a fragrant flower on the other side of the room and said, "Find your way across the room with your sense of smell." But then you realize that they might use other senses to find their way. So you start blasting Sousa marches, and you create a realistic hologram of massive flames in the middle of the room. The idea is that ONLY their sense of smell could get them across the room. But the task has been changed-- they not only have to use one sense, but they have to disregard the others. We've completely isolated the item that we want to assess, but we have done it by creating a senseless activity that would never occur in real life.
And that's why we have to teach students how to take tests. Because testing activities are designed to be easily assessed and to focus on unreal only-in-a-test activities.
We cannot teach students to write well and to write to get good scores on standardized tests at the same time.