Anthony Cody's recent blog post about the effect of robo-grading on instruction includes an eye-opening glimpse of how much worse things can get. A sample from the Smarter Balanced test reveals a writing task in which the students are given the content for their essay and simply asked to rewrite it. "Here's a list of points for each side of this question. Select a couple and put them in paragraphs."
It is, in fact, testing exactly the sort of plagiarism we have spent decades trying to purge.
Not that the teaching of bad writing is a new issue. Evaluating writing is hard, and it's subjective. Virtually every revered writer has been the subject of the argument, "Is this person a genius, or does this person actually suck?" If a writer in the canon can provoke wildly divergent views among actual professional literati (and fake ones like David Coleman), then it can be no surprise that a writer in my fifth period class can provoke similar subjectivity.
Teachers have long tried to reduce the assessment of writing to something more manageable. I myself brought home the Oregon version of the six traits model from a conference years ago, and like many other teachers, I've since modified it to better suit my own personal biases about writing.
The quest for a simple, clear system of writing assessment is eternal. It's eternal because nobody has found a good, solid, simple, clear, objective way to assess writing that does not require pummeling writing with a stick, hacking off its limbs, and stuffing the bloody corpse into a tiny, cramped box. If Heisenberg says you can't observe a phenomenon without affecting it, Greene says that you can't assess writing without mangling and killing it.
The solution to "How do I master the difficult task of assessing writing?" is rarely "Build a better assessment." More often it is "Make students write something that's easier to assess." Assess them not on their ability to express themselves, to manage prose, to use language to organize and capture concepts-- instead, assess them on their ability to follow a formula.
We have some classic studies of the bad formula essay. Paul Roberts' "How To Say Nothing in 500 Words" should be required reading in all ed programs. Way back in 2007, Inside Higher Ed ran this article about how an essay that included, among other beauties, a reference to President Franklin Denelor Roosevelt was an SAT writing test winner. And I couldn't find a link to the article, but in 2007 writing instructor Andy Jones took a recommendation letter, replaced every "the" with "chimpanzee," and scored a 6 out of 6 from the Criterion essay-scoring software at ETS. You can read the actual essay here.
At my school, we've learned how to beat the old state writing test. It's not hard:
1) Recycle the prompt. Get the key words of the prompt into your first paragraph. If you aren't sure which words are key, just grab them all.
2) Fill as much paper as possible. Be redundant. Babble. But fill up space.
3) Use some big words. "Plethora" has historically been a favorite.
4) Write neatly. Indent clearly.
Jessica Lussenhop's classic article shows how badly the live-scorer system works. But the new information about the CCSS-related prompts shows just how much the tail has begun to wag the dog.
Bad test design has a certain sort of logic. Every English teacher is familiar with the Bad Context Clue question. This is the question where a word is used in one of its least common meanings, such as "Bob's faculties were very strong." Students are instructed to depend only on context, but many are suckered into using the knowledge they already have. Teachers despair of training students to recognize those times when they are supposed to ignore what they already know.
But suppose you wanted to test a student's sense of smell, so you put a fragrant flower on the other side of the room and said, "Find your way across the room with your sense of smell." But then you realize that they might use other senses to find their way. So you start blasting Sousa marches, and you create a realistic hologram of massive flames in the middle of the room. The idea is that ONLY their sense of smell could get them across the room. But the task has been changed-- they not only have to use one sense, but they have to disregard the others. We've completely isolated the item that we want to assess, but we have done it by creating a senseless activity that would never occur in real life.
And that's why we have to teach students how to take tests. Because testing activities are designed to be easily assessed and to focus on unreal only-in-a-test activities.
We cannot teach students to write well and to write to get good scores on standardized tests at the same time.
ReplyDelete"Evaluating writing is hard, and it's subjective."
ReplyDeleteYour first clause: true.
Your second: only if you are unwilling or unable to define your expectations.
"We cannot teach student to write well and to write to get good scores on
standardized tests at the same time." -- Utter hogwash.
The expectations for standardized tests are so low that to not meet them
is to not teach writing at all. Any student worth his or her salt can ace
a standardized test and then go on to blow you away with his or her
impromptu answer to one of the University of Chicago application prompts.
Stop portraying writing as some sort of mythic alchemy that can never
quite be explained. This only promulgates the sophomoric response that so
many students retreat to -- "I'm just not good at writing." There are
a handful of very basic skills that most students can learn: clear topic
sentences, relevant evidence, attention to a small handful of elementary
conventions. With these, they can produce wonders.
Also stop limiting your concept of "writers" to those who produce fiction.
Yes, there are endless debates about naturalism vs magical realism, but
no one -- absolutely no one -- is arguing that any of the best selling
nonfiction writers have the skills of a fifth grader.
Not a word of this blog portrays writing as mythic alchemy. In my classroom, writing is craft. It's not art, and it's not science, though it borrows some aspects of both. And it is certainly not limited to (or even mostly about) fiction.
ReplyDelete"There are
a handful of very basic skills that most students can learn: clear topic
sentences, relevant evidence, attention to a small handful of elementary
conventions. With these, they can produce wonders."
Your first part is correct. Your second is not. Following these conventions, they can produce writing that is adequately mediocre.
The expectations for standardized test writing are not simply low, but wrong-headed. They are not an early stop on some long road that eventually leads to great writing; they are a path that goes in the wrong direction.
Standardized writing teaches students to approach an essay with the wrong question. Instead of responding to a prompt or a writing problem with the question "What do I have to say about this?" standardized writing trains students to start with the question "What can I use to fill in these five paragraph-shaped blanks?"
"What do I have to say about this" unlocks all the questions that matter: What would be the most effective structure to convey my ideas? What would be the most effective vocabulary? What are the sub-points of my main point, and what are the most effective ways to present the most effective evidence?
If you read my entire essay, you'll note that my school's English department has been very successful in teaching to the writing test. None of those tricks that we teach the students would qualify as signifiers of quality writing, and we tell them that. But at the elementary level, fifth-grade teachers around the state are having trouble with the test because they are trying to teach good writing instead of test writing, or both at once (or, admittedly in some cases, neither).