
Tuesday, December 16, 2014

Pearson's Renaissance (3): Transforming Assessment

We are working our way through Pearson's Big Paper about Assessment. In Part 1, we considered their ideas about the coming revolution in education. In Part 2, we considered what's wrong with assessment these days and hinted at what it should look like. Now in Part 3, we'll look at what Pearson thinks assessment should look like.

3. Transforming Assessment

The writers think that new assessment is going to change everything, from raising the achievement ceiling to making every student a smarter thinker and a better human being. In particular, they have huge faith in the transformative power of online testing. Here's why.

Assessing the full range of abilities

Traditional tests are too hard for some students and too easy for others, but computer adaptive testing (CAT) will be the baby bear porridge of testing-- just right for everybody. Pearson is confident that every state will be adopting this, and notes that Smarter Balanced is "making use of" CAT and a bank of 21K questions.

One problem for CAT is the requirement that test items be released to the public after the test is given. This would compromise its integrity in some undefined way. It would also make creating a new test every year subject to "unsupportable development costs."

Authors Hill and Barber provide a nice chart of how CAT is supposed to work-- essentially students take testlet A and the real-time results direct them to either testlet B or C, and so on. At the end of the line we might arrive at open-ended response questions "that can be scored by trained professionals." Of course, they could also be scored by minimum-wage, barely-trained workers. They note that "considerable research" has been directed at solving the problem of getting an accurate estimate of student ability from testlet A, and they are confident that once current limitations are overcome, "there is every likelihood" that a fully-adaptive testy thingy will happen. Back in the previous chapter they bemoaned that "teaching remains an imprecise and somewhat idiosyncratic process that is too dependent on the personal intuition and competence of individual teachers." Is it okay if we base teaching on the personal intuition and competence of corporate chieftains?
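For the curious, the branching that Hill and Barber chart boils down to very little machinery. Here's a toy sketch-- the testlet names, the 50% cut score, and the routing are my inventions for illustration, not anything from Pearson's actual system:

```python
# Toy sketch of computer adaptive testing (CAT) routing.
# Testlet names, the cut score, and the branching rule are invented
# for illustration -- not Pearson's actual parameters.

def route(score, cut=0.5):
    """Pick the next testlet from real-time results on testlet A."""
    return "testlet_B" if score >= cut else "testlet_C"

def run_cat(answer_key, responses):
    """Score testlet A, then branch to a harder or easier testlet."""
    correct = sum(1 for k, r in zip(answer_key, responses) if k == r)
    score = correct / len(answer_key)
    return score, route(score)

score, next_testlet = run_cat("ABCD", "ABCA")  # 3 of 4 correct
print(score, next_testlet)  # 0.75 routes to the "harder" testlet_B
```

Everything hard about CAT-- getting an accurate ability estimate from a handful of items-- lives outside this sketch, which is rather the point: the routing is trivial; the psychometrics is where the "considerable research" goes.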

Providing meaningful information on learning outcomes

Online testing will be fraught with meaning. Results will come back instantaneously, and the ability to give different versions of the test to different students will make data bloom in lovely ways. Hong Kong loves it. There's a lot of flowery language in this section, but it boils down to saying that online tests will give more information faster better zowie!

Assessing the full range of valued outcomes

Standardized tests are limited in what they can actually measure, because multiple choice questions don't go very deep. Pearson is certain that performance tasks can be adapted to a rubric approach that can allow assessment of playing an instrument, reading aloud with fluency, repairing an engine, and working well with a group.

By substituting the judgment of test and rubric writers for the judgment of teachers in the classroom, we can better measure all sorts of stuff.

The writers also devote some space to claiming once again that there are automated essay-scoring systems that are not actually crap. They admit the software has limitations, but so do humans, so neener neener. And then there's this:

A more fundamental solution lies in using digital technologies to support the adoption of a new generation of assessment tasks specifically designed to access deep learning and other outcomes not amenable to assessment via traditional tests and examinations.

In other words, instead of asking how we can best assess particular skills or knowledge, let's ask what sort of cool assessments we can make with a computer. Let's base assessment not on what we want to assess, but what we can assess most easily.

Pearson also wants you to know that they have some cool tests for assessing character traits, so that we can start recording data on what kind of person your child is. As always, I'll ask: exactly who needs that data who does not already have it? In other words, are parents really sitting at home wondering about the character of their children and dreaming of a test that would tell them, or is this just an excuse to put another domain of data in a human drone child's cradle-to-career file?

Integrity of the test

This is the pivot point. Cheating is inevitable as long as we have a small number of tests on which high stakes are riding. High stakes equal high motivation to cheat. But--

intriguingly, the ultimate solution may lie in the potential of a new generation of assessments designed primarily to monitor and inform ongoing learning and teaching

ALL ASSESSING-- ALL THE TIME

How do we tie curriculum and teaching together? How do we raise the achievement ceiling and finally make students smarter? How do we make learning really "professional" and not just something filled with human frailty? How do we collect and crunch more data than God? How do we create an ungameable system?

All assessing, all the time.

This is assessment with a new purpose-- not to give a grade, but to determine whether Pat and Chris are ready to move on to the next stage of the curriculum. I once posited that Common Core standards were not so much standards as they are data tags for marking, storing, cataloging and crunching everything students do. Here's what Pearson says:

Through the use of rubrics, which will define performance in terms of a hierarchically ordered set of levels representing increasing quality of responses to specific tasks, and a common set of curriculum identifiers, it will be possible to not only provide immediate feedback to guide learning and teaching but also to build a digital record of achievement that can be interrogated for patterns and used to generate individualised and pictorial achievement maps or profiles.

My emphasis. The online software will correct the work, sort the work, store the work, spit out the resources, and evaluate the student's progress. The data collection will be mountainous, epic, massive in scope, providing a complete picture of who the student is and what the student knows.
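If you want to see how little it takes to turn standards into data tags, here's a toy sketch-- the record shape and rubric levels are mine, and the standard codes are just example Common Core identifiers, not anything lifted from an actual Pearson system:

```python
# Toy sketch: tag each piece of student work with a curriculum
# identifier and a rubric level, then interrogate the record.
# Record shape and levels are invented for illustration.
from collections import defaultdict

record = defaultdict(list)  # student -> [(curriculum_id, rubric_level)]

def log_response(student, curriculum_id, rubric_level):
    """File one scored task under its curriculum identifier tag."""
    record[student].append((curriculum_id, rubric_level))

def achievement_profile(student):
    """Best rubric level reached per curriculum identifier."""
    profile = {}
    for cid, level in record[student]:
        profile[cid] = max(profile.get(cid, 0), level)
    return profile

log_response("Pat", "ELA.RL.9-10.1", 2)
log_response("Pat", "ELA.RL.9-10.1", 3)
log_response("Pat", "ELA.W.9-10.2", 1)
print(achievement_profile("Pat"))
# {'ELA.RL.9-10.1': 3, 'ELA.W.9-10.2': 1}
```

Note that nothing in this sketch cares what the task was or who scored it-- once everything a student does carries a standard's tag, the "achievement map" is just a database query. That's the cataloging-and-crunching I was positing.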

For teachers, the transformation will be huge. "Learning systems of the future will free up teacher time currently spent on preparation, marking and record-keeping and allow a greater focus on the professional roles of diagnosis, personalized instruction, scaffolding deep learning, motivation, guidance and care." Scan back to the top of that sentence-- teachers will no longer prepare lessons or material.

Meanwhile, the system will be providing personalized instruction. As always with this kind of system talk, what we appear to mean is personalized pacing. All students are meant to climb up the exact same ladder-- it's up to the software to decide which rung they're ready to step on.

Pearson's Brave New World

This is education in Pearson's Brave New World. They list a few challenges in the transition, but I have a different set of problems that I anticipate.

Exactly where is the instructional content coming from in this system? The system is cheerfully spitting out the resources and assessments needed according to the educational plan. Who, exactly, is writing any of those things? Pearson wants to teacher-proof education by removing the influence of individual teachers from the classroom, but which human beings are producing the materials that go into the software? And why should I, as a professional educator, trust the nameless faceless functionaries on the other end of the internet hookup to know better than I what the program design should be?

Like all systematic approaches dependent on technology, this system depends on a huge assumption-- that the students will take it seriously and attempt anything more than superficial compliance with the software.

Look-- my students do (mostly) the things I ask (kind of) because they respect me. They don't automatically respect me for being a teacher-- I spend most of September earning their respect and trust, and because I've earned it, they now take on the tasks I set out for them. Exactly how does the Pearson system propose to earn that trust from students? What makes them think that their system will fare any better than the old teaching machines or Rocketship academies or programs like Study Island (which do an excellent job of training students to click buttons quickly)?

Until Pearson has a good answer for either of those issues (neither of which they actually address), this is all baloney.

Next, and finally, we'll look at what Pearson wants to see policy makers, schools, system leaders, and other Important People do in order to keep the revolution on track.

