The problem is particularly acute when your entire business is based on a failed model.
[Image: Absolutely not looking at latest MAP test]
But I have to give credit to NWEA for one thing-- they seem to grasp the most fundamental problem with these tests-- students have to actually give a rat's rear end about the test:
The best assessments are only effective and reliable if students are engaged and trying their best. But we know that’s not always the case.
That's the first paragraph in "What happens when test takers disengage?" and it may be the last thing that the article gets right.
Erin Ryan is the author; she's a senior content writer at NWEA who was previously a writer for Priorities USA, Upworthy, and Hallmark Cards. After graduating from the University of Wisconsin with a degree in Journalism and Mass Communications, followed by an MS in educational leadership, she did put in one year as a teacher in Duval County Schools.
Ryan points out that sometimes students zip through a test, just marking answers without reading the questions. She says this is called "rapid guessing," though for many years my students called it "playing some ACDC." This leads to "unreliable results," though I think "meaningless bullshit" is perhaps more descriptive. She warns that these results don't give you the full picture of a student's capabilities and "maybe even land them in programs for interventions they do not need," and so you shouldn't use these kinds of tests to make those judgements, and should probably scrap them in favor of testing instruments that actually yield useful results.
Ha. Just kidding. We're going to consider everything except the possibility that the very design of MAP testing-- multiple choice on a computer screen about dull and random reading mini-excerpts-- makes it a test that actively disengages students and is therefore a poor choice for any school trying to collect useful data.
The techno-delivery of the test, Ryan argues, makes it easy to catch the disengaged students and do something about it. Except that in this respect, she's going to offer some terrible advice. She also makes the claim that research into test taking behavior, "working hand in hand with technology," makes it possible to keep students more focused on a test. Also nope.
NWEA has been claiming that it can read students' minds for years, using pause time on questions to gaze into the student's soul. For a while now, MAP administrators have received little notices on the home screen that tattle on "disengaged" students (just in case teachers couldn't notice a student ripping through twenty questions in two minutes, or failed to use their power of "looking" to see students who are bored and disengaged).
NWEA also introduced Slow Down Sloth, a cartoon sloth that would pop up and encourage a student to slow down. It's a nice consolation to those of us who feel sad that young folks will never get to meet Microsoft's Clippy. Now NWEA has auto-pause, which penalizes a racing student by freezing their test. This strikes me as an intriguing way to train students in how to figure out just how quickly the software will let them zip through the test.
But the key is proctor intervention, and here's where we really run into trouble. Ryan drags NWEA researcher Steven Wise into this. Speaking about proctors:
“They think they are not allowed to intervene,” Steve says. “But that’s exactly what we want them to do. If a student is disengaged, you should do something about it.”
Here's the thing. They think they are not allowed to intervene because in many states, such intervention is absolutely against the rules. Pennsylvania teachers have to comply with a whole set of "test administration ethics" that make it absolutely clear that a proctor cannot interact with students beyond reading the instructions script. Yes, if you're just using NWEA testing as a test prep tool, those rules don't technically apply, but why do test prep under different conditions than the "real" test? So this kind of advice...?
Maybe a student is struggling with a test because they’re not feeling well, are anxious, or are having trouble understanding the questions. Whatever the reason, when a proctor and student can talk when disengagement has occurred, instead of after, there’s an opportunity to save a testing event that might otherwise go to waste.
Nope. That sort of thing is absolutely verboten, in part because, in its own way, it can invalidate a test almost as badly as playing ACDC.
Bottom line? NWEA has a product problem; the MAP test is intrinsically disengaging, and is often used in settings (such as my old school) where it has no connection to the actual course and has less-than-zero stakes for students. It's multiple choice, which makes it easy for software to score, but a lousy measure of any complexity or depth of student understanding. Multiple-choice items are also the least engaging type of question, requiring no student response beyond "just pick a letter." The end result is a test that provides very little information. In the years that I gave the test, I never once found a student result that surprised me by telling me something I didn't already know (my personal number crunching also told me that it was a lousy predictor of Big Standardized Test performance).
NWEA's response to all these problems is not to go back to the drawing board or question the foundational assumptions behind their product. Instead, it's to offer these little help articles and webinars in order to get customers to plug the holes in their product. It's like an auto manufacturer saying, "We've screwed up the engineering of the airbags in these cars, and we'd like to give you some instructions about how to sit kind of side-saddle in the front seat so that the airbags kind of work."
Sending teachers instructions on how to tweak a faulty product so that it's marginally less faulty is not the solution here. NWEA needs to do better.