Back in 2010, the idea was that there would be at least ten of them-- one for each major course-- and students would take them at the end of the year as a final qualifying test for course credit (and therefore graduation). Donna Cooper (now of Public Citizens for Children and Youth) was part of the Rendell administration pushing for the tests, and like all good reformsters of the era, all she wanted was perfect standardization so that every student in every state was learning exactly the same thing. “It would seem to me that a parent in Norristown and a parent in Johnstown, their kids should know the same things to graduate.”
And like good reformy bureaucrats, neither the Rendell administration that cooked this up nor the Corbett administration that cemented it into law envisioned the state providing any resources at all to help students over this new hurdle. The Keystone exam system was the biggest unfunded mandate the state had ever seen.
The fiddling began immediately. Maybe the Keystones would count for a third of the full-year grade. And somehow we'd have to roll the tests out over several years, only they turned out to be hard to just whip up quickly. And they were expensive, too.
Soon enough, the state decided that math, biology and English literature (aka "reading") would be the Keystone exams offered. And students and schools would get a couple of years to get up to speed before the test became a graduation requirement for the Class of 2017.
But there was a problem. It quickly became clear that if the Keystones were required for graduation, a whole bunch of students weren't going to graduate. In more than 100 districts (and charters), well over half the students would not get a diploma.
Nobody in Harrisburg wanted to attach his name to that, so Keystones-as-grad-requirements were pushed back to 2019, and to ensure that the results looked better by then, the state government did-- well, they did nothing. Hopes and dreams, thoughts and prayers, maybe. But still no resources to help in those afflicted districts. And sure enough-- as 2019 approaches (those students are taking the tests this year in many districts), things still look bad.
So buried within Harrisburg's most recent attempt to enact a budget, there's another postponement for the Keystones.
Fans of local control are pleased-- let local districts set their own graduation requirements. But members of the Cult of Testing and Standardization are unhappy. Like Cooper, they argue that the previous Big Standardized Test, the PSSA, proved that local standards were inadequate.
“58,000 students were graduated and given diplomas who could not pass the state’s 11th grade PSSA,” said Cooper. “That’s a real indication of the failure of local control to understand the market signals of what is needed for a kid to succeed in today’s economy.”
Cooper and others like her might have a point-- if there were a speck of evidence that the PSSA was a valid measure of what is needed for a kid to succeed in today's economy, or college, or anything. There was no such evidence for the PSSA, nor is there any such evidence for the Keystone. And the PSSA, which is still given in lower grades, is a norm-referenced test with cut scores reset every year -- so somebody has to fail.
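To put some arithmetic on that, here's a toy sketch (in Python, with invented numbers-- this is not the PSSA's actual procedure, just the logic of a percentile-pegged cut score): if the cut is set at a percentile of each year's distribution, a fixed share of students fails by construction, no matter how well the whole cohort performs.

```python
# Toy illustration of a norm-referenced cut score -- NOT the PSSA's
# actual method, just the arithmetic of "somebody has to fail."
def percentile_cut(scores: list[float], fail_fraction: float) -> float:
    """Return the score below which `fail_fraction` of test-takers fall."""
    ranked = sorted(scores)
    return ranked[int(fail_fraction * len(ranked))]

# Even an unusually strong cohort...
scores = [78, 81, 85, 88, 90, 91, 93, 95, 97, 99]
cut = percentile_cut(scores, 0.30)
print(cut, [s for s in scores if s < cut])
# 88 [78, 81, 85] -- ...still "fails" 30% of students by construction.
```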
The Keystone exams are theoretically standards-referenced, which should mean that everyone can pass. It should also mean that we could get test results literally five minutes after a student finishes the test-- yet we're still waiting months. Why is that? Maybe because of something called scaling, which seems like a fancy way to explain different weights for different questions on different forms of the test. Or maybe it has to do with rangefinding, which seems an awful lot like norm-referencing-- collect answers and see what their distribution looks like.
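For what it's worth, here's a minimal sketch of what "scaling" seems to amount to (Python again; the slopes, intercepts, and form names are hypothetical-- the state's actual constants and procedure are buried in those technical reports): each form of the test gets its own conversion onto a common reporting scale, so the same raw score can earn different scaled scores depending on which form a student happened to draw.

```python
# Toy linear scaling across test forms -- hypothetical constants,
# not the Keystone's actual conversion tables.
FORM_CONSTANTS = {
    "form_A": (12.5, 1000.0),  # (slope, intercept)
    "form_B": (13.0, 1000.0),  # form B ran harder, so each raw point "buys" more
}

def scaled_score(form: str, raw: int) -> float:
    """Map a raw score on a given form onto the common reporting scale."""
    slope, intercept = FORM_CONSTANTS[form]
    return slope * raw + intercept

print(scaled_score("form_A", 40))  # 1500.0
print(scaled_score("form_B", 40))  # 1520.0 -- same raw score, different result
```

And note the catch: those conversion constants can't be pinned down until enough answer sheets are in to compare forms-- which may be one reason results take months instead of five minutes.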
Trying to uncover the problems of the Keystone exams can be a daunting task. The technical reports from every year are available (here's 2015), with hundreds of pages each, including illuminating passages like this one:

[excerpt from the technical report]
Reading these technical reports might suggest to a layperson, even a highly educated one, that testocrats have disappeared so far up their own butts that they are now viewing the world through the tiny little lenses of their own belly buttons.
But the answer to the Keystone problem may be much simpler than psychometric gobbledygook or legislators' refusal to fund what they demand.
It could be that the Keystones are just bad tests.
Mind you, I'm not supposed to know that. The PA DOE Code of Ethics says that I should never set eyes on the test itself, and if I accidentally see anything, I should make myself forget it. We are supposed to remain ignorant of the test except in the most general terms, and the students have to swear not to talk about them either. We are all not only supposed to hit the target, but we're supposed to do it blindfolded. That way the test manufacturers won't have to spend money rebuilding the test every year-- the integrity of the test will be maintained.
But, scofflaw that I am, I look anyway. And I'm telling you that if someone offered me the test to use for free in my class, so that I could spend more time with my children and less time writing tests, I would turn it down. It's junk. Questions with no objectively supportable single correct answer. Questions that are simple vocabulary tests. Questions that require the students to use psychic powers on the authors of the passages. These tests do not measure anything except the students' ability to navigate a bunch of trick questions and guess what the test manufacturers are thinking.
You know who else knows that? My students. Like most districts, we have made the tests mandatory for graduation because we want the students to try-- our school rating depends largely on those test results. But the students know that they'll do a performance task (the Binder of Doom) if they fail, and many of them are not only tired of taking stupid tests year after year, but have long since concluded that they might as well roll dice, because success or failure feels pretty much random to them.
Pennsylvania is having the same damn argument as much of the rest of the education world, with accountability mavens arguing that we must test to have accountability, while skipping over the entire question of whether the test being used actually measures anything worth measuring. It's like listening to someone insist, "We have to know whether or not you have cancer, so you must wave your hand over this horny toad under a full moon." It's the same old reformy disconnect-- establishing that something is a real problem is not the same as establishing that you are proposing a real solution (and for those of us who don't agree that local control and variation are a real problem, you are even further off base).
Without a decent, fair, valid test, and without resources to back up this unfunded mandate, and without a reason for students to care, the Keystones will always be a disaster. We can only hope that the state legislature stops kicking this can down the road and finally just throws it in the dumpster of history, where it belongs.
So I'll agree with everything you said here, but add that there's something that may have changed my mind about the whole thing. Here in SC, until three years ago students had to pass the Big Test, called the HSAP, to graduate. It was a general test of math and reading skills that they first took in the 10th grade and would take at least once per semester through their senior year until they passed it. They could even come back after their senior year to take it if they had passed all of their coursework.
However, I got a student this semester who is in my senior-level, required-to-graduate Government and Economics class and who was, for at least one year (maybe two... details seem to be hard to come by) of his high school career, in self-contained classes due to academic performance. His dad decided to mainstream him for his senior year. He was recommended to be tested for autism in middle school but never was, and only now have the tests been finished. While we wait on full diagnostics from the school psychologist, I can tell you the results so far show his IQ to be 67. I don't need to wait to tell you (as you said in yesterday's article) that the data I've gathered about this kid have led me to believe that-- and I hope anyone reading this will believe me when I say this as an 18-year veteran with a heavy heart-- it is educational malpractice to have this kid earn a diploma. It is a true devaluation of the piece of paper it's written on.
And yes, by god he WILL get it. Dad decrees it so. My principal decrees it so, and his case manager, who is literally the closest friend I've ever made in a school setting, decrees it so. His IEP requires that I give him a copy of the tests and quizzes (with answers) to take in his resource class. Even if he failed, his grade would be changed.
So why do I bring this up? The HSAP would be the last line of defense for reaching a minimum competency, and there's no way he'd pass it, sad to say. I never, NEVER thought I'd look fondly on the day when I had to give this test... but now is that day, and it has thrown me for a loop.
Why would you begrudge this young man a high school diploma? Would he be better off without one? Would all of the other members of the Class of 2018 be better off if the HSAP was the one barrier he could not surmount? Would his fellow classmates really think less of their diploma if he received one as well? Would his transcript look that much different if a failing HSAP grade was included? Are you seriously concerned that a cognitively impaired young man’s best efforts will degrade the value of a high school diploma? Would you have written this comment if he were your son?
You've asked a lot of questions here, so I'll ask a couple of my own: is there (or should there be) a demonstrable standard or standards for the acquisition of a high school diploma, whereby if one does not complete X, one does not receive a diploma?
In short: should there be the proverbial line in the sand?
Secondly, if I did have concerns that awarding him this diploma would degrade the value of a high school diploma, would that necessarily be a bad thing?
"Mind you, I'm not supposed to know that. The PA DOE Code of Ethics says that I should never set eyes on the test itself, and if I accidentally see anything, I should make myself forget it. We are supposed to remain ignorant of the test except in the most general terms, and the students have to swear not to talk about them either. We are all not only supposed to hit the target, but we're supposed to do it blindfolded. That way the test manufacturers won't have to spend money rebuilding the test every year the integrity of the test will be maintained."
ReplyDelete"Reading these technical reports might suggest to a layperson, even a highly educated one, that testocrats have disappeared so far up their own butts that they are now viewing the world through the tiny little lenses of their own belly buttons."
For the sake of argument, let's say that exit exams are a good idea as requirements for either awarding a diploma or for determining the type of diploma awarded. Such exams should be carefully crafted and the designers of the test need to make available specific, relevant preparation materials. Moreover, after the test has been in existence for, say, 3 years the old test should be released and done so every three years afterward.That way teachers and students would be familiar with the format & philosophy of the test designers. All "free-response" questions (i.e. essays, stories, math 'word problems', science problems, etc.) should be released every year. I can speak for SC in this: Unfortunately these tests were released in a hurry and much of each exam was designed by testing amateurs. The sense I get now is that nearly every state changes the exam supplier & the requirements demanded every couple of years, so there is seldom a valid exam given. Valid exams are not developed over short periods of time. And it is a job for real pros.
Leads to the second quote:
For the writers of valid exams, this type of analysis IS necessary. Every question needs to be analyzed over which parts of the curriculum is covered and (tricky here) how well the question actually addresses the parts. Moreover, each question has to be analyzed on how the students' responses match up with the top scorers, the bottom scorers & the mean & median scorers. (If 90% of the bottom scorers get a response correct, but only 10% of the top scorers got it right, is the question valid?) Yes, gazing up their navels through tiny lenses.
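The comparison described there is a standard item-analysis statistic. Here's a minimal sketch (Python; the 27% grouping is a common textbook convention, and the data layout is invented for illustration) of a simple discrimination index: the proportion of top scorers minus the proportion of bottom scorers who got an item right. The example above-- 10% of top scorers versus 90% of bottom scorers-- would score -0.8, a flashing red light for that question.

```python
# Toy item-discrimination index -- illustrative only, not any state's
# actual psychometric pipeline.
def discrimination_index(responses: list[list[int]], item: int) -> float:
    """responses[s][i] is 1 if student s answered item i correctly.

    Returns p(correct | top 27%) - p(correct | bottom 27%) for `item`.
    A negative value means weaker students beat stronger ones on the item.
    """
    totals = [sum(row) for row in responses]
    order = sorted(range(len(responses)), key=lambda s: totals[s])
    k = max(1, round(0.27 * len(responses)))  # conventional 27% tail groups
    bottom, top = order[:k], order[-k:]
    p_top = sum(responses[s][item] for s in top) / k
    p_bottom = sum(responses[s][item] for s in bottom) / k
    return p_top - p_bottom
```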
Short story made long: IF such exams are to be required, they have to be a hell of a lot better than they currently are.