This week the Pennsylvania House of Representatives voted to postpone the use of the Keystone Exam (Pennsylvania's version of the Big Standardized Test required by the feds) as a graduation requirement. The plan had been to make the Class of 2017 pass the reading, math and biology exams in order to get a diploma. The House bill pushes that back to 2019.
The House measure joins a similar Senate bill passed last summer. The only significant difference between the bills is that the House bill adds a requirement to search for some tool more useful than the Keystones. The bills should be easy to fit together, and the governor is said to support the two-year pause, so the postponement is likely to become law. And that is both good news and bad news.
Good News
The Keystone is a lousy test. It is so lousy that, as I was reminded in my recent Keystone Test Giver Training Slideshow, all Pennsylvania teachers are forbidden to see it, to look at it, to lay eyes on it, and, if we do somehow end up seeing any of the items, we are sworn to secrecy about it. But because I am a wild and crazy rebel, I have looked at the Keystone exam as well as the practice items released by the state, and in my professional opinion, it's a lousy test.
So it's a blessing that two more rounds of students will not have to pass the tests in order to graduate-- particularly as the feds double down on their insistence that students with special needs be required to take the same test as everyone else, with no adaptations or modifications. The year the Keystones are made a graduation requirement is the year that many Pennsylvania students will fail to graduate, even though they have met all other requirements set by their school board and local district.
That will not be a good year.
Bad News
The tests will still be given, and they will still be used for other purposes. Those purposes include evaluating teachers, and evaluating schools.
Pennsylvania's School Performance Profile looks like it is based on a variety of measures (some of which are shaky enough-- PA schools get "points" for buying more of the College Board's AP products), but at least one research group has demonstrated that 90% of the SPP is based on test scores (one huge chunk for a VAMmy growth score and another for the level of the actual score).
So we will continue to have schools and teachers evaluated based on the results of a frustrating and senseless test that students know they have absolutely no stake in, and which they know serves no purpose except to evaluate teachers and schools. Get in there and do your very best, students!
Bonus Round
Of course, some districts tried to deal with that issue of getting student skin in the game by also phasing in a pass-the-test requirement as a local thing. So now a whole bunch of students who have been hearing that they'll have to pass the Keystones to graduate-- they'll be hearing that the state took that requirement away, except then someone will have to tell them that their local district did NOT take the requirement away. This should open up some swell discussion.
So How Do We Feel?
State Board of Education Chairman Larry Wittig (a CPA who was appointed to the board by Tom Corbett) is sad, because he thinks the whole testing program is awesome and well-designed. Wittig's reaction is itself a mixed bag. On the one hand, he thinks that the testing system is "well-crafted" and beneficial to students, which is just silly, because the test is neither of those things. On the other hand, he also said this:
If I'm a teacher and in part my evaluation is based on the result of these tests and now the tests are meaningless, I'm going to have a problem with that.
And, well, yes. "No stakes for students, high stakes for schools and teachers" kind of sucks as an approach.
And really, is there anyone in Harrisburg who wants to articulate the reasoning behind, "We don't have faith in this test's ability to fairly measure student achievement, but we do have faith in its ability to measure teacher achievement"? No? No, there doesn't seem to be anybody trying to explain the inherent self-contradiction in this position.
Perhaps the House-sponsored search for a Better Tool will yield fabulous results. But in the meantime, we've already signed Data Recognition Corporation, Inc., to a five-to-eight-year contract to keep producing the Keystone, even if we don't know what we want to use the tests for.
The good part of this news is undeniable. Two more years of students who will not have to clear a pointless, poorly-constructed hurdle before they can get their diplomas. That's a win for those students.
But to postpone the test rather than obliterate it, to keep the test in place to club teachers and schools over the head, to signal that you don't really have an idea or a plan or a clue about what the test is for and why we're giving it-- those are all big losses for teachers, for education, and for all the students who have more than two years left in the system.
Wednesday, November 25, 2015
Tuesday, February 10, 2015
Sorting the Tests
Since the beginnings of the current wave of test-driven accountability, reformsters have been excited about stack ranking-- the process of sorting out items from the very best to the very worst (and then taking a chainsaw to the very worst).
This has been one of the major supporting points for continued large-scale standardized testing-- if we didn't have test results, how would we compare students to other students, teachers to other teachers, schools to other schools? The devotion to sorting has been foundational, rarely explained but generally presented as an article of faith, a self-evident value-- well, of course, we want to compare and sort schools and teachers and students!
But you know what we still aren't sorting?
The Big Standardized Tests.
Since last summer the rhetoric to pre-empt the assault on testing has focused on "unnecessary" or "redundant" or even "bad" tests, but we have done nothing to find these tests.
Where is our stack ranking for the tests?
We have two major BSTs-- the PARCC and the SBA. In order to better know how my child is doing (isn't that one of our repeated reasons for testing?), I'd like to know which one of these is a better test. There are other state-level BSTs that we're flinging at our students willy-nilly. Which one of these is the best? Which one is the worst?
I mean, we've worked tirelessly to sort and rank teachers in our efforts to root out the bad ones, because apparently "everybody" knows some teachers are bad. Well, apparently everybody knows some tests are bad, so why aren't we tracking them down, sorting them out, and publishing their low test ratings in the local paper?
We've argued relentlessly that I need to be able to compare my student's reading ability with the reading ability of Chris McNoname in Iowa, so why can't I compare the tests that each one is taking?
I realize that coming up with a metric would be really hard, but so what? We use VAM to sort out teachers and it has been debunked by everyone except people who work for the USED. I think we've established that the sorting instrument doesn't have to be good or even valid-- it just has to generate some sort of rating.
So let's get on this. Let's come up with a stack-ranking method for sorting out the SBA and the PARCC and the Keystones and the Indiana Test of Essential Student Swellness and whatever else is out there. If we're going to rate every student and teacher and school, why would we not also rate the raters? And then once we've got the tests rated, we can throw out the bottom ten percent of them. We can offer a "merit bonus" to the company that made the best one (and peanuts to everyone else) because that will reward their excellence and encourage them to do a good job! And for the bottom twenty-five percent of the bad tests, we can call in turnaround experts to take over the company.
In fact-- why not test choice? If my student wants to take the PARCC instead of the ITESS because the PARCC is rated higher, why shouldn't my student be able to do that? And if I don't like any of them, why shouldn't I be able to create a charter test of my own in order to look out for my child's best interests? We can give every student a little testing voucher and let the money follow them to whatever test they would prefer to take from whatever vendors pop up.
Let's get on this quickly, because I think I've just figured out how to make a few million dollars, and it's going to take at least a weekend to whip up my charter test company product. Let the sorting and comparing and ranking begin!
Monday, February 2, 2015
No Grad Exams for PA?
Pennsylvania, where I live and teach, has been steadily moving toward a full array of graduation exams.
Currently we have what we call the Keystone exams for math and reading. We are one of the last states to do our exams on paper; this may be related to an attempt a few years back to do our 4Sights (test prep tests for the old exam, known as the PSSAs, which we still give to elementary students because-- well, anyway) online. That attempt was a massive clusterfinagle that wasted a week of school and resulted in zero actual scoreable tests. When it comes to online testing, we haven't been all the way around the block because we got in a ten-car pileup on the way.
The math and reading Keystones are exams that you would recognize even if you were from out of state. I attended a state-mounted training last year and the presenter used all PARCC materials, and the message was that it was perfectly comparable to the Keystones.
Our Class of 2017 has to pass the algebra (math), literature (reading), and biology Keystones in order to graduate. Depending on who's talking, there are as many as seven more subject-specific tests coming, just as soon as Harrisburg can scrape together the money to get them made.
Students are already taking the Keystones every year, partly as a way of meeting federal testing requirements and partly as a way to warm up for 2017 (everyone just tries not to tell the students that the Keystones currently mean nothing at all to them). So our students have been taking the tests, and we can see how they're doing. And as 2017 gets closer and closer, one thing becomes increasingly clear-- many students are about to have huge problems getting out of high school.
Students get two tries, and then they move on to a project-based assessment, known to my students as The Binder (or, out of my earshot, that Bigass Stupid Binder), which is essentially an independent study course in looseleaf form. There are now adapted or modified forms for students with special needs. And things could be worse-- you should have been around for the first concept for the science test, which was to test all science disciplines, forcing high schools to implement a bio-chem-physics-physiology sequence for ninth and tenth grade.
But it's going to be ugly, and lawmakers are starting to take note.
State Senator Andrew Dinniman (D-Chester County) has been trying to de-testify us for a while. But most recently a group of Republicans has gotten on the Get Off The Testing Bandwagon bandwagon.
Over at NewsWorks, Kevin McCorry reports on a new bill proposed by state Rep. Mike Tobash (R-Dauphin County) to flat-out repeal the state graduation test mandate.
"The children of the Commonwealth of Pennsylvania, they need to learn, they need to be assessed, but when we've gone so far that we end up handcuffing our educational system with really an overwhelming amount of standardized assessment, we need to stop and put the brakes on here, take a look at it," said Tobash.
Much of the conversation has centered on the tests as unfunded mandates-- schools are required to get the students through them, but have not been given the resources to meet the challenge. And that's not untrue, but it's beside the real problem.
The real problem is that the Keystone exams aren't very good.
As this discussion ramps up, many supporters of the testing are going to make an obvious comment-- if the students aren't doing well on the tests, can't teachers just fix that by doing their jobs better?
McCorry catches Tobash having close to the right thought in response.
Tobash, who testified on the matter at a hearing at Philadelphia City Hall in November, is skeptical that the tests are actually judging students on material that's applicable to the modern workforce.
The Keystones are like every other standardized test-- what they measure best is the student's ability to take a standardized test.
So we play the test prep game-- how much can we do in order to get test results without sacrificing the students' actual education? In reading, the Keystones require a particular vocabulary so that we teach the words they want to have covered with the meanings they prefer. This is part of the standard technique of using tests to dictate local curriculum.
More problematic is the whole approach to reading required by the test. Every selection on the test can be read only one way; there's only one acceptable response to the work. The Keystone also has a keen interest in students' psychic powers, regularly asking them to reach conclusions about what the author intended. Some of the interpretive questions, like the author's intent questions, really are topics that are generally considered fair game in an English classroom-- but we have those discussions as open-ended inquiries, where many ideas can be proposed and supported, but no absolute truth can be known. The exam requires the reverse.
And so the exam requires one more reading skill-- the ability to read a standardized test question and figure out what the exam writer wants you to say. This has nothing to do with learning how to be an active, capable reader of literature, and everything to do with being a compliant tool who can take instructions.
So the biggest problem with the Keystones is not that there are too many of them or they take up too much time, though that's a huge problem. The biggest problem is not that they are unfunded mandates, though that's a problem, particularly in a state that as a matter of policy rips the guts out of public school budgets so that charter and cybercharter operators get rich.
No, the biggest problem is that the Keystones twist instruction and education all out of shape, foster educational malpractice, and ultimately don't provide any of the data that their supporters think they're supposed to provide. The Keystones don't tell Harrisburg how well we're doing. They are supposed to provide all sorts of information to teachers to help us address student needs, and that's baloney as well. There's only one thing we find out from test results-- which sorts of questions our students found most tricky. Keystone results are good for refining test prep, and nothing else. The Keystone exams tell one thing and one thing only-- how well students did on the Keystone exams.
McCorry offers this quote from a rep of our new governor:
Gov. Wolf knows we need tools to measure students' progress and ensure they are equipped with the skills needed to flourish in the 21st century, but testing should not be the only measurement.
And there's our continuing problem. Not only should testing not be the only measurement-- it shouldn't be a measurement at all.
Even if standardized testing didn't heap unnecessary stress on young students, even if it didn't require wasting time on test prep that has no educational value, even if it didn't put unfunded financial demands on already-strapped districts-- even if it didn't do any of those things, the Keystone exam would still be a bad idea because it simply doesn't tell anyone what they want to know. Repealing state requirements for a graduation exam is the right choice. We'll see if Harrisburg actually makes it.