ICYMI-- one of the best summaries of the Pearson surveillance flap is by Anthony Cody over at Living in Dialogue.
In addition to covering the various types of surveillance in play, he includes a reference to the column by Cynthia Liu at K-12 News, which advances and articulates an argument that several edu-bloggers have raised. It's not just the surveillance (which does not really reach the level of spying), but the enforcement-- that Pearson et al have turned state Boards of Education into agencies tasked with preserving corporate intellectual property rights. What we're being treated to is the spectacle of a system whose first priority is watching out for the business interests of a corporation-- the rights and education of students are a lesser priority. Why do the corporation's concerns come first?
Daniel Katz has raised the question "Why is Pearson's intellectual property a thing?" It's a good question. Why did the states that joined together in consortia to hire a corporation to produce a test for them not take possession of that test at the end of the process?
I think Cody's column answers all of these questions. He writes this:
Any system that imparts heavy consequences for success or failure must have intense security.
That's correct. If the stakes are huge, that means that the people on the receiving end of potential punishments or rewards are highly motivated to make their numbers any possible way they can. If you put all the food in the castle and tell the villagers that only those who get inside the walls get to eat, you'd better believe that the villagers will be highly motivated to get past those walls any way they can.
The testing system requires Big Security because it is a big system. It's spread all over the country. Back when gold mattered, we put it all in a couple of forts because that was easier to defend. If we had put one gold bar in every city hall in America, defending it would have been a nightmare because there would be a million points of vulnerability.
The Big Standardized Test has a million points of vulnerability. BS Tests face an inherent contradiction-- security is maintained by letting as few people as possible actually see the product, and yet the product can't be used without being viewed. This means (in a relationship dynamic repeated throughout the world of education reform) that the clients are also the enemy.
All across the nation, millions of pairs of eyeballs are seeing what must not be seen.
As Cody notes, the credibility of the entire reform program rests on those tests. Everything in the reformster program depends on those tests being a fair and accurate measure of the (many, many) things they purport to measure. So the state education departments, the Data Overlords, the reformsters entrenched in various offices across the nation-- they need for the test to at least look secure and valid. This means security must be tight because 1) it's a lousy test whose gotcha questions must be sprung as a surprise and 2) the more responsible grown-ups see the test, the more criticism of the test gains traction.
So both state reformsters and corporations need tight super-security, and only the corporation has the resources. The state will be willing to pitch in on security because they have a stake in it, and they'll be willing to let Pearson et al cruise social media and deploy test police because the corporations have those kinds of resources. And of course Pearson won't actually hand over the test to the states because the system has given the states a huge stake in the security of Pearson's "intellectual property."
The issue of test security is welded to test validity, and both are bonded tightly to the issues of ed reform itself. The BS Tests have simply advanced the smooshing together of state and corporate interests. The only interests not represented in all this-- the interests of students and of public education.
Sunday, March 15, 2015
Why Critical Thinking Won't Be on The Test
Critical thinking is one of the Great White Whales of education. Every new education reform promises to foster it, and every new generation of Big Standardized Tests promises to measure it.
Everybody working in education has some idea of what it is, and yet it can be hard to put into a few words. There are entire websites devoted to it, and organizations and foundations dedicated to it. Here, for example, is the website of the Foundation for Critical Thinking. They've got a definition of critical thinking from the 1987 National Council for Excellence in Critical Thinking that goes on for five paragraphs. One of the shortest definitions I can pull out of their site is this one:
The intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action.
Bottom line-- critical thinking is complicated.
So can we believe test manufacturers when they say that their test measures critical thinking skills? Can a series of questions that can be delivered and scored on a national scale be designed that would actually measure the critical thinking skills of the test takers?
I think the obstacles to creating such a standardized test are huge. Here are the hurdles that test manufacturers would have to leap.
Critical thinking takes time.
Certainly there are people who can make rapid leaps to a conclusion, who can see patterns and structure of ideas quickly and clearly (though we could argue that's more intuitive thinking than critical thinking, but then, intuition might just be critical thinking that runs below the level of clear consciousness, so, again, complicated). But mostly the kind of analyses and evaluation that we associate with critical thinking takes time.
There's a reason that English teachers rarely give the assignment, "The instant you finish reading the last page of the assigned novel, immediately start writing the assigned paper and complete it within a half hour." Critical thinking is most often applied to complex constructions, and for most people it takes a while to examine, reflect, re-examine and pull apart the pieces of the matter.
If you are asking a question that must be answered right now, this second, you are at the very best asking a question that measures how quickly the student can critically think-- but you're probably not measuring critical thinking at all.
Critical thinking takes place in a personal context.
We do not do our critical thinking in a vacuum. We are all standing in a particular spot in space and time, and that vantage point gives us a particular perspective. What we bring to the problem in terms of prior understanding, background, and our own mental constructs, profoundly influences how we critically think about any problem.
We tend to make sense out of unfamiliar things by looking for familiar structures and patterns within them, and so our thinking is influenced by what we already know. I've been an amateur musician my whole life, so I can readily spot structures and patterns that mimic the sorts of things I know from the world of music. However, I am to athletics what Justin Bieber is to quantum physics, and my mental default is not to look at things in athletic terms. Think about your favorite teachers and explainers-- they are people who took something you couldn't understand and put it in terms you could understand. They connected what you didn't know to what you did know.
None of this is a revolutionary new insight, but we have to remember that it means every individual human being brings a different set of tools to each critical thinking problem. That means it is impossible to design a critical thinking question that is a level playing field for all test takers. Impossible.
Critical thinking is social.
How many big critical thinking problems of the world were solved single-handedly by a single, isolated human being?
Our sciences have a finely-tuned, carefully-structured method for both carrying on and acknowledging dialogue with the critical thinkers of the past. If a scientist popped up claiming to have written a groundbreaking paper for which he needed neither citations nor footnotes because he had done it all himself, he would be lucky to be taken seriously for five minutes. The Einsteins of history worked in constant dialogue with other scientists; quantum theories were hammered out in part through dialogue between a disbelieving Einstein ("God does not play dice") and the wave of scientists building on the implications of his work.
On the less grand scale, we find our own students who want to talk about the test, want to compare answers, want to (and sometimes love to) argue about the finer points of every thinking assignment.
Look at our own field. We've all been working on a big final test question-- "What is the best way to take American education forward?"-- and almost everyone on every side of the question is involved in a huge sprawling debate that sees most of us pushing forward by trying to articulate our own perspective and thoughts while in dialogue with hundreds of other thinkers in varying degrees of agreement and disagreement. One of the reasons I trust and believe David Coleman far less than other reformsters is that he almost never acknowledges the value of any other thinker in his development of Common Core. To watch Coleman talk, you would think he developed the entire thing single-handedly in his own head. That is not the mark of a serious person.
Do people occasionally single-handedly solve critical thinking problems on their own, in isolation, like a keep-your-eyes-on-your-own-paper test? It's certainly not unheard of-- but it's not the norm. If your goal is to make the student answer the question in an isolation chamber, you are not testing critical thinking.
Critical thinking is divergent.
Let's go back to that critical thinking problem about how to best move forward with public education. You may have noticed that people have arrived at a wide variety of conclusions about what the answer might be. There are two possible explanations for the wide variety of answers.
The first explanation is the childish one, and folks from both sides indulge in it-- people who have reached a conclusion other than mine are some combination of stupid, uninformed, and evil.
The more likely explanation is that, given a wide variety of different perspectives, different histories, and different values, intelligent people will use critical thinking skills and arrive at different conclusions.
Critical thinking is NOT starting with the conclusion that you want to reach and then constructing a bridge of arguments specifically designed to get you there, and yet this is perilously close to the kind of thinking a standardized test requires.
But here's a good rule of thumb for anyone trying to test critical thinking skills-- if you are designing your assessment and thinking, "Okay, any student who is really using critical thinking skills must come up with answer B," you are not testing critical thinking skills. No-- I take that back. Oddly enough this is a sort of critical thinking question, but the actual question is, "Given what you know about the people giving you the test and the clues they have left for you, what answer do you think the testmakers want you to select?" But that is probably not the question that you thought you were asking. As soon as you ask a question with one right answer (even if the one right answer is to select both correct answers), you are not testing critical thinking.
Critical thinking must be assessed by critical thinking.
How do you assess the answer to your critical thinking question? Again, I direct you to the education debates, where we "grade" each other's work all the time, checking and analyzing, probing for logical fallacies, mis-presentation of data, mis-reading of other people's writing, honesty of logic, etc etc etc.
To assess how well someone has answered a critical thinking question, you need to be knowledgeable about the answerer, the subject matter, and whatever background knowledge they have brought to the table (if I answer a question using a music analogy and you know nothing about music, will you know whether my analogy holds up?). On top of all that, you need some critical thinking skills of your own. And that means all of the issues listed above come back into play.
What are the odds that you can get all that in a cadre of minimum-wage test-scorers who can wade through a nation's worth of tests quickly, efficiently, and accurately?
Can it be done?
When I look at all those hurdles and try to imagine a nationally scaled test that deals with all of them, I'm stumped. Heck, it's a challenge to come up with a good measure for my own classroom, and that's because critical thinking is more of a tool than an end in itself. Testing for critical thinking skills is kind of like testing for hammering skills-- it can be done, but it will be an artificial situation and not as compelling and useful and telling as having the student actually build something.
So I try to come up with assessments that require critical thinking as a tool for completion of the assignment. Then I try to come up with the time to grade them. Could I come up with something for the entire nation? Practically speaking, no. Even if I get past the first few hurdles, when I reach the point that I need a couple million teachers to score it, I'm stumped. Plus, standardized test fans are not going to like the lack of standardization in my test.
No, I think that standardized testing and critical thinking are permanently at odds and we'd be further ahead trying to develop a test to compare the flammability of the water from different rivers.
Critical thinking is not on the BS Tests. It will not be on the new generations of the BS Tests. It will never be on the BS Tests. Test manufacturers should stop promising what they cannot hope to deliver.
Saturday, March 14, 2015
Canaries, Schools & Poverty
Let's step back from public education itself for a moment. Look at the bigger picture.
The economic engine of the US is messed up. Call it conspiracy, policy, oligarchy, or just a bad turn-- the poor are being left further and further behind, and the rich are consolidating their own piece of the pie. The shared energy and mission of the country are fragmenting, and more and more people and communities are sinking into poverty. But this is a long, slow process, and it wouldn't show up everywhere at once.
What might be a leading indicator of the growing corrosive and destructive power of poverty? How about schools-- the common good that is supposed to be provided by all citizens for all citizens.
Public education is the canary in the coal mine, an early notable indicator that something is wrong, that something toxic and damaging is in the air. And of the public schools, those that are already weak and poor, least able to stand the shock and the strain, that are most bowed under the weight of poverty will start to falter first.
Now, when the canary starts to falter and fall, that's a sign that something is wrong, that we need to get the people out of the mine or more fresh air into it.
But suppose instead we had a bunch of people who said, "No, what we need to do is work on resuscitating the canary! We need to hire canary doctors and develop new canary breathing programs." Those would be the reformers. They are not wrong about the distress of the canary, or the need to do something before the canary dies, but it's a huge mistake to ignore the conditions that are killing the canary in the first place. All the respiratory therapy in the world will not save the canary if we don't get it some oxygen and get rid of the bad gas poisoning its system.
The Data Overlords want to run extensive tests on the canary. "Let's measure its oxygen intake every five minutes. If we keep measuring, it should start breathing more freely." When questioned on that point, they simply reply, "Look, this is the same oxygen intake test we use for those canaries up on the surface in the special gilded cages. Why shouldn't these mine canaries get to take the same test?"
Charter operators just want to bring in other canaries. "Your canary is weak and stupid and has a bad attitude," they say. "What we need are these fresh new alternative canaries. Once we get those canaries in there, they will breathe so much better than your dumb canary."
Meanwhile, the profiteers are in talks with the mine operators. "If you would just unleash the power of the free market, we could make a delicious and profitable canary stew."
When some folks try to push the idea that pumping oxygen into the mine could help revive the canary, some reformers cry foul. "What's the matter with you? Don't you believe this canary can breathe? Do you think this canary isn't good enough to survive?!"
Meanwhile, the canaries continue to die and the air in the mine becomes more and more toxic, until not a canary or an eagle or a full grown human could hope to survive there. The canaries absolutely deserve attention and assistance, and we absolutely have an imperative to keep them alive. But if we don't find a way to replace the bad air with good, to sweep out the lung-clenching methane of poverty and bring in some oxygen, we'll just be stuck in an endless cycle of canary rescue, complete with arguments about how to rescue the canary, who should rescue the canary, and whether or not anyone can profit from rescuing the canary. All the while the bad air spreads.
York PA: Charters Blocked
You may recall that the last time we checked in, York PA was on a fast track to the suspension of democracy. But that train has been called back to the station.
York Schools were among the PA schools suffering severe financial distress (PA has operated with a school funding system that produces a lot of local financial hardship). The previous administration of Tom Corbett had used that as a trigger to install a district recovery officer, and just a few months ago-- almost as if we were in a hurry to do a deal before the new governor took office-- a PA judge ruled that the district could go into receivership, a nifty system in which the democratically elected school board is stripped of power and the state-appointed receiver could do as he wished.
What David Meckley, the receiver, wanted to do was turn the whole district over to the for-profit charter chain Charter Schools USA. Lots of people thought that was an awful idea (among other problems, there was no reason to believe that CSUSA had a clue what to do with the district once they took it over). The judge who ruled in the case did so based on a close reading of the law, declaring that even if the plan was clearly terrible, that wasn't his problem. That ruling was being appealed.
But now all of that has come to a screeching halt.
The full account is in Friday's York Daily Record. The short headline version is simple-- David Meckley has resigned as recovery officer. The longer version is encouraging for Pennsylvanians (like me) who weren't really sure which way new governor Tom Wolf's wind would be blowing-- Meckley resigned because the governor's office made it plain that charters were off the table.
There was apparently an intermediate stage, during which Meckley and locals and the state fiddled with a charter-public mix plan.
Meckley said in an interview that, around December, he, district administrators, the proposed charter board and some community leaders had crafted an alternative plan that involved a mix of district- and charter-run buildings. He said he had significant conversations with the Wolf administration about it, but "ultimately the position came down that charters are off the table."
And so, reading the writing on the wall, Meckley has resigned, and the search for a new receiver is on. The board president is wryly hopeful.
"My understanding is they wanted to put someone in that position who knows about the educational aspect of schools," she said.
Meckley, even on his way out the door, continued to demonstrate that he was not that education-understanding guy by expressing his belief that a receivership was necessary because if the schools weren't going to be punished into excellence, they would never get there (I'm paraphrasing).
Wolf has stated, via his proposed budget, his intention to get funding back up to a higher level in Pennsylvania. What the budget will actually look like once it gets past the GOP-controlled legislature is another question. But this move in York follows Wolf's replacement of the chairman of the board that runs Philly schools after the defrocked chair approved more charters in Philly in opposition to Wolf's stated desire to have no more Philly charters.
Meanwhile, York has plenty of problems still to solve. The York Daily Record quotes Clovis Gallon, a teacher who was one of the leaders of the local charter opposition:
"Clearly we recognize the fact there's a lot of work to do with our students, with our community, with our school district," he said. "We're ready to accept that challenge. As a parent, as a teacher, I'm ready to accept the challenge."
Pearson Proves PARCC Stinks
When I was in tenth grade, I took a course called Biological Sciences Curriculum Studies (BSCS). It was a course known for its rigor and for its exceedingly tough tests.
The security on these tests? Absolutely zero. We took them as take-home tests. We had test-taking parties. We called up older siblings who were biology majors. The teacher knew we did these things. The teacher did not care, and it did not matter, because the tests required reasoning and the application of a basic understanding of the scientific concepts. It wasn't enough, for instance, to know the parts of a single-celled organism-- you had to work out how those parts were analogous to the various parts of a city where the residents made pottery. You had to break down the implications of experimental design. And as an extra touch, after taking the test for a week outside of class, you had to take a different version of the same test (basically the same questions in a different order) in class.
Did people fail these zero-security take home tests? Oh, yes. They did.
I often think of those tests these days, because they were everything that modern standardized test manufacturers claim their tests are.
Test manufacturers and their proxies tell us repeatedly that their tests require critical thinking, rigorous mental application, answering questions with more than just rote knowledge.
They are lying.
They prove they are lying with their relentless emphasis on test security. Teachers may not look at the test, cannot so much as read the questions closely enough to understand the essence of them. Students, teachers, and parents are not allowed to know anything specific about student responses after the fact (making the tests even less useful than they could possibly be).
And now, of course, we've learned that Pearson apparently has a super-secret cyber-security squad that just cruises the interwebs, looking for any miscreant teens who are violating the security of the test and calling the state and local authorities to have those students punished (and, perhaps, mounting denial of service attacks on any bloggers who dare to blog about it).
This shows a number of things, not the least of which is what everyone should already have known-- Pearson puts its own business interests ahead of anything and everything.
But it also tells us something about the test.
You know what kind of test needs this sort of extreme security? A crappy one.
Questions that test "critical thinking" do not test it by saying, "Okay, you can only have a couple of minutes to read and think about this because if you had time to think about it, that wouldn't be critical thinking." A good, solid critical thinking question could take weeks to answer.
Test manufacturers and their cheerleaders like to say that these tests are impervious to test prep-- but if that were true, no security would be necessary. If the tests were impervious to any kind of advance preparation aimed directly at those tests, test manufacturers would be able to throw the tests out there in plain sight, like my tenth grade biology teacher did.
A good assessment has no shortcuts and needs no security. Look at performance-based measures-- no athlete shows up at an event and discovers at that moment, "Surprise! Today you're jumping over that bar!"
Authentic assessment is no surprise at all. It is exactly what you expect because it is exactly what you prepared for, exactly what you've been doing all along-- just, this time, for a grade.
That Big Stupid Test manufacturers insist their test must be a surprise, that nobody can know anything about it, is a giant, screaming red alarm signal that these tests are crap. In what other industry can you sell a customer a product and refuse to allow them to look at it? It's like selling the emperor his new clothes and telling him they have to stay in the factory closet. Who falls for this kind of bad sales pitch? "Let me sell you this awesome new car, but you can never drive it and it will stay parked in our factory garage. We will drive you around in it, but you must be blindfolded. Trust us. It's a great car." Who falls for that??!!
The fact that they will go to such extreme and indefensible lengths to preserve the security of their product is just further proof that their product cannot survive even the simplest scrutiny.
The fact that product security trumps use of the product raises this all to a super-Kafkaesque level. It is more important that test security be maintained than it is that teachers and parents get any detailed and useful information from it. Test fans like to compare these tests to, say, tests at a doctor's office. That's a bogus comparison, but even if it weren't, test manufacturers have created a doctor's office in which the doctor won't tell you what test you're getting, and when the test results come back STILL won't tell you what kind of test they gave you and will only tell you whether you're sick or well-- but nothing else, because the details of your test results are proprietary and must remain a secret.
Test manufacturers like Pearson are right about one thing-- we don't need the tests to know how badly they suck, because this crazy-pants emphasis on product security tells us all we need to know. These are tests that can't survive the light of day, that are so frail and fragile and ineffectual that these tests can never be tested, seen, examined, or even, apparently, discussed.
Test manufacturers are telling us, via their security measures, just how badly these tests suck. People just have to start listening.
Pearson Is Big Brother
You've already heard the story by now-- Pearson has been found monitoring students on social media in New Jersey, catching them tweeting about the PARCC test, and contacting the state Department of Education so that the DOE can contact the local school district to get the students in trouble.
You can read the story here at the blog of NJ journalist Bob Braun. Well, unless the site is down again. Since posting the story, Braun's site has gone down twice that I know of. Initially it looked like Braun had simply broken the internet, as readers flocked to the report. Late last night Braun took to facebook to report that the site was under attack and that he had taken it down to stop the attack. As I write this (6:17 AM Saturday) the site and the story are up, though loading slowly.
The story was broken by Superintendent Elizabeth Jewett of the Watchung Hills Regional High School district in an email to her colleagues. In contacting Jewett, Braun learned that she confirmed three instances in which Pearson contacted the NJDOE to turn over miscreant students for the state to track down and punish. [Update: Jewett here authenticates the email that Braun ran.]
Meanwhile, many alert eyes turned up this: Pearson's Tracx, a program that may or may not allow the kind of monitoring we're talking about here.
Several thoughts occur. First, under exactly whose policy are these students to be punished? Does the PARCC involve them taking the same kind of high-security secrecy pledge that teachers are required to take, and would such a pledge be binding if signed by a minor, anyway?
How does this fit with the ample case law already establishing that, for instance, students can go online and create websites or fake facebook accounts mocking school administrators? They can mock their schools, but they have to leave important corporations alone?
I'm also wondering, again, how any test that requires this much tight security could not suck. Seriously.
How much of the massive chunk of money paid by NJ went to the line item "keep an eye on students online"?
Granted, the use of the word "spying" is a bit much-- social media are not exactly secret places where the expectation of privacy is reasonable or enforceable, and spying on someone there is a little like spying on someone in a Wal-mart. But it's still creepy, and it's still one more clear indicator that Pearson's number one concern is Pearson's business interests, not students or schools or anything else. And while this is not exactly spying, the fact that Pearson never said a public word about their special test police cyber-squad, not even to spin it in some useful way, shows just how far above student, school, and state government they envision themselves to be.
Pearson really is Big Brother-- and not just to students, but to their parents, their schools, and their state government. It's time to put some serious pressure on politicians. If they're even able to stand up to Pearson at this point, now is the time for them to show us.
Friday, March 13, 2015
PA: All About the Tests (And Poverty)
In Pennsylvania, we rate schools with the School Performance Profile (SPP). Now a new research report reveals that the SPP is pretty much just a means of converting test scores into a school rating. This has huge implications for all teachers in PA because our teacher evaluations include the SPP for the school at which we teach.
Research for Action, a Philly-based education research group, just released its new brief, "Pennsylvania's School Performance Profile: Not the Sum of Its Parts." The short version of its findings is pretty stark and not very encouraging--
90% of the SPP is directly based on test results.
90%.
SPP is our answer to the USED waiver requirement for a test-based school-level student achievement report. It replaces the old Adequate Yearly Progress of NCLB days by supposedly considering student growth instead of simple raw scores. It rates schools on a scale of 0-100, with 70 or above considered "passing." In addition to being used to rate schools and teachers, SPPs get trotted out any time someone wants to make a political argument about failing schools.
RFA was particularly interested in looking at the degree to which SPP actually reflects poverty level, and their introduction includes this sentence:
Studies both in the United States and internationally have established a consistent, negative link between poverty and student outcomes on standardized tests, and found that this relationship has become stronger in recent years.
Emphasis mine. But let's move on.
SPP is put together from a variety of calculations performed on test scores. Five of the six components-- which account for 90% of the score-- "rely entirely on test scores."
Our analysis finds that this reliance on test scores, despite the partial use of growth measures, results in a school rating system that favors more advantaged schools.
Emphasis theirs.
The brief opens with a consideration of the correlation of SPP to poverty. I suggest you go look at the graph for yourself, but I will tell you that you don't need any statistics background at all to see the clear correlation between poverty and a lower SPP. And as we break down the elements of the SPP, it's easy to see why the correlation is there.
Indicators of Academic Achievement (40%)
Forty percent of the school's SPP comes from a proficiency rating (aka just plain straight-on test results) that comes from tested subjects, third grade reading, and the SAT/ACT College Ready Benchmark. Whether we're talking third grade reading or high school Keystone exams, "performance declines as poverty increases."*
Out of 2,200 schools sampled, 187 had proficiency ratings higher than 90, and only seven of those had more than 50% economically disadvantaged enrollment. Five of those were Philly magnet schools.
Indicators of Academic Growth aka PVAAS (40%)
PVAAS is our version of a VAM rating, in which we compare actual student performance to the performance of imaginary students in an alternate neutral universe run through a magical formula that corrects for everything in the world except teacher influence. It is junk science.
RFA found that while the correlation with poverty was still there, when it came to PSSAs (our elementary test) it was not quite as strong as the proficiency correlation. For the Keystones, writing and science tests, however, the correlation with poverty is, well, robust. Strong. Undeniable. Among other things, this means that you can blunt the impact of Keystone test results by getting some PSSA test-takers under the same roof. Time to start that 5-9 middle school!!
Closing the Achievement Gap (10%)
This particular measure has a built-in penalty for low-achieving schools (aka high poverty schools-- see above). Basically, you've got six years to close half the proficiency gap between where you are and 100%. If you have 50% proficiency, you've got six years to hit 75%. If you have 60%, you have six years to hit 80%. The lower you are, the more students you must drag over the test score finish line.
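To make that built-in penalty concrete, here's a minimal sketch of the halve-the-gap arithmetic in Python-- a hypothetical illustration of the rule as described above, not the official SPP formula:

def six_year_target(current_pct):
    # Halve-the-gap rule (as described above): the six-year target
    # sits midway between current proficiency and 100%.
    return current_pct + (100 - current_pct) / 2

for start in (50, 60, 80, 90):
    target = six_year_target(start)
    print(f"{start}% proficient -> target {target:g}%, gain needed: {target - start:g} points")

Run that and the penalty is plain: a school starting at 50% proficiency must gain 25 points in six years, while a school starting at 90% needs only 5.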
That last 10%, incidentally, is made up of items like graduation rate and attendance rate. Pennsylvania also gives you points for the number of students you can convince to buy the products and services of the College Board, including AP stuff and the PSAT. So kudos to the College Board people on superior product placement. Remember kids-- give your money to the College Board. It's the law!
Bottom line-- we have schools in PA being judged directly on test performance, and we have data once again clearly showing that the state could save a ton of money by simply issuing school ratings based on the income level of students.
For those who want to complain, "How dare you say those poor kids can't achieve," I'll add this. We aren't measuring whether poor kids can achieve, learn, accomplish great things, or grow up to be exemplary adults-- there is no disputing that they can do all those things. But we aren't measuring that. We are measuring how well they do on a crappy standardized test, and the fact that poverty correlates with results on that crappy test should be a screaming red siren that the crappy test is not measuring what people claim it measures.
*Correction: I had originally included a mistyping here that reversed the meaning of the study.