The Marshmallow Experiment was a series of studies performed at Stanford starting in the late sixties (and also not a bad band name) that purported to study self-control and the ability to defer gratification. It comes up these days in many contexts, including discussions of grit and prudence and character education.
It also shows the flaws in some models of human behavior.
In case you slept in that day in Psych 101, here's the basic layout. Put a child and some marshmallows in a room together. Promise the child even more marshmallows if she'll refrain from eating the ones in front of her. Then leave the room. The child's subsequent behavior provides a measure of how much ability the child has to delay gratification.
In 2012, a new variation on the study was conducted at the Rochester Baby Lab, and it revealed a whole aspect of the problem that was not covered in the original experiments.
What if the child's ability isn't the only important variable? What if environment also matters?
The new experiment varied the environment. Some children dealt with a reliable environment, and some with an unreliable one in which the adults were not as good as their word.
The format was a one-two punch. The children were given a box of lame crayons and told, "If you can just wait a minute, I'll bring back some better art supplies." Then they were left alone with a sticker and told, "Don't touch this sticker and I'll bring you a bunch of better ones."
In the reliable environment, the adult followed through as promised. In the other environment the adult returned empty-handed with excuses. And then it was time for the marshmallow.
The effect was huge. The mean wait time for children in the unreliable environment was about three minutes. For those in the reliable environment, about twelve. Compared to previous research, that's half as much waiting for unreliables and twice as much waiting for reliables.
In other words, the quality of deferred gratification is not just an innate, immutable quality that the child possesses in some sort of vacuum-- it's a rational, reasoned response to what one knows about conditions in the environment. Put another way, this quality of "self-control" is really about the relationship between the person and the environment (particularly the parts of that environment shaped by other people).
The broadest conclusion I can draw from this is that what we often ascribe to deficiencies in a person's character are actually behaviors developed in response to that person's environment. We are focusing on the person when we should be focusing on the relationship between that person and the surroundings.
Say your engine is running hot. Should you be looking for a particular engine part that is running with too much friction, or should you check the oil? Say your child has developed hives all around his upper torso and arms where his shirt touches his skin. Should you worry about why his skin has such a hivey quality, or should you be checking to see if he's having an allergic reaction to something in the shirt?
Say your kid won't wait long when you set a marshmallow in front of her. Should you declare the child character-deficient, with a sad lack of self-control? Or should you look at the environment that child lives in every day and ask how it has taught her that waiting is a fool's game?
Children are learning machines. They are learning all the time, and they are learning lessons like whether or not the world around them can be trusted or counted on. When they arrive at school, they have already earned a PhD in Human Behavior, and they operate with a set of assumptions based on what they've learned.
It is not helpful to label children who have learned certain lessons from their environment, and who now make choices based on what they've learned, as character-deficient simply because the lessons they've learned are different from the lessons we wish they had learned.
If it helps, think of the conclusions you reach about students as marshmallows. You can reach some conclusions quickly and easily right now. Or you can wait, and you'll get more to work with. Show some self-control.
Thursday, February 12, 2015
Standardized Tests: Bitter But Necessary Medicine?
At EdWeek, Cristina Duncan Evans had this to say about standardized testing:
What's worse than annual standardized testing? Not having it at all.
Well, no. I don't think so. Her argument is not an unusual one.
What would happen if we no longer had to take the bitter pill of standardized testing? At the most basic level, it would become much harder to figure out which schools aren't doing an adequate job of reaching students.
I don't think so. I don't believe that standardized tests are telling us that now, so this is kind of like arguing that closing down the telegraph company would be bad because I would never get any more phone calls from that guy who never calls me on the phone.
There are at least two disconnects. One, the tests aren't telling us how adequate schools are, and two, they never will, because they can't.
Politicians and bureaucrats could game statistics to make achievement gaps disappear in order to appeal to voters who don't know what is going on in their local schools.
Yes, because the past decade of test-driven accountability has kept politicians so honest.
In fact, we've been treated to a decade of politicians gaming statistics to make schools look like failures in order to justify initiatives for charters, vouchers, turnaround scammers and other folks lined up to get their mitts on the goose that lays golden taxpayer-financed eggs. If there's anything standardized tests have NOT been used for, it's to let people know what's going on in their local schools.
And, as always, I have a problem with the idea that local folks have no knowledge of what's going on in schools unless a government bureaucrat with a test-results spreadsheet tells them.
Without comparisons, failing schools would face little pressure to improve.
Really? Nobody would know they were failing? Not students nor parents nor teachers working there? And the only clue, the only possible hint that they were failing would be standardized test results? A click-and-bubble test that narrowly measures slim aspects of two disciplines is the best measure we can think of for telling whether a school is failing or not?
The needs of historically underserved populations would go unnoticed beyond their classrooms.
I just addressed this, so I'll be brief. This is a legitimate concern, but after a decade-plus of NCLB, there is no evidence that standardized tests help with the issue in the slightest, and plenty of evidence that they hurt.
Without standardized testing, successful schools with a strong sense of mission would continue to thrive, but would their lessons be adopted for all students?
Because other teachers aren't interested in hearing about what works, or because they have no means of contacting fellow professionals? And why does success need to be scalable? Can it be scalable? What makes you think that something that works at my school with my students when implemented by me will work at your school in your classroom with your students? I think I'm a pretty good husband to my wife. Does it follow that my statement is only true if I would be a great husband to every straight woman and gay man in America?
In the comments, Evans goes on to underline that she believes we need to be able to compare schools so that we know if students are getting a good education. This makes no sense. Do I need to compare my performance as a husband to that of other husbands to know whether I have a good marriage, or can my wife and I depend on our own judgment of our own circumstances? Every student should get a good education, and that means something different in every situation. Comparison has nothing to do with it.
Then in the comments Evans adds this:
That's why I favor fewer, better tests that are well designed and that align with not just standards, but our values. If we value critical thinking, creativity, and depth of knowledge, then we need to design assessments that measure those things. Would that be expensive? Certainly. Would such assessments be computer graded? Almost certainly not.
Sigh. I favor magical unicorns flying in on rainbow wings to lick my head and make my hair magically grow back. But it's not going to happen. I agree that the tests she describes would be useful, but we don't have those tests, and we are never, ever, ever, EVER going to have those tests. Instead we have tests that devalue and disincentivize the qualities she lists. She really lost me here-- it's like saying we'd like a really great house paint for our home, but until we can have that, we'll just have to bathe the walls in flames instead.
Finally, this:
I don't trust schools and states to equitable teach ALL of their students without some oversight, because historically, that just doesn't tend to happen in this country.
In this, we agree. But I don't think standardized tests help with this problem in the slightest. In fact, they make things worse by creating the illusion that the issue is being addressed and by taking resources away from initiatives that would actually help. Standardized tests are not the solution, not in the slightest.
What's The Matter With Indiana
In the modern era of education reform, each state has tried to create its own special brand of educational dysfunction. If the point of Common Core-related reforms was to bring standardization to the country's many and varied state systems, it has failed miserably by failing in fifty different ways.
What Indiana provides is an example of what happens when the political process completely overwhelms educational concerns. If there is anyone in the Indiana state capitol more worried about educating students than about political maneuvering and posturing, it's not immediately evident who that person might be.
The marquee conflagration of the moment is the announcement of a new Big Standardized Test that will take twelve hours to complete. This announcement has triggered a veritable stampede from responsibility, as every elected official in Indianapolis tries to put some air space between themselves and this testing disaster. And it brings up some of the underlying issues of the moment in Indiana.
Currently, all roads lead to Glenda Ritz.
Back before the fall of 2012, Indiana had become a reformster playground. They'd made early strides solving the puzzle of how to turn an entire urban school district over to privatizers, and they loved them some Common Core, too. Tony Bennett, buddy of Jeb Bush and big-time Chief for Change, was running the state's education department just the way reformsters thought it should be done. And then came the 2012 election.
Bennett was the public face of Indiana education reform. He dumped a ton of money into the race. And he lost. Not just lost, but looooooooosssssssssst!!! As is frequently noted, Glenda Ritz was elected Superintendent of Public Instruction with more votes than Governor Mike Pence. I like this account of the fallout by Joy Resmovits mostly because it includes a quote from Mike Petrilli that I think captures well the reaction of reformsters when Bennett lost.
"Shit shit shit shit shit," he said. "You can quote me on that."
After Ritz became a Democratic education chief in a GOP administration, Republican politicians decided that given her overwhelming electoral victory, they'd better just suck it up and find a way to honor the will of the people by working productively with her to fashion bipartisan educational policies that put the needs of Indiana's students ahead of political gamesmanship. Ha! Just kidding. The GOP started using every trick they could think of to strip Ritz of her power.
As Scott Elliott tells it in this piece at Chalkbeat, things actually started out okay, with Ritz and the Pence administration carving out some useful compromises. Elliott marks the start of open warfare with Bennettgate-- the release of emails showing that Tony Bennett had gamed the less-than-awesome Indiana school grading system to favor certain charter operators.
Certainly Ritz and Pence have different ideas about how to operate an education system. Mike Pence particularly loves charters-- so much so that he has made the unusual move of proposing that charter schools be paid $1,500 more per student than public schools (so forget all about that charters-are-cheaper business).
Indiana has also created a complicated relationship with the Common Core, legislating a withdrawal from the Core, but one that required the state to do it without losing their federal bribes, er, payments. The result was a fat-free Twinkie of education standards-- not enough like the original for some people and too much like it for others.
The Indiana GOP has been trying to separate Ritz from any power. They cite any number of complaints about her work style and competence (the GOP president of the Senate famously commented "In all fairness, Superintendent Ritz was a librarian, okay?") and most of the complaints smell like nothing but political posturing.
It's understandable that the state Board of Education would be a cantankerous group. Consider this op-ed piece from Gordon Hendry, newest member of the board, Democrat, attorney, business exec, and director of economic development under former Indianapolis mayor and current charter profiteer Bart Peterson. Hendry opens with, "To me, education policy is economic policy" (pro tip, Mr. Hendry-- education policy is education policy). After castigating Ritz for not running pleasant, orderly meetings (because her job is, apparently, to make alleged grownups behave like actual grownups), Hendry works up to this:
As a Democrat, I don't know why the superintendent insists on creating conflict where rational debate should instead exist.
That just sets off the bovine fecal detector into loud whoops. First, we've got an accusation buried as an assumption (she's the one creating conflict). Combine that with playing the feigned ignorance card-- I just have no idea why she could be so touchy! Really, dude? I'm all the way over here in Pennsylvania, and I can tell why she might be involved in some crankypants activity. I'm pretty sure winning an election and being forced to work with people who dismiss you and try to cut you out of power-- I'm pretty sure that would put someone in a bad mood. So I can understand finding her ideas obnoxious and disagreeing with how she runs a meeting, but when you claim her point of view is incomprehensible, that tells me way more about you than about her.
Most of the statements I read coming out of Indiana are like that-- they carry a screaming barely-subtext of "I am just stringing words together in a way that I've calculated might bring political advantage, but I am paying no real attention to what they actually mean to real humans."
I have no idea how good at her job Glenda Ritz actually is, but the political statement represented by her landslide election seems clear enough, and it's a little astonishing that Indiana's leaders are so hell-bent on thwarting the will of the electorate. But damned if the legislature isn't trying to strip her of chairmanship of the Board of Education.
Meanwhile, the fat-free Twinkie standards have spawned some massive tumor of a test, coming in at an advertised length of twelve hours, which breaks down to A) weeks of wasted classroom time and B) at least six hours worth of frustrated and bored students making random marks, which of course gets Indiana C) results even more meaningless than the usual standardized test results, although D) McGraw-Hill will still make a mountain of money for producing it. Whose fault is that? Tom LoBianco seems close to the answer when he says, basically, everyone. (Although Pence has offered a gubernatorial edict that the test be cut to six hours, so, I don't know-- just do every other page, kids? Not sure exactly how one cuts a test in half in about a week, but perhaps Indiana is a land of miracles.*) But it's hard for me not to see Ritz and Indiana schools as the victims of a system so clogged and choked with political asshattery that it may well be impossible to get anything done that actually benefits the students of Indiana.
UPDATE: On February 11, the Senate Education Committee gave the okay to a bill that would exempt voucher schools from taking the same assessment as public schools. In fact, the voucher schools can just go ahead and create a test of their own. It is remarkable that the State of Indiana has not just closed all public schools, dumped all the education money in a giant Scrooge McDuck-sized vault, and sold tickets to just go in and dive around in it.
There's going to be a rally at the Statehouse on Monday, February 16th. If I were an Indiana taxpayer-- hell, if I were a live human who lived considerably closer-- I would be there. This is a state that really hates its public schools.
* Edit-- I somehow lost the sentence about the shortening of the test in posting. I've since put the parenthetical point back.
Wednesday, February 11, 2015
Testing the Invisibles
Last weekend, Chad Aldeman of Bellwether Education Partners took to the op-ed pages of the NYT to make his case for annual standardized testing. I offered my response to that here (short version: I found it mostly unconvincing).
But Aldeman is back today on Bellwether's blog to elaborate on one of his supporting points, and I think it's worth responding to because it's one of the more complicated fails in the pro-testing argument.
Aldeman's point is this: NCLB's requirement that districts be accountable for subgroups forced schools to pay attention to previously-ignored portions of their student population, and that led to extra attention that paid off in test score gains for members of those groups. Aldeman did some data crunching, and he believes the crunched results show "a move away from annual testing would leave many subgroups and more than 1 million students functionally “invisible” to state accountability systems."
This whole portion of the testing argument shows a perfect pairing of a real problem and a false solution. I just wrote about how this technique works, but let me lay out what the issue is here.
I believe that Aldeman's statement of the basic issue is valid. I believe that we are right to question just how much certain school districts hope to hide their problem students, their difficult students, their we-just-aren't-sure-what-to-do-with-them students. I believe it's right to make sure that a school is serving all students, regardless of race, ability, class, or any other differential identifier you care to name.
But where Aldeman and I part ways comes next.
Are tests our only eyes?
Aldeman adds a bunch of specific data about how many groups of students at various districts would become invisible if annual testing stopped, which just makes me ask-- is a BST the only possible way to see those students? There's no other possible measure, like, say, the actual grades and class performance in the school, that the groups could be broken out of? (And-- it should be noted that Aldeman skips right over the part where we ask if any such ignoring and invisibility was actually taking place.)
Because I'm thinking that not only are Big Standardized Tests not the only possible way to hold schools accountable for how they educate the subgroups, but they aren't even the best way. Or a good way.
Disaggregated bad data is still bad data.
Making sure that we break out test results for certain subgroups is only useful if the test results tell us something useful. There's no reason to believe that the PARCC, the SBA, and the various other Big Standardized Tests tell us anything significant about the quality of a student's education.
Aldeman writes that losing the annual BST would be bad "because NCLB’s emphasis on historically disadvantaged groups forced schools to pay attention to these groups and led to real achievement gains." But by "real achievement gains" Aldeman just means better test scores, and after over a decade of test-based accountability, we still have no real evidence that test scores have anything to do with real educational achievement.
This part of the argument continues to be tautological-- we need to get these students' test scores because otherwise, how will we know what their test scores are? The testy worm continues to devour its own tail, but still nobody can offer evidence that the BST measures any of the things we are rightfully concerned about.
Still, even as bad data, it forces school districts to pay attention to these "historically disadvantaged groups." That's got to be a good thing, right?
Well, no.
The other point that goes unexamined by Aldeman and other advocates of this argument is just what being visible gets these students.
Once we have disaggregated a group and rendered them visible, what exactly comes next?
Does the local district say, "Wow-- we must take steps to redirect resources and staff to make sure the school provides a richer, fuller, better education to these students"? Does the state say, "This district needs an increase in state education aid money in order to meet the needs of these students"?
Generally, no.
Instead, the students with low test scores win a free trip to the bowels of test-prep hell. Since NCLB began, we've heard a steady drip-drip-drip of stories about students who, having failed the BST (or the BST pre-test that schools started giving for precisely the purpose of spotting probable test-failers before they killed the school's numbers), lose access to art and music and gym or even science and history. These students get tagged for days filled with practice tests, test prep, test practice, test sundaes with test cherries on top. In order to ensure that their test scores go up, their access to a full, rounded education goes down. This is particularly damaging when we're talking about students who have great strengths in areas that have nothing to do with taking a standardized reading and math test.
Disaggregation also makes it easier to inflict Death By Subgroup on a school. Too many low BST subgroup failures, and a school can become a target for turnaround or privatization.
Visibility needs a purpose
Nobody should be invisible-- not in school, not in life. But it's not enough just to be seen. It matters what people do once they see you.
So far we have mostly failed to translate visibility into a better education for members of the subgroups. In fact, at many schools we have actually given them less education, an education in nothing but test taking. And by making them the instruments of a school's punishment, we encourage schools to view these students as problems and obstacles rather than human beings to assist and serve.
NCLB turned schools backwards, turning children from students to be served by the school into employees whose job is to earn good test scores for the school. As with many portions of NCLB, the original goal may well have been noble, but the execution turned that goal into a toxic backwards version of itself.
Making sure that "historically disadvantaged subgroups" don't become overlooked and under-served (or, for that matter, ejected by a charter school for being low achievers) is a laudable and essential goal, but using Big Standardized Tests, annually or otherwise, fails as an instrument of achieving that goal.
ESEA Hearing: What Wasn't Answered
The first Senate hearing on the NCLB rewrite focused on testing and accountability. Discussion at and around the hearing has centered on questions of the Big Standardized Test. How many tests should be given? How often should the test be given? Should it be a federal test or a state test? Who should decide where to draw the pass-fail line on the test?
These are all swell questions to ask, but they are absolutely pointless until we answer a more fundamental question:
What do the tests actually tell us?
Folks keep saying things such as "We need to continue testing because we must have accountability." But that statement assumes that tests actually provide accountability. And that is a gargantuan assumption, leading Congress to contemplate building a five-story grand gothic mansion of accountability on top of a foundation of testing sand in a high stakes swamp.
The question did not go completely unaddressed. Dr. Martin West led off with some observations about the validity of the test. And then he trotted out Chetty, Friedman and Rockoff (2014), a study that piles tautology (we define good teachers as those with good test results, and then we discover that those good teachers get good test results; also, red paint is red) on top of correlation dressed up as causation. If you like your Chetty debunking with a more scholarly flair, try this. If you like it with Phineas and Ferb references, try this.
Then West piled up more correlation dressed as causation. Citing Deming et al (2014), West takes a stand for the predictive power of testing, and in doing so, he himself makes clear why his support of testing validity is actually no support at all.
Predictive power is not causation. Let's take a stroll through a business district and meet some random folks. I'll bet you that the quality of their shoes is predictive of the quality of their cars and their homes. Expensive shoes predict a Lexus parked in front of a five story grand gothic mansion.
It does not follow, however, that if I buy really nice shoes for all the homeless people in that part of town, they will suddenly have expensive homes and fancy cars.
And here's how test-based accountability works. People off in some capital tell local authorities, "We want to end homelessness. So we expect pictures of all your homeless wearing nice shoes. And if the number doesn't go up, we will dock your pay, kill your dog, and take away your dessert for a year." The local authorities will get those pictures (even if they have to use fake shoes or the same shoes on multiple feet), send off the snapshots to the capital, the capital folks will congratulate themselves for ending homelessness, and the homeless people will still be sleeping under a bridge and not in a fancy gothic mansion.
Another version of the same central question that was neither asked nor answered at the hearing would be:
What would give us the best, most complete, most accurate sense of how well educated a young person might be? How many people would seriously answer, "Oh, given the need to measure the full range of a person's skills, knowledge and aptitudes, I would absolutely depend on a bubble test covering just two thin slivers out of the whole pizza of that person"? When you think of a well-educated person, do you automatically think of a person who does really well on standardized tests of certain math and reading skills?
Oddly enough, it was a nominally pro-test witness whose testimony underlined that. Paul Leather, of the New Hampshire Department of Education, testified at some length about the Granite State's extensive work in developing something more like a whole-child, full-range assessment-- something that is robust and flexible and individual and authentic and basically everything that a standardized mass-produced test is not.
Congress put the cart not only before the horse, but before the wheels came back from the blacksmith shop. What they need to do is bring in the testing whizzes of Pearson/PARCC/SBA/etc and ask them to show how the Big Standardized Test measures anything other than a student's ability to take the Big Standardized Test. And I have not even addressed the question of whether or not the Big Standardized Test accurately measures even the slim slice of skills that it claims to assess-- but that question needs to be asked as well. We're missing serious discussions of testing's actual results, like this one. Instead, Congress engaged in a long discussion of how best to clean and press the emperor's new clothes.
There is no point in discussing what testing program best provides accountability if the tests do not actually measure any of the things we want schools to be accountable for. You can build your big gothic mansion in the swamp, but it will be sad, scary and dangerous for any people who have to live there.
Originally posted at View from the Cheap Seats
Tuesday, February 10, 2015
Sorting the Tests
Since the beginnings of the current wave of test-driven accountability, reformsters have been excited about stack ranking-- the process of sorting out items from the very best to the very worst (and then taking a chainsaw to the very worst).
This has been one of the major supporting points for continued large-scale standardized testing-- if we didn't have test results, how would we compare students to other students, teachers to other teachers, schools to other schools? The devotion to sorting has been foundational, rarely explained but generally presented as an article of faith, a self-evident value-- well, of course, we want to compare and sort schools and teachers and students!
But you know what we still aren't sorting?
The Big Standardized Tests.
Since last summer the rhetoric to pre-empt the assault on testing has focused on "unnecessary" or "redundant" or even "bad" tests, but we have done nothing to find these tests.
Where is our stack ranking for the tests?
We have two major BSTs-- the PARCC and the SBA. In order to better know how my child is doing (isn't that one of our repeated reasons for testing), I'd like to know which one of these is a better test. There are other state-level BSTs that we're flinging at our students willy-nilly. Which one of these is the best? Which one is the worst?
I mean, we've worked tirelessly to sort and rank teachers in our efforts to root out the bad ones, because apparently "everybody" knows some teachers are bad. Well, apparently everybody knows some tests are bad, so why aren't we tracking them down, sorting them out, and publishing their low test ratings in the local paper?
We've argued relentlessly that I need to be able to compare my student's reading ability with the reading ability of Chris McNoname in Iowa, so why can't I compare the tests that each one is taking?
I realize that coming up with a metric would be really hard, but so what? We use VAM to sort out teachers and it has been debunked by everyone except people who work for the USED. I think we've established that the sorting instrument doesn't have to be good or even valid-- it just has to generate some sort of rating.
So let's get on this. Let's come up with a stack-ranking method for sorting out the SBA and the PARCC and the Keystones and the Indiana Test of Essential Student Swellness and whatever else is out there. If we're going to rate every student and teacher and school, why would we not also rate the raters? And then once we've got the tests rated, we can throw out the bottom ten percent of them. We can offer a "merit bonus" to the company that made the best one (and peanuts to everyone else) because that will reward their excellence and encourage them to do a good job! And for the bottom twenty-five percent of the bad tests, we can call in turnaround experts to take over the company.
In fact-- why not test choice? If my student wants to take the PARCC instead of the ITESS because the PARCC is rated higher, why shouldn't my student be able to do that? And if I don't like any of them, why shouldn't I be able to create a charter test of my own in order to look out for my child's best interests? We can give every student a little testing voucher and let the money follow them to whatever test they would prefer to take from whatever vendors pop up.
Let's get on this quickly, because I think I've just figured out how to make a few million dollars, and it's going to take at least a weekend to whip up my charter test company product. Let the sorting and comparing and ranking begin!
This has been one of the major supporting points for continued large-scale standardized testing-- if we didn't have test results, how would we compare students to other students, teachers to other teachers, schools to other schools. The devotion to sorting has been foundational, rarely explained but generally presented as an article of faith, a self-evident value-- well, of course, we want to compare and sort schools and teachers and students!
But you know what we still aren't sorting?
The Big Standardized Tests.
Since last summer the rhetoric to pre-empt the assault on testing has focused on "unnecessary" or "redundant" or even "bad" tests, but we have done nothing to find these tests.
Where is our stack ranking for the tests?
We have two major BSTs-- the PARCC and the SBA. In order to better know how my child is doing (isn't that one of our repeated reasons for testing), I'd like to know which one of these is a better test. There are other state-level BSTs that we're flinging at our students willy-nilly. Which one of these is the best? Which one is the worst?
I mean, we've worked tirelessly to sort and rank teachers in our efforts to root out the bed ones, because apparently "everybody" knows some teachers are bad. Well, apparently everybody knows some tests are bad, so why aren't we tracking them down, sorting them out, and publishing their low test ratings in the local paper?
We've argued relentlessly that I need to be able to compare my student's reading ability with the reading ability of Chris McNoname in Iowa, so why can't I compare the tests that each one is taking?
I realize that coming up with a metric would be really hard, but so what? We use VAM to sort out teachers and it has been debunked by everyone except people who work for the USED. I think we've established that the sorting instrument doesn't have to be good or even valid-- it just has to generate some sort of rating.
So let's get on this. Let's come up with a stack-ranking method for sorting out the SBA and the PARCC and the Keystones and the Indiana Test of Essential Student Swellness and whatever else is out there. If we're going to rate every student and teacher and school, why would we not also rate the raters? And then once we've got the tests rated, we can throw out the bottom ten percent of them. We can offer a "merit bonus" to the company that made the best one (and peanuts to everyone else) because that will reward their excellence and encourage them to do a good job! And for the bottom twenty-five percent of the bad tests, we can call in turnaround experts to take over the company.
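Just to show how little machinery this would take, here is a minimal, tongue-in-cheek Python sketch of the stack-ranking scheme described above. The ratings are invented (no actual metric exists), "Test X" is a made-up stand-in for "whatever else is out there," and the cutoffs simply mirror the proposal: a merit bonus for the top test, peanuts for the middle, turnaround experts for the bottom quarter, and the bottom ten percent thrown out.

```python
import math

# A tongue-in-cheek sketch of the stack-ranking scheme described above.
# The ratings are made up and "Test X" is a placeholder; the cutoffs
# simply mirror the satirical proposal in the post.

def stack_rank_tests(ratings):
    """Sort tests from best to worst by their (entirely arbitrary) rating."""
    return sorted(ratings.items(), key=lambda item: item[1], reverse=True)

def apply_sorting_policy(ranked):
    """Merit bonus for the top test, peanuts for the middle of the pack,
    turnaround experts for the bottom 25%, and the bottom 10% thrown out."""
    n = len(ranked)
    bottom_10 = math.ceil(n * 0.10)
    bottom_25 = math.ceil(n * 0.25)
    verdicts = {}
    for i, (name, _score) in enumerate(ranked):
        if i >= n - bottom_10:
            verdicts[name] = "thrown out"
        elif i >= n - bottom_25:
            verdicts[name] = "turnaround experts take over the company"
        elif i == 0:
            verdicts[name] = "merit bonus"
        else:
            verdicts[name] = "peanuts"
    return verdicts

# Hypothetical ratings -- the numbers mean nothing, which is rather the point.
ratings = {"PARCC": 71, "SBA": 68, "Keystones": 55, "ITESS": 42, "Test X": 39}
for test, verdict in apply_sorting_policy(stack_rank_tests(ratings)).items():
    print(f"{test}: {verdict}")
```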
In fact-- why not test choice? If my student wants to take the PARCC instead of the ITESS because the PARCC is rated higher, why shouldn't my student be able to do that? And if I don't like any of them, why shouldn't I be able to create a charter test of my own in order to look out for my child's best interests? We can give every student a little testing voucher and let the money follow them to whatever test they would prefer to take from whatever vendors pop up.
Let's get on this quickly, because I think I've just figured out how to make a few million dollars, and it's going to take at least a weekend to whip up my charter test company product. Let the sorting and comparing and ranking begin!
Monday, February 9, 2015
6 Testing Talking Points
Anthony Cody scored a great little handout last week that is a literal guide to how reformsters want to talk about testing. The handout-- "How To Talk About Testing"-- covers six specific testing arguments and how reformsters should respond to them, broken down into finding common ground, pivoting to a higher emotional place, do's, don'ts, rabbit holes to avoid, and handy approaches for both parents and business folks. Many of these talking points will seem familiar.
But hey-- just because something is a talking point doesn't mean that it's untrue. Let's take a look:
Argument: There's too much testing
Advice: You can't win this one because people mostly think it's true (similar to the way that most people think the earth revolves around the sun). But you can pivot back with the idea that newer, better Common Core tests will fix that, somehow, and also "parents want to know how their kids are doing and they need a [sic] objective measuring stick."
We've been waiting for these newer, better tests for at least a decade. They haven't arrived and they never will. And aren't parents yet tired of the assertion that they are too dopey to know how their children are doing unless a standardized test tells them? How can this still be a viable talking point? Also, objective measuring sticks are great-- unless you're trying to weigh something or measure the density of a liquid or check photon direction in a quantum physics experiment. Tests may well be measuring sticks-- but that doesn't mean they're the right tool for the job.
Do tell parents that the new tests will make things better, but don't overpromise (because the new tests won't make a damn bit of difference). Do tell parents to talk to the teacher, but don't encourage them to get all activisty because that would cramp our style-- I mean, because that will probably scare them, poor dears.
And tell business guys that we're getting lots of accountability bang for our buck. Because who cares if it's really doing the job as long as it's cheap?
Argument: We can't treat schools like businesses
Advice: People don't want to think of schools as cutthroat, but tell them we need to know if the school is getting results. "Parents have a right to know if their kids are getting the best education they can." Then, I guess, cross your fingers and hope that parents don't ask, "So what does this big standardized test have to do with knowing if my child is getting a great education?"
People want results and like accountability (in theory). "Do normalize the practice of measuring performance." Just don't let anybody ask how exactly a standardized test measures the performance of a whole school. But do emphasize how super-important math and reading are, just in case anyone wants to ask how the Big Standardized Test can possibly measure the performance of every other part of the school.
At the same time, try not to make this about the teachers and how their evaluation system is completely out of whack thanks to the completely-debunked idea of VAM (this guide does not mention value-added). Yes, it measures teacher performance, but gosh, we count classroom observation, too. "First and foremost the tests were created to help parents and teachers know if a student is reading and doing math at the level they should."
Yikes-- so many questions should come up in response to this. Like, we've now been told multiple reasons for the test to be given-- is it possible to design a single test that works for all those purposes? Or, who decides what level the students "should" be achieving?
The writer wants you to know that the facts are on your side, because there's a 2012 study that shows a link between seven-year-olds' reading and math ability and their social class thirty-five years later. From the University of Edinburgh. One more useful talking point to use on people who don't understand the difference between correlation and causation.
Argument: It's just more teaching to the test
Advice: A hailstorm of non sequiturs. You should agree with them that teaching to the test is a waste of time, but the new tests are an improvement and finally provide parents with valuable information.
Okay, so not just non sequiturs, but also Things That Aren't True. The writer wants you to argue essentially that new generation tests are close to authentic assessment (though we don't use those words), which is baloney. We also recycle the old line that these tests don't just require students to fill in the blanks with facts they memorized last week. Which is great, I guess, in the same way that tests no longer require students to dip their pens in inkwells.
As always, the test prep counter-argument depends on misrepresenting what test prep means. Standardized tests will always require test prep, because any assessment at all is a measure of tasks that are just like the assessment. Writing an essay is an assessment of how well a student can write an essay. Shooting foul shots is a good assessment of how well a player can shoot foul shots. Answering standardized test questions is an assessment of how well a student answers standardized test questions, and so the best preparation for the test will always be learning to answer similar sorts of test questions under similar test-like conditions, aka test prep.
The business-specific talking point is actually dead-on correct-- "What gets measured gets done!" And what gets measured with a standardized test is the ability to take a standardized test, and therefore teachers and schools are highly motivated to teach students how to take a standardized test. (One might also ask what implications WGMGD has for all the subjects that aren't math and reading.)
The suggested teacher-specific message is hilarious-- "The new tests free teachers to do what they love: create a classroom environment that's about real learning, teaching kids how to get to the answer, not just memorize it." And then after school the children can pedal home on their penny-farthings and stop for strawberry phosphates.
Argument: One size doesn't fit all
This is really the first time the sheet resorts to a straw man, saying of test opponents that "they want parents to feel that their kids are too unique for testing." Nope (nor can one be "too unique" or "more unique" or "somewhat pregnant"). I don't avoid one-size-fits-all hats because I think I'm too special; I just know that they won't fit.
But the advice here is that parents need to know how their kids are doing at reading and math because all success in life depends on reading and math. And they double down on this as well:
There are many different kinds of dreams and aspirations, with one way to get there: reading and math... There isn't much you can do without reading and math... Without solid reading and math skills, you're stuck
And, man-- I am a professional English teacher. It is what I have devoted my life to. But I'll be damned if I would stand in front of any of my classes, no matter how low in ability, and say to them, "You guys read badly, and you are all going to be total failures in life because you are getting a lousy grade in my class." I mean-- I believe with all my heart that reading and writing are hugely important skills, but even I would not suggest that nobody can amount to anything in life without them.
Then there's this:
It's not about standardization. Quite the opposite. It's about providing teachers with another tool, getting them the information they need so they can adapt their teaching and get your kids what they need to reach their full potential.
So here's yet another alleged purpose for the test, on top of the many others listed so far. This is one magical test, but as a parent, I would ask just one question-- When will the test be given, and when will my child's teacher get back the results that will inform these adaptations? As a teacher, I might ask how I'll get test results that will both tell me what I have yet to do this year AND how well I did this year. From the same test! Magical, I'm telling you!
Argument: A drop in scores is proof
I didn't think the drop in test scores was being used as proof of anything by defenders of public ed. We know why there was a drop-- because cut scores were set to ensure it.
Advice: present lower test scores as proof of the awesomeness of these new, improved tests. But hey-- look at this:
We expected the drop in scores. Any time you change a test, scores drop. We know that. Anything that's new has a learning curve.
But wait. I thought these new improved tests didn't require any sort of test prep, that they were such authentic measures of what students learn in class that students would just transfer that learning seamlessly to the new tests. Didn't you say that? Because it sounds now like students need a few years to get the right kind of test preparation to do well on these.
Interesting don'ts on this one-- don't trot out the need to have internationally competitive standards to save the US economy with college and career ready grads.
Argument: Testing is bad. Period.
Advice: Yes, tests aren't fun. They're not supposed to be. But tests are a part of life. "They let us know we're ready to move on." So, add one more item to the Big List of Things The Test Can Do.
Number one thing to do? Normalize testing. Tests are like annual checkups with measures for height and weight, which I guess is true if all the short kids are flunked and told they are going to fail at life and then the doctors with the most short kids get paid less by the insurance company and given lower ratings. In that case then, yes, testing is just like a checkup.
The writer wants you to sell the value of information, not the gritty character-building experience of testing. It's a good stance because it assumes the sale-- it assumes that the Big Standardized Test is actually collecting real information that means what it says it means, which is a huge assumption with little evidence to back it up.
Look, testing is not universal. Remember when you had to pass your pre-marital spousing test before you could get married, or the pre-parenting test before you could have kids? No, of course not. Nor do CEOs get the job by taking a standardized test that all CEOs must take before they can be hired.
Where testing does occur, it occurs because it has proven to have value and utility. Medical tests are selected because they are deemed appropriate for the specific situation by medical experts, who also have reason to believe that the tests deliver useful information.
Of all six points, this one is the most genius because it completely skips past the real issue. There are arguments to be made against all testing (Alfie Kohn makes the best ones), but in a world where tests are unlikely to be eradicated, the most important question is, "Is this test any good?" All tests are not created equal. Some are pretty okay. Some are absolute crap. Distinguishing between them is critical.
So there are our six testing talking points. You can peruse the original to find more details-- they're very peppy and have snappy layouts and fonts. They are baloney, but it's baloney in a pretty wrapper in small, easy-to-eat servings. But still baloney.