Brad Roae is running for re-election for the PA House of Representatives for District 6. First elected in 2006, Roae has had some interesting things to say about education in Pennsylvania.
Hitler blamed the Jews for everything that was wrong with the world and school boards blame charter schools.
This was on Facebook, in response to a question about the currently-off-the-table HB 530, a bill that was supposed to provide big fat early Christmas presents to the charter school industry in PA.
Roae's district is just up the road from me and just down the road from Erie, where the schools have made some headlines with their economic issues, to the point that their board was seriously considering closing all of its high schools. Erie is one of several school districts that highlight the economic troubles of school districts in Pennsylvania. It's a complex mess, but the basic problems boil down to this.
First, Pennsylvania ranks 45th in the country for level of state support for local districts. That means the bulk of school district funding comes from local taxpayers, and that means that as cities like Erie with a previously-industrial tax base have lost those big employers, local revenue has gone into freefall, opening up some of the largest gaps between rich and poor districts in the country.
Second, Pennsylvania's legislature (the largest full-time legislature in the country, one of the most highly paid, and one of the most impressively gerrymandered) decided in the early 2000s that they would let local districts skimp on payments to the pension fund because, hey, those investments will grow the fund like wildfire anyway. Then Wall Street tanked the economy, and now local districts are looking at spectacularly ballooning pension payments, in some cases equal to as much as one third of their total budget.
Oh, and a side note-- the legislature also periodically goes into spectacular failure mode about the budget. Back in 2015 districts across the state had to borrow huge chunks of money just to function, because Harrisburg couldn't get their job done.
Third, Pennsylvania is home to what our own Auditor General calls the worst charter laws in the country. There are many reasons for that judgment, but for local districts the most difficult part is that charter school students take 100% of their per-capita cost with them.
So Erie City Schools, despite some emergency funding from the state, will run up as much as a $10 million deficit this year, with a full quarter of their spending going to charter and pension costs. Meanwhile, the legislature is trying to phase in a new funding formula (or, one might say, its first actual funding formula). This is going to be a painful process because, to even things out, it will have to involve giving some cities a far bigger injection of state tax dollars than richer communities will get. Politicians face the choice of either explaining this process and making a case for fairness and justice, or they can just play to the crowd and decry Harrisburg "stealing our tax dollars to send to Those People." Place your bets now on which way that wind will blow.
Oh, and that formula is supposed to get straightened out over the next twenty years!!
Meanwhile, guys like Roae want to blame teachers and school districts. His line: you can't give teachers raises and benefits, and if Erie (and school districts like it) wants state aid, it should cut costs and stop blaming charter schools. For his part, Roae has been lauded by the PA cyber industry as a "champion of school choice."
Roae, who graduated from Gannon in 1990 with a business degree and worked in the insurance biz until starting his legislative career, ought to know better.
When hospitals throughout Northwest PA wanted to cut costs, they didn't open more hospitals. If you are having trouble meeting your household budget, you do not open a second home and move part of your family into it.
Education seems to be the only field in which people suggest that when you don't have enough money to fund one facility, you should open more facilities. Charters are in fact a huge drain on public schools in the state. If my district serves 1,000 students and 100 leave for a charter school, my operating costs do not decrease by 10% even if my student population does. In fact, depending on which 100 students leave, my costs may not decrease at all. On top of that, I have to maintain capacity to handle those students because if some or all come back (and many of them do) I have to be able to accommodate them.
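The fixed-versus-variable cost math here can be sketched in a few lines. The dollar figures below are invented for illustration (they are not Erie's actual numbers); the structural point is that the per-pupil payment is computed from total cost, but only the variable slice of that cost actually leaves with the students.

```python
# Toy model: why per-pupil charter payments outpace a district's
# actual savings. All dollar figures are hypothetical.

def district_costs(students, fixed=5_000_000, variable_per_student=5_000):
    """Total cost = fixed costs (buildings, admin, debt service)
    plus variable costs that actually scale with enrollment."""
    return fixed + variable_per_student * students

before = district_costs(1000)
after = district_costs(900)        # 100 students leave for a charter

per_pupil = before / 1000          # what each charter student "takes"
charter_bill = 100 * per_pupil     # payment that follows the students
savings = before - after           # what the district can actually cut

print(f"per-pupil cost:        ${per_pupil:,.0f}")
print(f"charter payment owed:  ${charter_bill:,.0f}")
print(f"costs actually saved:  ${savings:,.0f}")
```

With these made-up numbers the district writes a $1,000,000 check but can only trim $500,000 of cost; the gap has to come out of programs for the students who stayed.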
And while you can argue that losing students to charters may allow me to reduce the number of teachers in my school, in effect "moving" those jobs to the charter, the charter will still have to duplicate administrative costs.
Roae need only look at the schools all around his district to see schools that are cutting programs, closing buildings, jamming more students into classrooms, and offering the taxpayers of the district less in their public school system. Charter school costs aren't responsible for all of that, but they are certainly responsible for a lot of it, and that is doubly frustrating for school boards who, unlike Hitler, feel some responsibility for watching over the tax dollars that they were elected to spend wisely. And yet, unlike Hitler, they have no say at all over how those dollars are spent by the charters and, under Pennsylvania's lousy charter laws, nobody really has oversight once those public tax dollars go into private charter operator hands.
It's like board members get ten dollars from a taxpayer for lunch:
Taxpayer: Couldn't you get us a better lunch with the money we gave you?
Board member: Well, the state said I had to give three dollars to that guy.
Taxpayer: Well, did he at least buy lunch with it?
Board member: I have no idea.
We could discuss the widespread fraud and scandal of Pennsylvania charter schools (if you hate the idea of your NWPA tax dollars going to Philly, you'll really hate what happens when they go to Philly charters), but that's really beside the point. If PA legislators think charters are such a good idea, they could come up with a funding system that didn't bleed the public system dry in order to get charters running.
Meanwhile, voters in District 6 might try voting against someone who thinks that after you rob Peter to pay Paul, you yell at Peter for not being thrifty enough to withstand the theft, and then compare him to Hitler.
Sunday, October 30, 2016
ICYMI: Catching Up with Reading (10/30)
I've been home for about a week and I am just about back up to speed. There's a lot to read this time around. As always, I encourage you to share wildly whatever you like here.
What Are the Main Reasons Teachers Call It Quits
NPR takes a look at why some folks are getting out of the teaching biz. No surprises here, but nice to see NPR catching on.
LA Unified Takes a Hard Look at Charter Schools
Charters have taken it on the chin in LA, and there's a definite shift in attitude there.
A Public Education
Friend of this blog Phyllis Bush ran an op-ed this week that gets to the heart of what does and does not make a public education.
State-Run Kids: Suleika's Story
Here's a moving story of what the charter mess in New Jersey looks like to the families and children of the city.
Black Children Deserve the Stability That Neighborhood Schools Offer
Andre Perry absolutely nails it in discussing one of the worst effects of charter schools-- the loss of a stabilizing institution for a community.
What I Miss
Friend of this blog Mary Holden (it's nice to have all these friends) has been writing an honest and personal account of her departure from the classroom. Here's her look back at what she misses.
King of the Castle
Jennifer Berkshire (Edushyster) takes a look at the infamous Massachusetts charter that makes its teachers pay to leave.
Wall Street Firms Make Money from Pension Funds, Spend It On Charters
Actual reporter (you know-- the old-fashioned type who actually goes out and finds things out) David Sirota reports the maddening but predictable news that public teacher pension funds are helping fund the attack on public education.
The Vivisection of Literature
Another examination of how the study of literature has been beaten up in the rush to High Standards.
The Absurd Defense of Standards Post-Common Core
Jane Robbins takes a quick look at how some folks are in a Kentucky spitting match over the Core.
NAACP President: Why We Should Pause the Expansion of Charter Schools
Since they made the charteristas all sad, the NAACP has had lots of folks trying to tell them what happened, why it happened, and what they should really do. Here's the president of the NAACP to explain what they did and why they did it.
Saturday, October 29, 2016
How (Not) To Grade Schools
Bellwether Education Partners is a right-tilted thinky tank from the same basic neighborhood as the Fordham Institute. Chad Aldeman is one of their big guns, and this month he's out with Grading Schools: How States Should Define “School Quality” Under the Every Student Succeeds Act. It's a pretty thing with thirty-two pages of thoughts about how to implement school accountability under ESSA, and I've read the whole thing so that you don't have to. Let's take a look under the hood.
Introduction
Aldeman offers a few thoughts to start that give a hint about where he might be headed. School evaluation has been too rigid and rule-bound. We've focused too much on student test scores instead of student growth. But the window is now open for a "new conversation," which kind of presumes that there was an old conversation. I suppose for people in the thinky tank world it might seem as if there were one, but from out here in the actual education field, school accountability has been imposed from the top down with deliberate efforts to silence any attempts at conversation.
In other words, the news that school accountability has been too rigid and rule-bound is only news to people who have steadfastly ignored the voices of actual teachers, who called that one from the very first moment that No Child Left Behind raised its rigid, inflexible, and not-very-smart head.
So to have this "new conversation," policy folks should brace themselves for a certain amount of "Told you so" or "No kidding" or even "No shit, Sherlock." Or alternately, as this new conversation is probably going to resemble the old one insofar as actual teacher voices will be once again excluded, something along the lines of, "Remember what happened the last time you ignored us?"
What Is Accountability and Why Does It Matter?
Aldeman acknowledges that accountability covers a wide range of functions, from transparency for the general public on one end to rewards and punishments by government on the other end. He posits that somewhere in the middle "accountability can act as a tool for improvement through goal-setting, performance benchmarking, and re-evaluation." And he also notes that accountability measures are state government's way of signaling what it values.
So accountability can be very many things. Who is it for?
Well, teachers and school leaders, who are supposed to be able to use the data to do a better job. And parents, too. And also the political leaders who are responsible for the oversight of public tax dollars. And on top of that, ESSA requires states to grade schools in order to stack rank and target some for some manner of fixing, including targeting the bottom five percent.
Aldeman barrels on, pretending that meeting that last set of ESSA-mandated stack-ranking, school-grading requirements will meet all the various versions of accountability that he has listed. He suggests in passing that we're really talking about different degrees of transparency for different groups of accountability viewers, but that's not really true either.
Neither Aldeman nor, for that matter, the feds have seriously or realistically addressed the problems that come when you try to create an instrument that measures all things for all audiences. This is bananas, and it's why the entire accountability system continues to be built on a foundation of sand and silly putty. The instrument that tells a parent how their child is doing is not the same as the instrument that tells a teacher how to tweak instruction, and neither is the same as the instrument that tells the state and federal government if the school is doing a good job, and none of those are the same as an instrument used to stack rank all the schools in the state (and, it should also be noted, none of those functions are best done by a Big Standardized Test, and yet policymakers seem unable to let go of the assumption that the BS Tests are good for anything).
It's like weighing the entrees at a restaurant as a way of determining customer satisfaction, chef quality and efficiency, how well the restaurant is managed, compliance with health code regulations, reviews for the AAA guide, and the stability of the building in which the restaurant is housed. It's simply nuts.
Aldeman cites assorted research that is all based on the assumption that narrow, poorly written standardized math and reading tests are actually measuring something useful. They are not. Virtually all of the data generated by these tests is junk, and as their use becomes more widespread and students become more weary of them, the data becomes junkier and junkier.
Bottom line-- real accountability requires a wide range of instruments for a wide range of audiences, and we have not remotely solved that challenge. Not, let me note, that it isn't a challenge worth solving. But as long as we base the whole system on the BS Tests, we will not be remotely in the right neighborhood.
How Should States Select Accountability Measures?
Again, Aldeman is working from some bad assumptions about what the system is for. Can you spot the key word in this sentence?
The trick, then, is to design accountability systems in which schools are competing on measures that truly matter
A competition system is not a measuring system. If I tell you that Chris is the tallest kid in class and Pat is the shortest, you still have no idea of Chris's or Pat's actual height.
Aldeman gets his next point right-- an accountability system should be simple, clear and fair. Well, partly right. His idea of "fair" is that the system only measures things that schools actually have control over. So he's skipped one other key attribute-- the accountability system needs to be accurate and measure what it actually says it measures. So, for instance, we should stop saying "student achievement" when we actually mean "student score on a single narrow standardized math and reading test that has never really passed tests for validity and reliability."
Aldeman notes the four required elements per ESSA:
1) "Achievement rates" aka "test scores."
2) Some other "valid and reliable" academic indicator. The word "other" assumes facts not in evidence.
3) Progress in achieving English language proficiency.
4) Some other indicator of school quality or success.
Aldeman offers a chart in which some possible elements are judged against qualities like simplicity, fairness, disaggregatability, and giving guidance to the school. So measuring grit or other personal qualities is iffy because both measuring and teaching them are iffy. Teacher and student surveys get a thumbs up for measuring stuff, but thumbs down for being actionable, though I think a good student or staff survey would provide a school with very specific issues to address.
Aldeman says to avoid redundant measures and reminds us that ESSA doesn't put a maximum limit on measures to be used.
How Can States Design School Ratings Systems That Are Simple, Clear, and Fair?
A fake subheading that simply covers an introduction that says, "And now I will tell you how." It does include a fun sidebar about how K-2 should be included in the accountability system. Aldeman notes that leaving the littles out previously was due to things like the unsolved challenge of how to assess them; he does not offer any new insights about that issue that have turned up since NCLB days. In fact, subjecting the littles to any kind of formal or standardized assessment is a truly, deeply indefensible policy notion, and serves as nothing more than a clear-cut example of putting the desires of policy-makers and data-grubbers over the needs of small children.
Incorporating Student Achievement
Of course, by "student achievement," we just mean "test scores." Aldeman recommends we start out with a simple performance scale index for points. He suggests five performance levels, with emphasis on proficiency because "proficiency is, after all, a benchmark for future success in college and careers." Which-- no, no it's not. There isn't an iota of data anywhere to connect a proficiency level on the BS Tests with college and career success, particularly because the proficiency rating is a normed ranking, so it moves every year depending on the mass of scores and the cut scores set annually by state testocrats.
So we're talking about using the test scores, which are junk, after they have been run through a normed scale, which adds more junk.
Using Growth as the "Other" Academic Indicator
Aldeman pays tribute to the "growth mindset" as a worthy stance for schools, though we are once again talking only about growth as it applies to standardized test scores. If the student grew in some other way, nobody cares.
The problem with coming up with a measure of student growth is, of course, that nobody has successfully done it yet. Aldeman mentions several models.
* Without using the words "value-added," Aldeman nods to the model that uses obtuse, opaque, and unproven mumbo-jumbo to make the claim that student performance can be statistically stripped from other characteristics. Aldeman suggests this is disqualified because it is neither simple nor understandable; he might also mention that it is baloney that has been debunked by all manner of authorities.
* Aldeman mentions the student percentiles model, a stack-ranking competitive model that compares a student's test score to the score of other students who had a similar score last year. Like all such normed models, this one involves goal posts that move every year, and like all percentile-based models, it guarantees the exact same distribution year after year. No amount of school quality will raise all students to the top 25%.
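The moving-goal-posts point is easy to demonstrate. Here is a minimal sketch (with invented, distinct scores) of percentile-style ranking: even if every student improves dramatically, the ranks cannot budge, because they only measure students against each other.

```python
# Sketch: percentile ranks are fixed by construction.
# Scores are invented for illustration and assumed distinct.

def percentile_ranks(scores):
    """Rank each score against its own cohort, on a 0-100 scale."""
    ordered = sorted(scores)
    return [100 * ordered.index(s) / (len(scores) - 1) for s in scores]

modest = [400, 450, 500, 550, 600, 650, 700, 750]
# Suppose a miracle school raises every single student by 200 points:
miracle = [s + 200 for s in modest]

print(percentile_ranks(modest))
print(percentile_ranks(miracle))   # identical ranks -- nobody "moved"
```

No matter what the school does, exactly one quarter of these students will sit in the top 25%, which is the whole objection.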
* Aldeman favors a transitional matrix, judging schools on how many students move from one group to another (say, below basic to basic). This is also a bad idea. Aldeman has elsewhere shown sensitivity to the unintended consequences of some of these policy choices, so I'm not sure how he misses the obvious implications here. A school's best strategy will be to invest its energy in students who are near a threshold and not those for whom there's no real hope of enough improvement.
Creating an Overall Index and Incorporating Subgroup Results
Aldeman wants to use the two indicators we've got so far and average them for an overall index, and this is the score by which we'll "flag" the bottom 5%. These indexes would also be computed for subgroups so that schools can also be flagged for failing to close their achievement gaps.
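The index-and-flag mechanics can be sketched in a few lines (school names and indicator scores below are made up). The thing to notice is that because the bottom 5% is defined by rank, some schools get flagged no matter how well every school in the state is actually doing.

```python
# Sketch of Aldeman's proposed mechanics: average two indicator
# scores per school, then flag the bottom 5% by rank.
# All schools and scores here are hypothetical.

def overall_index(achievement, growth):
    """Simple average of the two required indicators."""
    return (achievement + growth) / 2

# Forty imaginary schools, all scoring respectably:
schools = {f"School {i}": overall_index(50 + i, 60 + i) for i in range(40)}

# The cutoff is whatever score the bottom 5% of the ranking happens
# to land on -- it is relative, not an absolute quality bar.
cutoff = sorted(schools.values())[max(1, len(schools) // 20) - 1]
flagged = [name for name, idx in schools.items() if idx <= cutoff]

print(flagged)   # someone is always in the bottom 5%
```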
To be clear, this approach assumes that identifying schools for improvement is an important lever at the state’s disposal. That’s intentional, because there are positive effects associated with the mere act of notifying schools that they need to improve. That’s especially true for accountability systems bearing consequences for schools, but it’s even true in systems relying purely on information and transparency.
In other words, threats work. At least, they work on raising test scores (and he's got some research from reformster research rock star Eric Hanushek to back it up). This is a deeply irresponsible policy idea, ignoring completely the question of what schools give up and get rid of in order to raise their test scores. Cutting recess, phys ed, art, music, etc. In my own district I have seen schools strip student schedules so that middle school students with low test scores spent their entire day in English and math class, with no history, art, science or other non-tested subjects.
This is the test-centered school at its worst. This is a lousy idea.
Incorporating Other Measures of School Success Into Final School Ratings
Here Aldeman brings out the English model of school inspections, in which trained and experienced educators visit the school for an extended inspection, both detailed and holistic, of how the school works, how well it ticks, how well it serves students, and how well it matches the notion of what a good school should be.
This is a good idea.
Though I can imagine that for schools that have been "flagged" because of test scores, the inspection visit might be a bit harrowing.
I would offer one editing suggestion to Aldeman for his system. Keep the school inspection system and get rid of everything else.
Yes, yes, ESSA has kept us beholden to the BS Testing system. But any sensible, realistic, useful accountability system is going to shrink the use of the BS Test down to the absolute minimum the feds will let the state get away with. Making the test scores the foundation of the rest of the accountability is the absolute wrong way to go.
Conclusion
Aldeman notes that ESSA somehow focuses less attention on punishing "failing" schools than on actually helping them, which, maybe, depending on how you read it. It would be worth it for the feds and states to back away from that, since they have shown absolutely no aptitude for turning around failing schools.
There is one other huge hole in Aldeman's plan, and that is the space where we should find the voice of the community in which the school is located. He has dodged one of the big accountability questions, which is this: if the community in which a school is located is happy with their school, exactly what reason is there for state and federal bureaucrats to get involved? I remain puzzled that right-leaning policy folks are so persistently uninterested in local control of schools.
Introduction
Aldeman offers a few thoughts to start that give a hint about where he might be headed. School evaluation has been too rigid and rule-bound. We've focused too much on student test scores instead of student growth. But the window is now open for a "new conversation," which kind of presumes that there was an old conversation, and I suppose for people in the thinky tank world it might seem as if there were a conversation, but from out here the actual education field, school accountability has been imposed from the top down with deliberate efforts to silence any attempts at conversation.
In other words, the news that school accountability has been too rigid and rules-bound is only news to people who have steadfastly ignored the voices of actual teachers, who called that one from the very first moment that No Child Left Behind raised its rigid, inflexible, and not-very-smart head.
So to have this "new conversation," policy folks should brace themselves for a certain amount of "Told you so" or "No kidding" or even "No shit, Sherlock." Or alternately, as this new conversation is probably going to resemble the old one insofar as actual teacher voices will be once again excluded, something along the lines of, "Remember what happened the last time you ignored us?"
What Is Accountability and Why Does It Matter?
Alderman acknowledges that accountability covers a wide range of functions, from transparency for the general public on one end to rewards and punishments by government on the other end. He posits that somewhere in the middle that "accountability can act as a tool for improvement through goal-setting, performance benchmarking, and re-evaluation." And he also notes that accountability measures are state government's way of signalling what it values.
So accountability can be very many things. Who is it for?
Well, teachers and school leaders, who are supposed to be able to use the data to do a better job. And parents, too. And also the political leaders who are responsible for the oversight of public tax dollars. And on top of that, ESSA requires states to grade schools in order to stack rank and target some for some manner of fixing, including targeting the bottom five percent.
Aldfeman barrels on, pretending that meeting that last set of ESSA mandated stack-ranking, school-grading requirements will meet all the various versions of accountability that he has listed. He suggests in passing that we're really talking about different degrees of transparency for different groups of accountability viewers, but that's not really true either.
Neither Aldeman or, for that matter, the feds have seriously or realistically addressed the problems that come when you try to create an instrument that measures all things for all audiences. This is bananas, and it's why the entire accountability system continues to be built on a foundation of sand and silly putty. The instrument that tells a parent how their child is doing is not the same as the instrument that tells a teacher how to tweak instruction, and neither is the same as the instrument that tells the state and federal government if the school is doing a good job, and none of those are the same as an instrument used to stack ran all the schools in the state (and, it should also be noted, none of those functions are best done by a Big Standardized Test, and yet policymakers seem unable to let go of the assumption that the BS Tests are good for anything).
It's like weighing the entrees at a restaurant as a way of determining customer satisfaction, chef quality and efficiency, how well the restaurant is managed, compliance with health code regulations, reviews for the AAA guide, and the stability of the building in which the restaurant is housed. It's simply nuts.
Aldeman cites assorted research that is all based on the assumption that narrow poorly-written standardized math and reading tests are actually measuring something useful. They are not. Virtually all of the data generated by these tests is junk, and as their use becomes more widespread and students become more weary of them, the data becomes junkier and junkier.
Bottom line-- real accountability requires a wide range of instruments for a wide range of audiences, and we have not remotely solved that challenge. Not, let me note, that it isn't a challenge worth solving. But as long as we base the whole system on the BS Tests, we will not be remotely in the right neighborhood.
How Should States Select Accountability Measures
Again, Aldeman is working from some bad assumptions about what the system is for. Can you spot the key word in this sentence?
The trick, then, is to design accountability systems in which schools are competing on measures that truly matter
A competition system is not a measuring system. If I tell you that Chris is the tallest kid in class and Pat is the shortest, you still have no idea of Chris's or Pat's actual height.
Aldeman gets his next point right-- an accountability system should be simple, clear and fair. Well, partly right. His idea of "fair" is that the system only measures things that schools actually have control over. So he's skipped one other key attribute-- the accountability system needs to be accurate and measure what it actually says it measures. So, for instance, we should stop saying "student achievement" when we actually mean "student score on a single narrow standardized math and reading test that has never really passed tests for validity and reliability."
Aldeman notes the four required elements per ESSA:
1) "Achievement rates" aka "test scores."
2) Some other "valid and reliable" academic indicator. The word "other" assumes facts not in evidence.
3) Progress in achieving English language proficiency
4) Some other indicator of school quality or success
Aldeman offers a chart in which some possible elements are judged against qualities like simplicity, fairness, disagregatability, and giving guidance to the school. So measuring grit or other personal qualities is iffy because measuring and teaching it are iffy. Teacher and student surveys get a thumbs up for measuring stuff, but thumbs down for being actionable, though I think a good student or staff survey would provide a school with very specific issues to address.
Aldeman says to avoid redundant measures and reminds us that ESSA doesn't put a maximum limit on measures to be used.
How Can States Design School Ratings Systems That Are Simple, Clear, and Fair?
A fake subheading that simply covers an introduction that says, "And now I will tell you how." It does include a fun sidebar about how K-2 should be included in the accountability system. Aldeman notes that leaving them out previously was because of things like the unsolved challenge of how to assess the littles; he does not offer any new insights about that issue that have turned up since NCLB days, and in fact subjecting the littles to any kind of formal or standardized assessment is a truly, deeply indefensible policy notion, and serves as nothing more than a clear-cut example of putting the desires of policy-makers and data-grubbers over the needs of small children.
Incorporating Student Achievement
Of course, by "student achievement," we just mean "test scores." Aldeman recommends we start out with a simple performance scale index for points. He suggests five performance levels, with emphasis on proficiency because "proficiency is, after all, a benchmark for future success in college and careers." Which-- no, no it's not. There isn't an iota of data anywhere to connect a proficiency level on the BS Tests with college and career success, particularly because the proficiency rating is a normed ranking, so it moves every year depending on the mass of scores and the cut scores set annually by state testocrats.
So we're talking about using the test scores, which are junk, after they have been run through a normed scale, which adds more junk.
Using Growth as the "Other" Academic Indicator
Aldeman pays tribute to the "growth mindset" as a worthy stance for schools, though we are once again talking only about growth as it applies to standardized test scores. If the student grew in some other way, nobody cares.
The problem with coming up with a measure of student growth is, of course, that nobody has successfully done it yet. Aldeman mentions several models.
* Without using the words "value-added," Aldeman nods to the model that uses obtuse, opaque, and unproven mumbo-jumbo to make the claim that student performance can be statistically stripped from other characteristics. Aldeman suggests this is disqualified because it is neither simple nor understandable; he might also mention that it is baloney that has been debunked by all manner of authorities.
* Aldeman mentions the student percentiles model, a stack-ranking competitive model that compares a student's test score to the score of other students who had a similar score last year. Like all such normed models, this one involves goal posts that move every year, and like all percentile-based models, it guarantees the exact same distribution year after year. No amount of school quality will raise all students to the top 25%.
* Aldeman favors a transitional matrix, judging schools on how many students move from one group to another (say, below basic to basic). This is also a bad idea. Aldeman has elsewhere shown sensitivity to the unintended consequences of some of these policy choices, so I'm not sure how he misses the obvious implications here. A school's best strategy will be to invest its energy in students who are near a threshold and not those for whom there's no real hope of enough improvement.
Creating an Overall Index and Incorporating Subgroup Results
Aldeman wants to use the two indicators we've got so far and average them for an overall index, and this is the score by which we'll "flag" the bottom 5%. These indexes would also be computed for subgroups so that schools can also be flagged for failing to close their achievement gaps.
To be clear, this approach assumes that identifying schools for improvement is an important lever at the state’s disposal. That’s intentional, because there are positive effects associated with the mere act of notifying schools that they need to improve. That’s especially true for accountability systems bearing consequences for schools, but it’s even true in systems relying purely on information and transparency.
In other words, threats work. At least, they work on raising test scores (and he's got some research from reformster research rock star Eric Hanushek to back it up). This is a deeply irresponsible policy idea, ignoring completely the question of what schools give up and get rid of in order to raise their test scores. Cutting recess, phys ed, art, music, etc. In my own district I have seen schools strip student schedules so that middle school students with low test scores spent their entire day in English and math class, with no history, art, science or other non-tested subjects.
This is the test-centered school at its worst. This is a lousy idea.
Incorporating Other Measures of School Success Into Final School Ratings
Here Aldeman brings out the English model of school inspections, in which trained and experienced educators visit the school for an extended inspection, both detailed and holistic, of how the school works, how well it ticks, how well it serves students, and how well it matches the notion of what a good school should be.
This is a good idea.
Though I can imagine that for schools that have been "flagged" because of test scores, the inspection visit might be a bit harrowing.
I would offer one editing suggestion to Aldeman for his system. Keep the school inspection system and get rid of everything else.
Yes, yes, ESSA has kept us beholden to the BS Testing system. But any sensible, realistic, useful accountability system is going to shrink the use of the BS Test down to the absolute minimum the feds will let the state get away with. Making the test scores the foundation of the rest of the accountability is the absolute wrong way to go.
Conclusion
Aldeman notes that ESSA somehow focuses less attention on punishing "failing" schools than on actually helping them, which, maybe, depending on how you read it. It would be worth it for the feds and states to back away from that, since they have shown absolutely no aptitude for turning around failing schools.
There is one other huge hole in Aldeman's plan, and that is the space where we should find the voice of the community in which the school is located. He has dodged one of the big accountability questions, which is this-- if the community in which a school is located is happy with their school, exactly what reason is there for the state and federal bureaucrats to get involved? I remain puzzled that the right-leaning policy folks continue to remain uninterested in local control of schools.
Friday, October 28, 2016
GA: Ed Consultant Slams Takeover Amendment
In Georgia, reformsters are pushing hard for Amendment 1, a constitutional amendment that would institute a state-level takeover district, modeled after the pioneering Achievement School District in Tennessee.
Dr. David K. Lerch is a Georgia resident and ran his own educational consulting firm for over three decades. He has worked all over the country, writing grants and overseeing programs (e.g. Pueblo hired him to evaluate their STEM programs).
Lerch has presumably seen plenty in the ed field; he earned his Master's Degree in Public School Administration from the University of Virginia back in 1967. By 1984 he was forming the National Association of Magnet School Development and was touting magnets as a path to desegregation and what we now call educational equity. He was also saying the kinds of things that charter fans would chime in on decades later:
Parents want neighborhood schools until they find a program they support and then they will send a child halfway across the county if the education program is attractive.
Lerch now works for the Juliana Group, Inc, a Savannah-based business that specializes in selling furniture for Montessori schools.
In short, Lerch is not a long-time hard-core supporter for traditional public education. However, when a letter-writer to the Savannah Morning News wrote to warn against Amendment 1, Lerch felt moved to back her up.
I can add some first-hand experiences validating her timely concern about what will happen with the loss of local control of schools and the resulting loss of millions of state and federal revenue.
I served as a consultant to school districts in two states, Louisiana and Michigan, where the governors set up takeover districts identical to that proposed by Gov. Nathan Deal.
The legislative amendment in Louisiana’s constitution (Recovery School District) provided for the same type of state control. While I was working with East Baton Rouge Parish School District, the state took over Istrouma High School, operated it for five years and returned it to the district without students showing any measurable academic success.
Then the school board had to spend over $21 million of local funds to repair the facility.
I also worked with Michigan’s Education Achievement Authority (EAA), which was set up by Governor Snyder as a model of Louisiana’s Recovery School District.
I helped them obtain a $35 million federal grant for teacher training and support in 15 of 60 schools that were scheduled for operation by EAA. After only four years of state control, and massive evidence of EAA’s failure cited by education experts and the federal government monitoring their grant, Governor Snyder decided to shut down the agency and turn EAA’s schools into charter schools.
Those two failures are, of course, on top of the failure of Tennessee's ASD. (see here, here and here). And just in case you have doubts:
Why anyone would duplicate a state controlled takeover district that has proven to be a failure in two states is beyond belief. If you don’t believe the controversy caused by the takeover districts similar to OSD, read the February 2016 document “State Takeover of Low-Performing Schools – A Record of Academic Failure, Financial Mismanagement & Student Harm.”
It is available on the Internet and will shake you to the core about Amendment 1.
He's correct. That report is available on the internet, and it is yet more evidence that state-run takeover turnaround districts have failed-- and not just marginally, but spectacularly and totally-- every time they have been attempted. Georgia has ample evidence and ample warning. Here's hoping that Georgia voters get the message.
CA: Is the Fox Guarding the Henhouse?
The Los Angeles Unified School District put away their charter rubber stamp, and it has touched off a wave of hand wringing and baloney shoveling.
Earlier this month, the LAUSD board pulled the plug on five charters. Three of them were Magnolia schools, part of the Gulen charter web of schools allegedly tied to the reclusive cleric, an exiled political leader from Turkey also allegedly tied to this year's coup attempt. The Magnolia chain has been accused of significant financial shenanigans. The other two were Celerity schools, a chain with a record so spotted that even reformy John Deasy has cast a wary eye in their direction. Oversight and transparency, two important qualities that charter schools generally handle very badly, were cited as issues with the five.
But the unexpected move by the board to hold any charters accountable for anything ever has stirred some folks up.
Here's a charter-friendly look at the "issue" from KPCC, the Southern California Public Radio station, that opens with the exactly wrong question:
Is the Los Angeles Unified School District able to give a fair shake to the charter schools it authorizes and oversees even though the district loses money every time a student leaves to attend a charter?
And follows it up with this mis-statement of the issue:
On Tuesday, board members addressed the underlying concern the California Charter Schools Association and others have raised in the wake of their vote: that letting L.A. Unified review such requests from charter schools — especially in an environment where the district and charters compete for funding — is letting the fox guard the henhouse.
Emphasis mine. Because I wouldn't frame the situation by suggesting that the school board is somehow out to steal money it's not entitled to.
Instead of "letting the fox guard the henhouse," let's say "requiring the elected representatives of the taxpayers to oversee how those taxpayers' dollars are used."
Some members of the board expressed frustration that the California system allows unhappy charters to then ask the county to authorize them. Board member Richard Vladovic noted that the district would save a lot of money if charters were authorized and supervised by the state. He neglected to suggest that the charters be financed by the state as well.
Other board members clearly get the backwardness of the system:
Charter school petitioners “who are turned down will always have a complaint,” said school board vice president George McKenna. “Their opinion will always be that they were wronged, that we weren’t fair, that the burden is on [L.A. Unified] to prove their guilt, not on them to prove their innocence.”
Yes, in California (and several other states), we've got a system in which charters feel entitled to open and stay open, drawing on public tax dollars as long as they're inclined.
There really isn't anything like this. If I want to pave the driveway to my private business, I can't demand state highway tax dollars to finance the driveway and expect to get those dollars unless someone can prove I've done something really terrible. If I want to start my own private security force, I can't bill the Department of Defense and expect them to shoulder the burden of proving why I shouldn't be paid public tax dollars.
But somehow California charters feel entitled to public tax dollars and will hold onto them until someone can pry the pursestrings out of their chartery fingers. This is not the fox guarding the henhouse; this is the fox moving into the henhouse and getting indignant when the farmer shows up with an eviction notice.
Thursday, October 27, 2016
Reflect now. Now!! NOW!!!
One of the fully screwed-up features of modern standardized assessments is the time frame.
A standardized test is the only place where students are told, "Starting from scratch, read this, reflect on it, answer questions about it, and do it all in the next fifteen minutes." We accept the accelerated time line as a normal feature of assessment, but why?
Never ever in a college course was a student handed a book for the first time and told, "Read this book and write an intelligent, thoughtful paper about the text. Hand it in sixty minutes from now."
Reflective, thoughtful, deep, even close reading, the kind of reading that reformsters insist they want, takes time. The text has to be read and considered carefully. Theories about the ideas, the themes, the characters, the author's use of language, the thoughtful consideration of the various elements of the writing-- those all need time to percolate, to simmer, to be mulled by the reader. Those of us who teach literature and reading in high school never have to tell our students, "Hurry up and zip through that faster." Most commonly we have to find ways to encourage our students to slow down, pay attention, really think about what they're reading instead of trying to race to the end.
A reader's relationship with a text, like any good relationship, takes time. It may start with a certain slow grudging acquaintance of necessity, or it may start with an instant spark of attraction, but either way, if the relationship is going to have any depth or quality, time and care will have to be invested. Standardized tests are the "hit it and quit it" of the reading world.
The reasons that we test this way are obvious. Test manufacturers want a short, closed test period so that no test items can "leak," though, of course, some of the best reflection on reading comes through discussion and sharing. English teachers have adopted reading circles for a reason. Test manufacturers also want to keep the testing experience uniform, which means a relatively short, set time (the longer the test lasts, the more variables creep in). But it's important to note that none of the reasons that we test this way have anything to do with more effectively testing the skills we say we want to test.
There's a whole other discussion to be had about trying to treat reading skills as discrete abilities that exist and can be measured in a vacuum without any concern about the content being read. They can't, but even if they could, none of the skills we say we want in readers are tested by the instant quickie test method. We say we want critical thinking, deep reading, and reflection beyond simple recall and fact-spitting, but none of that fits with the cold-reading and instant-analysis method used in tests. We test as if we want to train students to cold read and draw conclusions quickly, in an isolated brief period.
This is nuts. It is a skill set that pretty much nobody is looking for, an ability favored by no one, and yet, it is a fundamental part of the Big Standardized Test. No-- I take that back. This is a set of skills that is useful if you want to train a bunch of people to read and follow directions quickly and compliantly. That's about it.
Real reading takes time. Real reflection takes time. Both are best served by a rich environment that includes other thoughtful readers and resources to enrich the experience. To write any sort of thoughtful, deep, or thorough reflection on that reading also takes time.
If policymakers were serious about building critical thinking, deep reading skills, and thoughtful responses to the text, they would not consider BS Tests like the PARCC for even five minutes. It is one more area where stated intent and actual actions are completely out of alignment.
The Death of Testing Fantasies
It is one of the least surprising research findings ever, confirmed now by at least two studies-- students would do better on the Big Standardized Test if they actually cared about the results.
One of the great fantasies of the testocrats is their belief that the Big Standardized Tests provide useful data. That fantasy is predicated on another fantasy-- that students actually try to do their best on the BS Test. Maybe it's a kind of confirmation bias. Maybe it's a kind of Staring Into Their Own Navels For Too Long bias. But test manufacturers and the policy wonks who love them have so convinced themselves that these tests are super-important and deeply valuable that they tend to believe that students think so, too.
Somehow they imagine a roomful of fourteen-year-olds, faced with a long, tedious standardized test, saying, "Well, this test has absolutely no bearing on any part of my life, but it's really important to me that bureaucrats and policy mavens at the state and federal level have the very best data to work from, so I am going to concentrate hard and give my sincere and heartfelt all to this boring, confusing test that will have no effect on my life whatsoever." Right.
This is not what happens. I often think that we would get some serious BS Test reform in this country if testocrats and bureaucrats and test manufacturers had to sit in the room with the students for the duration of the BS Tests. As I once wrote, if the students don't care, the data aren't there.
There are times when testocrats seem to sense this, though their response is often silly. For instance, back when Pennsylvania was using the PSSA test as our BS Test, state officials decided that students would take the test more seriously if a good score won them a shiny gold sticker on their diploma.
The research suggests that something more than a sticker may be needed. Some British research suggests that cash rewards for good test performance can raise test scores in poor, underperforming students. And then we've got this new, unpublished working paper from researchers John List (University of Chicago), Jeffrey Livingston (Bentley University) and Susan Neckermann (University of Chicago) which asks via title the key question-- "Do Students Show What They Know on Standardized Tests?" Here's the abstract, in all its stilted academic-languaged glory:
Standardized tests are widely used to evaluate teacher performance. It is thus crucial that they accurately measure a student’s academic achievement. We conduct a field experiment where students, parents and tutors are incentivized based partially on the results of standardized tests that we constructed. These tests were designed to measure the same skills as the official state standardized tests; however, performance on the official tests was not incented. We find substantial improvement on the incented tests but no improvement on the official tests, calling into question whether students show what they know when they have no incentive to do so.
I skimmed through the full paper, though I admit I just didn't feel incented to examine it carefully because this paper is destined to be published in the Journal of Blindingly Obvious Conclusions. Basically, the researchers paid students to try harder on one pointless test, but found that this did not inspire the students to try harder on other pointless tests for free.
A comparable experiment would be for a parent to pay their teenage daughter to clean up her room, then wait to see if she decided to clean the living room, too. There is some useful information here (finding out if she actually knows how to clean a room), but what we already know about motivation (via both science and common sense) tells us that paying her to clean her room actually makes it less likely that she will clean the living room for free.
And my analogy is not perfect because she actually lives in her room and uses the living room, so she has some connection to the cleaning task. Perhaps it would improve my analogy to make it about two rooms in some stranger's home.
The study played with the results of different rewards for the student lab rats, again, with unsurprising results ("The effects are eliminated completely however when the reward amount is small or payment is delayed by a month").
More problematically, the study authors do not seem to have fully understood what they were doing as witnessed by what they believed was their experimental design--
The experiment is designed to evaluate whether these incentives successfully encourage knowledge acquisition, then measure whether this acquisition results in higher ISAT scores. Using a system developed by Discovery Education, the organization which creates the ISAT, we created “probe” tests which are designed to assess the same skills and knowledge that the official standardized tests examine.
No. The experiment was designed, whether you grokked it or not, to determine if students could be bribed to try harder on the tests, thereby getting better scores.
The answer is, yes, yes they can, and that result underlines one of the central flaws of test-driven accountability-- if you give students a test that is a pointless exercise in answer-clicking, many will not make any effort to try, and your results are useless crap. The fantasy that BS Tests produce meaningful data is a fantasy that deserves to die.
As for the secondary question raised by these studies-- should we start paying students for test performance-- we already know a thousand reasons that such extrinsic rewarding for performance tasks is a Very Bad Idea. So let me leave you with one of the most-linked pieces of work on this blog, Daniel Pink's "Drive"
One of the great fantasies of the testocrats is their belief that the Big Standardized Tests provide useful data. That fantasy is predicated on another fantasy-- that students actually try to do their best on the BS Test. Maybe it's a kind of confirmation bias. Maybe it's a kind of Staring Into Their Own Navels For Too Long bias. But test manufacturers and the policy wonks who love them have so convinced themselves that these tests are super-important and deeply valuable that they tend to believe that students think so, too.
Somehow they imagine a roomful of fourteen-year-olds, faced with a long, tedious standardized test, saying, "Well, this test has absolutely no bearing on any part of my life, but it's really important to me that bureaucrats and policy mavens at the state and federal level have the very best data to work from, so I am going to concentrate hard and give my sincere and heartfelt all to this boring, confusing test that will have no effect on my life whatsoever." Right.
This is not what happens. I often think that we would get some serious BS Test reform in this country if testocrats and bureaucrats and test manufacturers had to sit in the room with the students for the duration of the BS Tests. As I once wrote, if the students don't care, the data aren't there.
There are times when testocrats seem to sense this, though their response is often silly. For instance, back when Pennsylvania was using the PSSA test as our BS Test, state officials decided that students would take the test more seriously if a good score won them a shiny gold sticker on their diploma.
The research suggests that something more than a sticker may be needed. Some British research suggests that cash rewards for good test performance can raise test scores among poor, underperforming students. And then we've got this new, unpublished working paper from researchers John List (University of Chicago), Jeffrey Livingston (Bentley University) and Susan Neckermann (University of Chicago), which asks via its title the key question-- "Do Students Show What They Know on Standardized Tests?" Here's the abstract, in all its stilted academic-languaged glory:
Standardized tests are widely used to evaluate teacher performance. It is thus crucial that they accurately measure a student’s academic achievement. We conduct a field experiment where students, parents and tutors are incentivized based partially on the results of standardized tests that we constructed. These tests were designed to measure the same skills as the official state standardized tests; however, performance on the official tests was not incented. We find substantial improvement on the incented tests but no improvement on the official tests, calling into question whether students show what they know when they have no incentive to do so.
I skimmed through the full paper, though I admit I just didn't feel incented to examine it carefully because this paper is destined to be published in the Journal of Blindingly Obvious Conclusions. Basically, the researchers paid students to try harder on one pointless test, but found that this did not inspire the students to try harder on other pointless tests for free.
A comparable experiment would be for a parent to pay their teenage daughter to clean up her room, then wait to see if she decided to clean the living room, too. There is some useful information here (finding out if she actually knows how to clean a room), but what we already know about motivation (via both science and common sense) tells us that paying her to clean her room actually makes it less likely that she will clean the living room for free.
And my analogy is not perfect because she actually lives in her room and uses the living room, so she has some connection to the cleaning task. Perhaps it would improve my analogy to make it about two rooms in some stranger's home.
The study played with the results of different rewards for the student lab rats, again, with unsurprising results ("The effects are eliminated completely however when the reward amount is small or payment is delayed by a month").
More problematically, the study authors do not seem to have fully understood what they were doing, as evidenced by what they believed was their experimental design--
The experiment is designed to evaluate whether these incentives successfully encourage knowledge acquisition, then measure whether this acquisition results in higher ISAT scores. Using a system developed by Discovery Education, the organization which creates the ISAT, we created “probe” tests which are designed to assess the same skills and knowledge that the official standardized tests examine.
No. The experiment was designed, whether you grokked it or not, to determine if students could be bribed to try harder on the tests, thereby getting better scores.
The answer is, yes, yes they can, and that result underlines one of the central flaws of test-driven accountability-- if you give students a test that is a pointless exercise in answer-clicking, many will not make any effort to try, and your results are useless crap. The notion that BS Tests produce meaningful data is a fantasy that deserves to die.
As for the secondary question raised by these studies-- should we start paying students for test performance-- we already know a thousand reasons that such extrinsic rewards for performance tasks are a Very Bad Idea. So let me leave you with one of the most-linked pieces of work on this blog, Daniel Pink's "Drive."