I've been home for about a week and I am just about back up to speed. There's a lot to read this time around. As always, I encourage you to share wildly whatever you like here.
What Are the Main Reasons Teachers Call It Quits?
NPR takes a look at why some folks are getting out of the teaching biz. No surprises here, but nice to see NPR catching on.
LA Unified Takes a Hard Look at Charter Schools
Charters have taken it on the chin in LA, and there's a definite shift in attitude there.
A Public Education
Friend of this blog Phyllis Bush ran an op-ed this week that gets to the heart of what does and does not make a public education.
State-Run Kids: Suleika's Story
Here's a moving story of what the charter mess in New Jersey looks like to the families and children of the city.
Black Children Deserve the Stability That Neighborhood Schools Offer
Andre Perry absolutely nails it in discussing one of the worst effects of charter schools-- the loss of a stabilizing institution for a community.
What I Miss
Friend of this blog Mary Holden (it's nice to have all these friends) has been writing an honest and personal account of her departure from the classroom. Here's her look back at what she misses.
King of the Castle
Jennifer Berkshire (Edushyster) takes a look at the infamous Massachusetts charter that makes its teachers pay to leave.
Wall Street Firms Make Money from Pension Funds, Spend It On Charters
Actual reporter (you know-- the old-fashioned type who actually goes out and finds things out) David Sirota reports the maddening but predictable news that public teacher pension funds are helping fund the attack on public education.
The Vivisection of Literature
Another examination of how the study of literature has been beaten up in the rush to High Standards.
The Absurd Defense of Standards Post-Common Core
Jane Robbins takes a quick look at how some folks in Kentucky are in a spitting match over the Core.
NAACP President: Why We Should Pause the Expansion of Charter Schools
Since the NAACP made the charteristas all sad, lots of folks have been trying to tell them what happened, why it happened, and what they should really do. Here's the president of the NAACP to explain what they did and why they did it.
Sunday, October 30, 2016
Saturday, October 29, 2016
How (Not) To Grade Schools
Bellwether Education Partners is a right-tilted thinky tank from the same basic neighborhood as the Fordham Institute. Chad Aldeman is one of their big guns, and this month he's out with Grading Schools: How States Should Define “School Quality” Under the Every Student Succeeds Act. It's a pretty thing with thirty-two pages of thoughts about how to implement school accountability under ESSA, and I've read the whole thing so that you don't have to. Let's take a look under the hood.
Introduction
Aldeman offers a few thoughts to start that give a hint about where he might be headed. School evaluation has been too rigid and rule-bound. We've focused too much on student test scores instead of student growth. But the window is now open for a "new conversation," which kind of presumes that there was an old conversation, and I suppose for people in the thinky tank world it might seem as if there were a conversation, but from out here in the actual education field, school accountability has been imposed from the top down with deliberate efforts to silence any attempts at conversation.
In other words, the news that school accountability has been too rigid and rule-bound is only news to people who have steadfastly ignored the voices of actual teachers, who called that one from the very first moment that No Child Left Behind raised its rigid, inflexible, and not-very-smart head.
So to have this "new conversation," policy folks should brace themselves for a certain amount of "Told you so" or "No kidding" or even "No shit, Sherlock." Or alternately, as this new conversation is probably going to resemble the old one insofar as actual teacher voices will be once again excluded, something along the lines of, "Remember what happened the last time you ignored us?"
What Is Accountability and Why Does It Matter?
Aldeman acknowledges that accountability covers a wide range of functions, from transparency for the general public on one end to rewards and punishments by government on the other end. He posits that somewhere in the middle, "accountability can act as a tool for improvement through goal-setting, performance benchmarking, and re-evaluation." And he also notes that accountability measures are a state government's way of signaling what it values.
So accountability can be very many things. Who is it for?
Well, teachers and school leaders, who are supposed to be able to use the data to do a better job. And parents, too. And also the political leaders who are responsible for the oversight of public tax dollars. And on top of that, ESSA requires states to grade schools in order to stack rank them and target some, including the bottom five percent, for some manner of fixing.
Aldeman barrels on, pretending that meeting that last set of ESSA-mandated stack-ranking, school-grading requirements will meet all the various versions of accountability that he has listed. He suggests in passing that we're really talking about different degrees of transparency for different groups of accountability viewers, but that's not really true either.
Neither Aldeman nor, for that matter, the feds have seriously or realistically addressed the problems that come when you try to create an instrument that measures all things for all audiences. This is bananas, and it's why the entire accountability system continues to be built on a foundation of sand and silly putty. The instrument that tells a parent how their child is doing is not the same as the instrument that tells a teacher how to tweak instruction, and neither is the same as the instrument that tells the state and federal government if the school is doing a good job, and none of those are the same as an instrument used to stack rank all the schools in the state (and, it should also be noted, none of those functions are best done by a Big Standardized Test, and yet policymakers seem unable to let go of the assumption that the BS Tests are good for anything).
It's like weighing the entrees at a restaurant as a way of determining customer satisfaction, chef quality and efficiency, how well the restaurant is managed, compliance with health code regulations, reviews for the AAA guide, and the stability of the building in which the restaurant is housed. It's simply nuts.
Aldeman cites assorted research that is all based on the assumption that narrow, poorly written standardized math and reading tests are actually measuring something useful. They are not. Virtually all of the data generated by these tests is junk, and as their use becomes more widespread and students become more weary of them, the data becomes junkier and junkier.
Bottom line-- real accountability requires a wide range of instruments for a wide range of audiences, and we have not remotely solved that challenge. Not, let me note, that it isn't a challenge worth solving. But as long as we base the whole system on the BS Tests, we will not be remotely in the right neighborhood.
How Should States Select Accountability Measures?
Again, Aldeman is working from some bad assumptions about what the system is for. Can you spot the key word in this sentence?
The trick, then, is to design accountability systems in which schools are competing on measures that truly matter.
A competition system is not a measuring system. If I tell you that Chris is the tallest kid in class and Pat is the shortest, you still have no idea of Chris's or Pat's actual height.
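If you want to see just how little a ranking tells you, here's a tiny sketch (the heights, like the kids, are made up):

```python
# Rank four kids by height (hypothetical numbers).
heights = {"Chris": 180, "Pat": 150, "Sam": 163, "Alex": 171}
ranking = sorted(heights, key=heights.get, reverse=True)
print(ranking)  # ['Chris', 'Alex', 'Sam', 'Pat']

# The ranking comes out exactly the same if Chris is 250 cm and Pat is 90 cm.
# Ordering kids (or schools) throws the actual magnitudes away entirely.
```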
Aldeman gets his next point right-- an accountability system should be simple, clear and fair. Well, partly right. His idea of "fair" is that the system only measures things that schools actually have control over. So he's skipped one other key attribute-- the accountability system needs to be accurate and measure what it actually says it measures. So, for instance, we should stop saying "student achievement" when we actually mean "student score on a single narrow standardized math and reading test that has never really passed tests for validity and reliability."
Aldeman notes the four required elements per ESSA:
1) "Achievement rates" aka "test scores."
2) Some other "valid and reliable" academic indicator. The word "other" assumes facts not in evidence.
3) Progress in achieving English language proficiency.
4) Some other indicator of school quality or success.
Aldeman offers a chart in which some possible elements are judged against qualities like simplicity, fairness, disaggregatability, and giving guidance to the school. So grit and other personal qualities are iffy because both measuring and teaching them are iffy. Teacher and student surveys get a thumbs up for measuring stuff, but thumbs down for being actionable, though I think a good student or staff survey would give a school very specific issues to address.
Aldeman says to avoid redundant measures and reminds us that ESSA doesn't put a maximum limit on measures to be used.
How Can States Design School Ratings Systems That Are Simple, Clear, and Fair?
A fake subheading that simply covers an introduction that says, "And now I will tell you how." It does include a fun sidebar about how K-2 should be included in the accountability system. Aldeman notes that leaving them out previously was because of things like the unsolved challenge of how to assess the littles; he does not offer any new insights about that issue that have turned up since NCLB days. In fact, subjecting the littles to any kind of formal or standardized assessment is a truly, deeply indefensible policy notion, and it serves as nothing more than a clear-cut example of putting the desires of policymakers and data-grubbers over the needs of small children.
Incorporating Student Achievement
Of course, by "student achievement," we just mean "test scores." Aldeman recommends we start out with a simple performance scale index for points. He suggests five performance levels, with emphasis on proficiency because "proficiency is, after all, a benchmark for future success in college and careers." Which-- no, no it's not. There isn't an iota of data anywhere to connect a proficiency level on the BS Tests with college and career success, particularly because the proficiency rating is a normed ranking, so it moves every year depending on the mass of scores and the cut scores set annually by state testocrats.
So we're talking about using the test scores, which are junk, after they have been run through a normed scale, which adds more junk.
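If you want to see the junk-on-junk mechanism in action, here's a quick sketch. The numbers and the cut-setting rule are mine, invented for illustration-- no state publishes its testocracy as tidy Python-- but this is the basic logic of a normed cut score:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend every student scored 20 points higher in 2016.
for year, avg in [(2015, 500), (2016, 520)]:
    scores = rng.normal(loc=avg, scale=100, size=10_000)
    cut = np.percentile(scores, 60)          # top 40% deemed "proficient"
    rate = (scores >= cut).mean()
    print(f"{year}: cut score {cut:.0f}, proficient rate {rate:.0%}")

# The cut score drifts up right along with the scores, and the "proficient"
# rate stays pinned near 40% by construction, no matter what anybody learned.
```

The goal posts move because moving is what they are built to do.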
Using Growth as the "Other" Academic Indicator
Aldeman pays tribute to the "growth mindset" as a worthy stance for schools, though we are once again talking only about growth as it applies to standardized test scores. If the student grew in some other way, nobody cares.
The problem with coming up with a measure of student growth is, of course, that nobody has successfully done it yet. Aldeman mentions several models.
* Without using the words "value-added," Aldeman nods to the model that uses obtuse, opaque, and unproven mumbo-jumbo to claim that a school's contribution to student performance can be statistically stripped away from every other factor. Aldeman suggests this is disqualified because it is neither simple nor understandable; he might also have mentioned that it is baloney that has been debunked by all manner of authorities.
* Aldeman mentions the student percentiles model, a stack-ranking competitive model that compares a student's test score to the score of other students who had a similar score last year. Like all such normed models, this one involves goal posts that move every year, and like all percentile-based models, it guarantees the exact same distribution year after year. No amount of school quality will raise all students to the top 25%.
* Aldeman favors a transitional matrix, judging schools on how many students move from one group to another (say, below basic to basic). This is also a bad idea. Aldeman has elsewhere shown sensitivity to the unintended consequences of some of these policy choices, so I'm not sure how he misses the obvious implication here: a school's best strategy will be to invest its energy in students who are near a threshold and not those for whom there's no real hope of enough improvement. (There's a quick sketch of both growth models after this list.)
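Here's that sketch-- a toy version of both models, with every bin, cut score, and distribution invented for illustration rather than taken from any vendor's actual formula:

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(1)
prior = rng.normal(500, 100, 5_000)           # last year's scores
current = prior + rng.normal(20, 50, 5_000)   # this year's scores

# Student growth percentile: rank each student's current score against
# peers who had similar prior scores (decile bins on the prior score).
bins = np.digitize(prior, np.percentile(prior, np.arange(10, 100, 10)))
sgp = np.empty_like(current)
for b in np.unique(bins):
    peers = bins == b
    sgp[peers] = 100 * rankdata(current[peers]) / peers.sum()
print(f"top growth quartile: {(sgp >= 75).mean():.0%}")   # ~25%, every year

# Transitional-matrix incentive: only students sitting just below a band
# cut can move the school's number, so that's where the coaching goes.
cuts = [400, 500, 600]   # hypothetical band boundaries
bubble = sum(any(0 < c - s <= 15 for c in cuts) for s in prior)
print(f"'bubble kids' within 15 points below a cut: {bubble} of {len(prior)}")
```

However much every student learns, a quarter of them land in the top growth quartile and a quarter in the bottom, and under the matrix, the rational school spends its energy on the bubble kids.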
Creating an Overall Index and Incorporating Subgroup Results
Aldeman wants to use the two indicators we've got so far and average them for an overall index, and this is the score by which we'll "flag" the bottom 5%. These indexes would also be computed for subgroups so that schools can also be flagged for failing to close their achievement gaps.
To be clear, this approach assumes that identifying schools for improvement is an important lever at the state’s disposal. That’s intentional, because there are positive effects associated with the mere act of notifying schools that they need to improve. That’s especially true for accountability systems bearing consequences for schools, but it’s even true in systems relying purely on information and transparency.
In other words, threats work. At least, they work for raising test scores (and he's got some research from reformster research rock star Eric Hanushek to back it up). This is a deeply irresponsible policy idea, completely ignoring the question of what schools give up and get rid of in order to raise their test scores: recess, phys ed, art, music, and so on. In my own district I have seen schools strip student schedules so that middle school students with low test scores spent their entire day in English and math class, with no history, art, science, or other non-tested subjects.
This is the test-centered school at its worst. This is a lousy idea.
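To make the roll-up mechanics concrete, here's one last sketch-- my reading of the proposal, with made-up school indexes rather than Aldeman's published formula:

```python
import numpy as np

rng = np.random.default_rng(2)
achievement = rng.uniform(60, 100, 200)   # hypothetical per-school indexes
growth = rng.uniform(60, 100, 200)

overall = (achievement + growth) / 2      # average the two indicators
cutoff = np.percentile(overall, 5)
flagged = overall <= cutoff               # "flag" the bottom 5%
print(f"flagged {flagged.sum()} of {len(overall)} schools")

# Baked right in: 5% of schools get flagged every single year, even here,
# where every school's index is above 60. The stack ranking guarantees
# "failures" whether or not any school is actually failing.
```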
Incorporating Other Measures of School Success Into Final School Ratings
Here Aldeman brings out the English model of school inspections, in which trained and experienced educators visit the school for an extended inspection, both detailed and holistic, of how the school works, how well it ticks, how well it serves students, and how well it matches the notion of what a good school should be.
This is a good idea.
Though I can imagine that for schools that have been "flagged" because of test scores, the inspection visit might be a bit harrowing.
I would offer one editing suggestion to Aldeman for his system. Keep the school inspection system and get rid of everything else.
Yes, yes, ESSA has kept us beholden to the BS Testing system. But any sensible, realistic, useful accountability system is going to shrink the use of the BS Test down to the absolute minimum the feds will let the state get away with. Making the test scores the foundation of the rest of the accountability is the absolute wrong way to go.
Conclusion
Aldeman notes that ESSA somehow focuses less attention on punishing "failing" schools than on actually helping them, which, maybe, depending on how you read it. It would be worth it for the feds and states to back away from the fixing business entirely, since they have shown absolutely no aptitude for turning around failing schools.
There is one other huge hole in Aldeman's plan, and that is the space where we should find the voice of the community in which the school is located. He has dodged one of the big accountability questions, which is this-- if the community in which a school is located is happy with their school, exactly what reason is there for the state and federal bureaucrats to get involved? I remain puzzled that right-leaning policy folks, of all people, are so uninterested in local control of schools.
Friday, October 28, 2016
GA: Ed Consultant Slams Takeover Amendment
In Georgia, reformsters are pushing hard for Amendment 1, a constitutional amendment that would institute a state-level takeover district, modeled after the pioneering Achievement School District in Tennessee.
Dr. David K. Lerch is a Georgia resident who ran his own educational consulting firm for over three decades. He has worked all over the country, writing grants and overseeing programs (e.g., Pueblo hired him to evaluate their STEM programs).
Lerch has presumably seen plenty in the ed field; he earned his Master's Degree in Public School Administration from the University of Virginia back in 1967. By 1984 he was forming the National Association of Magnet School Development and was touting magnets as a path to desegregation and what we now call educational equity. He was also saying the kinds of things that charter fans would chime in on decades later:
Parents want neighborhood schools until they find a program they support and then they will send a child halfway across the county if the education program is attractive.
Lerch now works for the Juliana Group, Inc, a Savannah-based business that specializes in selling furniture for Montessori schools.
In short, Lerch is not a long-time, hard-core supporter of traditional public education. However, when a letter-writer to the Savannah Morning News wrote to warn against Amendment 1, Lerch felt moved to back her up.
I can add some first-hand experiences validating her timely concern about what will happen with the loss of local control of schools and the resulting loss of millions of state and federal revenue.
I served as a consultant to school districts in two states, Louisiana and Michigan, where the governors set up takeover districts identical to that proposed by Gov. Nathan Deal.
The legislative amendment in Louisiana’s constitution (Recovery School District) provided for the same type of state control. While I was working with East Baton Rouge Parish School District, the state took over Istrouma High School, operated it for five years and returned it to the district without students showing any measurable academic success.
Then the school board had to spend over $21 million of local funds to repair the facility.
I also worked with Michigan’s Education Achievement Authority (EAA), which was set up by Governor Snyder as a model of Louisiana’s Recovery School District.
I helped them obtain a $35 million federal grant for teacher training and support in 15 of 60 schools that were scheduled for operation by EAA. After only four years of state control, and massive evidence of EAA’s failure cited by education experts and the federal government monitoring their grant, Governor Snyder decided to shut down the agency and turn EAA’s schools into charter schools.
Those two failures are, of course, on top of the failure of Tennessee's ASD. (see here, here and here). And just in case you have doubts:
Why anyone would duplicate a state controlled takeover district that has proven to be a failure in two states is beyond belief. If you don’t believe the controversy caused by the takeover districts similar to OSD, read the February 2016 document “State Takeover of Low-Performing Schools – A Record of Academic Failure, Financial Mismanagement & Student Harm.”
It is available on the Internet and will shake you to the core about Amendment 1.
He's correct. That report is available on the internet, and it is yet more evidence that state-run takeover turnaround districts have failed-- and not just marginally, but spectacularly and totally-- every time they have been attempted. Georgia has ample evidence and ample warning. Here's hoping that Georgia voters get the message.
CA: Is the Fox Guarding the Henhouse?
The Los Angeles Unified School District put away their charter rubber stamp, and it has touched off a wave of hand wringing and baloney shoveling.
Earlier this month, the LAUSD board pulled the plug on five charters. Three of them were Magnolia schools, part of the Gulen charter web of schools allegedly tied to the reclusive cleric and exiled Turkish political leader who has been linked to this year's coup attempt. The Magnolia chain has been accused of significant financial shenanigans. The other two were Celerity schools, a chain with such a spotted record that even reformy John Deasy has cast a wary eye in their direction. Oversight and transparency, two important qualities that charter schools generally do very badly, were cited as issues with all five.
But the unexpected move by the board to hold any charters accountable for anything ever has stirred some folks up.
Here's a charter-friendly look at the "issue" from KPCC, the Southern California Public Radio station, that opens with exactly the wrong question:
Is the Los Angeles Unified School District able to give a fair shake to the charter schools it authorizes and oversees even though the district loses money every time a student leaves to attend a charter?
And follows it up with this misstatement of the issue:
On Tuesday, board members addressed the underlying concern the California Charter Schools Association and others have raised in the wake of their vote: that letting L.A. Unified review such requests from charter schools — especially in an environment where the district and charters compete for funding — is letting the fox guard the henhouse.
Emphasis mine. Because I wouldn't frame the situation by suggesting that the school board is somehow out to steal money it's not entitled to.
Instead of "letting the fox guard the henhouse," let's say "requiring the elected representatives of the taxpayers to oversee how those taxpayers' dollars are used."
Some members of the board expressed frustration that the California system allows unhappy charters to next ask the county to authorize them. Board member Richard Vladovic noted that the district would save a lot of money if charters were authorized and supervised by the state; he neglected to suggest that the charters be financed by the state as well.
Other board members clearly get the backwardness of the system:
Charter school petitioners “who are turned down will always have a complaint,” said school board vice president George McKenna. “Their opinion will always be that they were wronged, that we weren’t fair, that the burden is on [L.A. Unified] to prove their guilt, not on them to prove their innocence.”
Yes, in California (and several other states), we've got a system in which charters feel entitled to open and stay open, drawing on public tax dollars as long as they're inclined.
There really isn't anything like this. If I want to pave the driveway to my private business, I can't demand state highway tax dollars to finance the driveway and expect to get those dollars unless someone can prove I've done something really terrible. If I want to start my own private security force, I can't bill the Department of Defense and expect them to shoulder the burden of proving why I shouldn't be paid public tax dollars.
But somehow California charters feel entitled to public tax dollars and will hold onto them until someone can pry the pursestrings out of their chartery fingers. This is not the fox guarding the henhouse; this is the fox moving into the henhouse and getting indignant when the farmer shows up with an eviction notice.
Thursday, October 27, 2016
Reflect now. Now!! NOW!!!
One of the fully screwed-up features of modern standardized assessments is the time frame.
A standardized test is the only place where students are told, "Starting from scratch, read this, reflect on it, answer questions about it, and do it all in the next fifteen minutes." We accept the accelerated time line as a normal feature of assessment, but why?
Never ever in a college course was a student handed a book for the first time and told, "Read this book and write an intelligent, thoughtful paper about the text. Hand it in sixty minutes from now."
Reflective, thoughtful, deep, even close reading, the kind of reading that reformsters insist they want, takes time. The text has to be read and considered carefully. Theories about the ideas, the themes, the characters, the author's use of language, the thoughtful consideration of the various elements of the writing-- those all need time to percolate, to simmer, to be mulled by the reader. Those of us who teach literature and reading in high school never have to tell our students, "Hurry up and zip through that faster." Most commonly we have to find ways to encourage our students to slow down, pay attention, really think about what they're reading instead of trying to race to the end.
A reader's relationship with a text, like any good relationship, takes time. It may start with a certain slow grudging acquaintance of necessity, or it may start with an instant spark of attraction, but either way, if the relationship is going to have any depth or quality, time and care will have to be invested. Standardized tests are the "hit it and quit it" of the reading world.
The reasons that we test this way are obvious. Test manufacturers want a short, closed test period so that no test items can "leak," though, of course, some of the best reflection on reading comes through discussion and sharing. English teachers have adopted reading circles for a reason. Test manufacturers also want to keep the testing experience uniform, which means a relatively short, set time (the longer the test lasts, the more variables creep in). But it's important to note that none of the reasons that we test this way have anything to do with more effectively testing the skills we say we want to test.
There's a whole other discussion to be had about trying to treat reading skills as discrete abilities that exist and can be measured in a vacuum without any concern about the content being read. They can't, but even if they could, none of the skills we say we want in readers are tested by the instant quickie test method. We say we want critical thinking, deep reading, and reflection beyond simple recall and fact-spitting, but none of that fits with the cold-reading and instant-analysis method used in tests. We test as if we want to train students to cold read and draw conclusions quickly, in an isolated brief period.
This is nuts. It is a skill set that pretty much nobody is looking for, an ability favored by no one, and yet it is a fundamental part of the Big Standardized Test. No-- I take that back. This is a set of skills that is useful if you want to train a bunch of people to read and follow directions quickly and compliantly. That's about it.
Real reading takes time. Real reflection takes time. Both are best served by a rich environment that includes other thoughtful readers and resources to enrich the experience. To write any sort of thoughtful, deep, or thorough reflection on that reading also takes time.
If policymakers were serious about building critical thinking, deep reading skills, and thoughtful responses to the text, they would not consider BS Tests like the PARCC for even five minutes. It is one more area where stated intent and actual actions are completely out of alignment.
The Death of Testing Fantasies
It is one of the least surprising research findings ever, confirmed now by at least two studies-- students would do better on the Big Standardized Test if they actually cared about the results.
One of the great fantasies of the testocrats is their belief that the Big Standardized Tests provide useful data. That fantasy is predicated on another fantasy-- that students actually try to do their best on the BS Test. Maybe it's a kind of confirmation bias. Maybe it's a kind of Staring Into Their Own Navels For Too Long bias. But test manufacturers and the policy wonks who love them have so convinced themselves that these tests are super-important and deeply valuable that they tend to believe that students think so, too.
Somehow they imagine a roomful of fourteen-year-olds, faced with a long, tedious standardized test, saying, "Well, this test has absolutely no bearing on any part of my life, but it's really important to me that bureaucrats and policy mavens at the state and federal level have the very best data to work from, so I am going to concentrate hard and give my sincere and heartfelt all to this boring, confusing test that will have no effect on my life whatsoever." Right.
This is not what happens. I often think that we would get some serious BS Test reform in this country if testocrats and bureaucrats and test manufacturers had to sit in the room with the students for the duration of the BS Tests. As I once wrote, if the students don't care, the data aren't there.
There are times when testocrats seem to sense this, though their response is often silly. For instance, back when Pennsylvania was using the PSSA test as our BS Test, state officials decided that students would take the test more seriously if a good score won them a shiny gold sticker on their diploma.
The research suggests that something more than a sticker may be needed. Some British research suggests that cash rewards for good test performance can raise test scores for poor, underperforming students. And then we've got this new, unpublished working paper from researchers John List (University of Chicago), Jeffrey Livingston (Bentley University), and Susan Neckermann (University of Chicago), which asks via its title the key question-- "Do Students Show What They Know on Standardized Tests?" Here's the abstract, in all its stilted academic-languaged glory:
Standardized tests are widely used to evaluate teacher performance. It is thus crucial that they accurately measure a student’s academic achievement. We conduct a field experiment where students, parents and tutors are incentivized based partially on the results of standardized tests that we constructed. These tests were designed to measure the same skills as the official state standardized tests; however, performance on the official tests was not incented. We find substantial improvement on the incented tests but no improvement on the official tests, calling into question whether students show what they know when they have no incentive to do so.
I skimmed through the full paper, though I admit I just didn't feel incented to examine it carefully because this paper is destined to be published in the Journal of Blindingly Obvious Conclusions. Basically, the researchers paid students to try harder on one pointless test, but found that this did not inspire the students to try harder on other pointless tests for free.
A comparable experiment would be for a parent to pay their teenage daughter to clean up her room, then wait to see if she decided to clean the living room, too. There is some useful information here (finding out if she actually knows how to clean a room), but what we already know about motivation (via both science and common sense) tells us that paying her to clean her room actually makes it less likely that she will clean the living room for free.
And my analogy is not perfect because she actually lives in her room and uses the living room, so she has some connection to the cleaning task. Perhaps it would improve my analogy to make it about two rooms in some stranger's home.
The study played with the results of different rewards for the student lab rats, again, with unsurprising results ("The effects are eliminated completely however when the reward amount is small or payment is delayed by a month").
More problematically, the study authors do not seem to have fully understood what they were doing as witnessed by what they believed was their experimental design--
The experiment is designed to evaluate whether these incentives successfully encourage knowledge acquisition, then measure whether this acquisition results in higher ISAT scores. Using a system developed by Discovery Education, the organization which creates the ISAT, we created “probe” tests which are designed to assess the same skills and knowledge that the official standardized tests examine.
No. The experiment was designed, whether you grokked it or not, to determine if students could be bribed to try harder on the tests, thereby getting better scores.
The answer is, yes, yes they can, and that result underlines one of the central flaws of test-driven accountability-- if you give students a test that is a pointless exercise in answer-clicking, many will not make any effort to try, and your results are useless crap. The fantasy that BS Tests produce meaningful data is a fantasy that deserves to die.
As for the secondary question raised by these studies-- should we start paying students for test performance?-- we already know a thousand reasons that such extrinsic rewards for performance tasks are a Very Bad Idea. So let me leave you with one of the most-linked pieces of work on this blog, Daniel Pink's "Drive."
One of the great fantasies of the testocrats is their belief that the Big Standardized Tests provide useful data. That fantasy is predicated on another fantasy-- that students actually try to do their best on the BS Test. Maybe it's a kind of confirmation bias. Maybe it's a kind of Staring Into Their Own Navels For Too Long bias. But test manufacturers and the policy wonks who love them have so convinced themselves that these tests are super-important and deeply valuable that they tend to believe that students think so, too.
Somehow they imagine a roomful of fourteen-year-olds, faced with a long, tedious standardized test, saying, "Well, this test has absolutely no bearing on any part of my life, but it's really important to me that bureaucrats and policy mavens at the state and federal level have the very best data to work from, so I am going to concentrate hard and give my sincere and heartfelt all to this boring, confusing test that will have no effect on my life whatsoever." Right.
This is not what happens. I often think that we would get some serious BS Test reform in this country if testocrats and bureaucrats and test manufacturers had to sit in the room with the students for the duration of the BS Tests. As I once wrote, if the students don't care, the data aren't there.
There are times when testocrats seem to sense this, though their response is often silly. For instance, back when Pennsylvania was using the PSSA test as our BS Test, state officials decided that students would take the test more seriously if a good score won them a shiny gold sticker on their diploma.
The research suggests that something more than a sticker may be needed. Some British research suggests that cash rewards for good test performance can raise test scores in poor, underperforming students. And then we've got this new, unpublished working paper from researchers John List (University of Chicago), Jeffrey Livingston (Bentley University) and Susanne Neckermann (University of Chicago) which asks via title the key question-- "Do Students Show What They Know on Standardized Tests?" Here's the abstract, in all its stilted academic-languaged glory:
Standardized tests are widely used to evaluate teacher performance. It is thus crucial that they accurately measure a student’s academic achievement. We conduct a field experiment where students, parents and tutors are incentivized based partially on the results of standardized tests that we constructed. These tests were designed to measure the same skills as the official state standardized tests; however, performance on the official tests was not incented. We find substantial improvement on the incented tests but no improvement on the official tests, calling into question whether students show what they know when they have no incentive to do so.
I skimmed through the full paper, though I admit I just didn't feel incented to examine it carefully because this paper is destined to be published in the Journal of Blindingly Obvious Conclusions. Basically, the researchers paid students to try harder on one pointless test, but found that this did not inspire the students to try harder on other pointless tests for free.
A comparable experiment would be for a parent to pay their teenage daughter to clean up her room, then wait to see if she decided to clean the living room, too. There is some useful information here (finding out if she actually knows how to clean a room), but what we already know about motivation (via both science and common sense) tells us that paying her to clean her room actually makes it less likely that she will clean the living room for free.
And my analogy is not perfect because she actually lives in her room and uses the living room, so she has some connection to the cleaning task. Perhaps it would improve my analogy to make it about two rooms in some stranger's home.
The study played with the results of different rewards for the student lab rats, again with unsurprising results ("The effects are eliminated completely however when the reward amount is small or payment is delayed by a month").
More problematically, the study authors do not seem to have fully understood what they were doing, as witnessed by their own description of the experimental design--
The experiment is designed to evaluate whether these incentives successfully encourage knowledge acquisition, then measure whether this acquisition results in higher ISAT scores. Using a system developed by Discovery Education, the organization which creates the ISAT, we created “probe” tests which are designed to assess the same skills and knowledge that the official standardized tests examine.
No. The experiment was designed, whether you grokked it or not, to determine if students could be bribed to try harder on the tests, thereby getting better scores.
The answer is, yes, yes they can, and that result underlines one of the central flaws of test-driven accountability-- if you give students a test that is a pointless exercise in answer-clicking, many will not make any effort to try, and your results are useless crap. The fantasy that BS Tests produce meaningful data is a fantasy that deserves to die.
As for the secondary question raised by these studies-- should we start paying students for test performance-- we already know a thousand reasons that such extrinsic rewarding for performance tasks is a Very Bad Idea. So let me leave you with one of the most-linked pieces of work on this blog, Daniel Pink's "Drive."
Wednesday, October 26, 2016
More Bogus Research from Rocketship
When you need some "research" to pump up your ad copy, what better than to just hire it yourself? In fact, why not hire the guys who have been pumping it out reliably for you for years?
Rocketship Academy is patting itself on the back over a new "research" report that it commissioned from good old SRI International, a group that just happens to be heavily invested in technology-based education. SRI used to stand for Stanford Research Institute, but separated from Stanford in 1970 and changed its name in 1977 (which suggests the split was plenty amicable). That was long before Preston Smith tacked his Teach for America credits onto his resume and looked to enter the lucrative world of edubusiness, but since then, SRI has teamed up with Rocketship to show the awesomeness of the Rocketship Academy product.
Way back in 2011, SRI did a super-duper study of the K-5 Rocketship to show that the Dreambox program (Dreambox because it's what you bury dreams in?) raised scores on the NWEA MAP test by a couple of points. Dreambox is a company that Reed "Elected School Boards Stink" Hastings (Netflix) bought for the Charter School Growth Fund, a fund that also invests in Success Academy and Rocketship Academies. We could play "Follow The Incestuous Privatization Ties" all day, but we'll pass for the moment. We'll also let the usefulness of NWEA results ride for a moment (though you may recognize the name from the successful boycott of the test by Seattle teachers). SRI was back with another glowing report in 2014 and was touting further awesomeness in 2015.
It's almost as if SRI were a corporate partner of Rocketship rather than an "independent nonprofit research center," and indeed, way back in 2010 we find Rocketship then-CEO John Danner (now CEO of Zeal-- more in a second) explaining in an interview that they are teaming up with SRI:
Next year, with SRI’s help we’re going to instrument Rocketship to be a test lab where we can measure the effectiveness of every online curricula for elementary schools.
Funny side note. Zeal started out as one more adaptive instructional software start-up, with money from NewSchools Venture Fund and other folks interested in a good ROI on their edubiz dollars. Visit their site today and you'll find them headlining "live, on-demand coaching" from "real coaches." So I guess Danner has lost a little of his faith in software-based education.
Fine. Whatever. What about that New Research?
So Rocketship's use of SRI as an independent evaluator is about as fishy as the "volunteer" pulled out of the audience by the bad magician at the company picnic. How about the actual research? Does it look plausible anyway?
Short answer: not so much.
What the research claims is that Rocketship middle schoolers have gained a full year of learning over their peers.
After controlling for demographic differences and other key variables, SRI found that Rocketship alumni outperformed their classmates on NWEA MAP, a nationally norm referenced assessment, by approximately one year of academic growth in both math and reading. This independent evidence from a world-renowned research institute is a powerful validation of our pioneering personalized learning model. Our incredibly dedicated teachers, deeply engaged parents and purposeful approach to technology are delivering significant gains for our Rocketeers that extend into middle school.
Let me pause a moment to nod at my usual ridicule of the notion of measuring learning growth in years, and especially in parts of years. Which part of the year? Did the student gain an April or a September? The notion that a year of learning represents some sort of homogeneous tofu-like spread of steady increase only makes sense to someone who doesn't know many human beings. It's a statistical construct, like saying that one car is stolen every fifteen minutes, as if car theft is a regular ticking event by which we can measure time. "How long have we been trapped in this dungeon, Chris?" "I'm not sure, but I think Pat's learning has grown about this much, so I figure around three months."
But let's look at the experimental design.
The study population included Rocketship alumni enrolled in one of seven participating charter middle schools in the San Jose area between the 2012–13 and 2015–16 school years, totaling 625 students. The comparison group of non-Rocketship peers was made up of students who did not attend any Rocketship elementary school and who attended the participating charter middle schools in the same grade level and in the same academic years as the Rocketship alumni in the study, totaling 1,294 students.
There's some other baloney about controlling for demographic characteristics "using propensity score weighting and multiple linear regression" (I just hope they reset the turginator on the framistan before they did all that), but the bottom line here is that they took a bunch of Rocketship middle schoolers and compared the ones who had attended Rocketship elementary school to the ones who hadn't.
From this we can conclude that students who attended Rocketship Academy elementary schools were better prepared for Rocketship Academy middle schools than students who did not.
Other research also indicates that at most Rocketship Academies, the sun rose to the east of the school. Early working papers also suggest that water is wet.
Also, grit!
Just in case you think Rocketship is only focused on the quality of their test prep for a standardized math and reading test, they'd like you to know that they are also looking into the hearts of Rocketshippiteers.
Of course, test scores are only one indicator of future success. That is why we embed our core values, socio-emotional curriculum, and positive behavior interventions and supports throughout our Rocketeers’ learning experience. And while these skills are harder to measure, they still matter tremendously, especially for the student population we serve.
Emphasis mine. And the emphasis is just in bolding because I don't have an emoticon for a dropping jaw. "Especially for the student population we serve"??!! You mean poor minority deficient defective kids? Good behavior, grit, socio-emotional qualities-- the rich white kids don't need that? On the one hand, there's a bit of a point here. We have living proof that in our society, a person born into privilege can display every socially undesirable quality known to humankind and still have a shot at being both successful and also Presidential material. On the other hand, it kind of looks like Rocketship needs to work on its savior complex and its deficit thinking model for approaching non-white, non-wealthy students. But then, wealthy kids are not the Rocketship market, nor are the privileged crowd lined up for a chance to partake of this educational awesomeness. The whole model reeks of a low-cost edu-product that's good enough for Those People. Every once in a while they just forget to avoid saying so out loud.
Also, where Smith wrote "hard to measure," I think that's a typo. "Impossible to measure" is undoubtedly what he actually meant.
Bottom Line?
It's more charter marketeering fluffed up with science-flavored PR filler. It's dishonest and not very useful in adding to a real conversation about meeting educational needs or evaluating the actual impact of charters in general and the Rocketship blended plunk-kids-in-front-of-computers model in particular. I'm sure we'll see this thing passed around in the weeks ahead. Do not be fooled.