While the new education law is on the books, ESSA: The Law must still be followed by its even-more-important sequel, ESSA: The Regulations. It's the body of regulations that determines exactly what the law means, and exactly how the law will influence day-to-day life in Educationland.
So the writing of ESSA regulations is Really Important, and it's an especially big deal this time because Secretary of Education John King has signaled that he would really like to use the regulations to basically re-write the law. This has been a considerable source of conflict between King and members of Congress like Lamar Alexander and John Kline. Congress wrote ESSA with a specific bipartisan intent to de-power the USED; the USED is doing its damnedest to write itself back into power with the regulatory structure.
For instance, although ESSA absolutely acknowledges the right of parents to opt their children out of the Big Standardized Test, the proposed regulations demand that states label any school with more than 5% opt-outs a failure, and punish that "failing" school strictly and severely.
And while ESSA leaves it up to states to develop a rating system for schools, the proposed regulations demand that the rating system be a single-grade system, just like Florida's crappy letter-grade setup. Both ideas are lousy ones, but the feds are proposing an exceptionally lousy and simplistic system that has already been tried and found useless in several states.
Plus-- and this is a particularly weaselly example-- the law says that schools can be judged on four factors, and that one of those four factors doesn't have to be based on test scores or graduation rates. The proposed regulations say that the state must show research linking the fourth factor to student achievement (aka test scores) or graduation rates, basically re-writing the clear intent of the law.
"I might listen to you this much"
As with any proposed regulations, there is a period for open comments. You can comment on the proposed regulations themselves, and of course, as always, you can send word to your elected representatives. We should all make use of it. Yes, I know-- John King has a history of steadfastly and absolutely ignoring input from parents and teachers, and is right now signaling that he also intends to ignore the input of the elected members of Congress, so he's probably not going to look at the comments section on the regulations, slap his forehead, and exclaim, "Now I see it! This changes everything! How could I have been so foolish!!" But I'm going to comment anyway. Here's why.
First, Congress is armed and ready to fight, and if one of its members says, "Hey, show me what your comment section looks like," I want to be sure that Congressperson is greeted with, and subsequently armed with, a stack of spirited opposition.
Second, I don't want King to be able to say, "Well, nobody complained, so I guess everyone loves it." The truth doesn't always change the course of events, but that doesn't mean that it shouldn't be out there, front and center and fully visible, maybe for now, maybe for later, but out there.
So here are some things you can do, because the Network for Public Education wants to make this as simple as possible.
You can use this handy Action Network page to send a letter to your own elected representative. You can compose letter(s) of your own if you wish, but if you get word-tied and uncertain, this handy form will do most of the work for you. Remember, Congresspersons often simplify this stuff to the number of letters for and against, so don't worry that your arguments aren't original or clever enough. Just speak up. And do this soon-- King is scheduled to go in front of the Senate on Wednesday.
And also go to the comments page for the regulations, where you can leave your comments and thoughts about the proposed regulations. Not sure what to say, or don't have time to craft something? Once again, NPE has your back. At the bottom of this post I will include NPE's cut-and-paste objections. Simply copy and paste them onto the form, or copy and paste some of them, or copy-paste-and-rewrite what they've got here.
So even if you only have a few minutes today, you have plenty of time to speak up. These are the rules that we are going to have to live by for the foreseeable future (at least until President Trump unleashes the apocalypse), and now is the time to say something. Grumbling in the teachers' lounge a year from now won't help, but speaking up today can. Do it.
(for the response website-- cut and paste the text below here, or revise as you wish)
I oppose the following proposed regulations as contrary to the language and spirit of ESSA and because they will impose damaging and overly prescriptive mandates on our public schools.
In each case, the US Department of Education is imposing its own preferences while tying the hands of the states, districts, parents, and educators who are supposed to devise their own accountability systems, as ESSA intended. Specifically:
1. Draft regulation 200.15: This would force states to intervene aggressively in and/or fail schools in which more than 5% of students choose not to take the state tests. This violates the provision in ESSA recognizing "a State or local law regarding the decision of a parent to not have the parent's child participate in the academic assessments."
Recommendation: This regulation should be deleted. States should be able to exercise their right to determine what measures should be taken if students opt out, free of federal intrusion.
2. Draft regulation 200.13: The law requires states to create a growth score as an indicator for elementary and middle schools. Secretary King has inserted "based on the reading/language arts and mathematics assessments" into the regulation. This would prevent states from creating their own measures of student learning across the curriculum, based on factors other than standardized test scores.
Recommendation: The language "based on the reading/language arts and mathematics assessments" should be deleted from the regulation so that states have the freedom to devise their own measures of student growth.
3. Draft regulation 200.14: The law requires that there be four accountability indicators. The fourth is a school quality indicator that is not based on test scores or graduation rates. States have the freedom to include school climate data, parent engagement, or other factors related to school quality. The proposed regulation insists that such measures be proven by research to be linked to achievement or graduation rates, thereby restricting what states can include.
Recommendation: This regulation should be amended to allow states to encourage improvements in school climate, safety, engagement, or other factors that may or may not be directly linked to academic achievement, but are important in their own right.
4. Draft regulation 200.17: Proposed 200.17 would require that the test scores and graduation rates of any subgroup (such as students with an IEP or disadvantaged students) of at least 30 students be measured for accountability purposes. Both NCLB and ESSA leave the minimum subgroup size for the states to decide. The regulations argue that a group size of 30 is sufficient to provide a fair and reliable rating, but this claim has no basis in research. It should be noted that with a group size of 30, even 2 absent students will push the school below the 95% participation requirement (28 of 30 is just 93.3%).
Recommendation: The minimum group size should be decided by states, as the law requires, after consultation with researchers, given the high-stakes consequences for schools.
5. Draft regulation 200.18: This would require that each school receive a single "summative" grade or rating, derived from combining at least three of the four indicators used to assess its performance. Yet imposing a single grade on schools has been shown in states and districts across the nation to be overly simplistic, unreliable, and unfair, and is nowhere mentioned in the law. This is why it has been severely criticized in Florida, for example, and why NYC has moved away from such a system. The proposed regulations go further and forbid states from boosting a school's rating if it has made substantial improvement on the fourth, non-academic category. By doing so, the US Department of Education is again undermining the right of each state to determine its own rating system, and whether it chooses to provide a full or narrow picture of school performance.
Recommendation: The DoE should allow states to retain the authority given to them by ESSA to create their own rating systems, and to determine their own weighting of various factors. The federal government should be prevented from requiring that schools be labeled with a single grade just because that happens to be its own policy preference.
Sunday, June 26, 2016
ICYMI: Some Must-Reads for June
And not a word here about Brexit.
The Importance of Parent Voice
Talking about the co-opting of language and parent voice in Nashville and elsewhere.
The Reading Rules We Would Never Follow As Adults
Those rules we impose on student readers that, as adults, we would never stand for, and what that tells us about the authenticity of reading instruction.
School Reform Is Really about Land Development
Somehow this sat and stewed for about a month before it got traction. It is an absolute must-read. If you only read two pieces on this list, this should be one of them.
What's Wrong with Christie's Wrongheaded School Aid Plan?
The spectacle of Tom Moran actually calling Christie really wrong.
Failure
You can't go wrong with Alfie Kohn, who may not blog often, but every time it's well thought out and deeply important.
CBE and ALEC Preparing Students for the Gig Economy
Competency-based education is perfect grooming for the gig economy, where nobody ever has a steady job.
America's Not-So-Broken Education System
This would be the other must-read post from the week. Jack Schneider puts the whole picture in perspective and goes back to the fundamentally flawed premise of reform.
North Carolina: The Ongoing Destruction of Public Education
Every so often, it's worth taking a moment to just step back and take in the full breadth of North Carolina's continuing attack on public schools and the teachers who work in them.
When You Dial 911 and Wall Street Answers
Not directly about education, this looks at how Wall Street is taking over basic services like health care, and the miserable side effects for people who depend on those services. It will all seem distressingly familiar.
No Words Can Charm a Computer
A student writes a letter to the editor that beautifully outlines why computer-based scoring is a stupid idea.
Politicians Say They Care About Education: Now Public School Advocates Are Putting Them To the Test
The education planks that should be in every party's platform.
Vivian Connell: Her Last Post
Of all the voices that Diane Ravitch has amplified, none has been more moving and heart-wrenching than Vivian Connell, the teacher blogging about her long fight with ALS. This week she made her final post.
Saturday, June 25, 2016
CBE: Personalized Education & The Indexing Problem
There are plenty of reasons not to like Competency Based Education, which can be found these days shambling about under the nom de guerre "personalized education." It's an appealing name, as it evokes images of a student with her own personal tutor and guide, her own educational concierge. Instead, it's actually a student strapped to a personal computer screen watching a parade of adaptive software unspooling before her. As I said, there are lots of reasons not to like this, but we can skip past the philosophical issues for a moment and consider some of the technical challenges of a truly adaptive, personalized piece of educational software.
Let's talk about the indexing problem.
For over a year, my daughter worked for a start-up company that was creating a huge searchable database of art. Her job, along with that of several other folks with art history degrees, was to index and tag all the artwork in the world. The concept was that one could search from artist to artist, finding the creators and creations that connected in some meaningful way to the stuff you already knew and liked.
Lots of websites have tried to crack the recommendation code. Amazon tells you that if you bought this, you might like to buy that. Netflix will try to tell you what you might like to watch. Pandora will tell you if you want to listen to this artist, you probably also want to hear that artist. And iTunes Genius will take one song and build you a whole playlist of songs that you'll want to hear with it.
All of these recommendation systems depend on a massive system of indexing and tags. If the software thinks that you probably like "Good Vibrations" because you like sixties pop music, it may rustle up some Monkees, but if it believes you like the song because of its acid rock qualities, you may be recommended some Iron Butterfly. If it thinks the salient quality is vocal harmonies, you may get a Mitch Miller record next, but if it thinks the theremin is the key, you may find yourself listening to Bernard Herrmann's score for The Day the Earth Stood Still. How the software indexes and tags things (and how it weights those tags) makes a huge difference in what it thinks you want next.
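To make that mechanic concrete, here's a toy sketch in Python. The tag names and weights are invented for illustration (no real engine publishes its internals), but it shows the core move: the same library and the same "liked" song produce different recommendations depending entirely on how the tags are weighted.

```python
# A toy tag-weighting recommender. Tag names and weights are invented
# for illustration -- this is not any real vendor's engine.

# Each item is described by a set of tags.
LIBRARY = {
    "Good Vibrations": {"60s-pop", "vocal-harmony", "theremin", "psychedelic"},
    "I'm a Believer (The Monkees)": {"60s-pop", "vocal-harmony"},
    "In-A-Gadda-Da-Vida (Iron Butterfly)": {"psychedelic", "acid-rock"},
    "The Day the Earth Stood Still (Herrmann score)": {"theremin", "orchestral"},
}

# The weights encode which qualities the engine believes matter.
WEIGHTS = {"60s-pop": 1.0, "vocal-harmony": 1.0, "theremin": 0.2,
           "psychedelic": 0.5, "acid-rock": 0.5, "orchestral": 0.1}

def similarity(tags_a, tags_b):
    """Score two items by the total weight of the tags they share."""
    return sum(WEIGHTS.get(tag, 0.0) for tag in tags_a & tags_b)

def recommend(liked):
    """Return the item whose weighted tag overlap with `liked` is highest."""
    liked_tags = LIBRARY[liked]
    others = {item: similarity(liked_tags, tags)
              for item, tags in LIBRARY.items() if item != liked}
    return max(others, key=others.get)

print(recommend("Good Vibrations"))  # -> the Monkees track
# Raise WEIGHTS["theremin"] to 2.5 and the same call returns the
# Herrmann score instead -- same library, same listener, new "taste."
```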
If you've worked with any of these programs, you know they all share one quality-- they don't work super-well. Some are trainable-- if you have a few spare hours or days, you can sit and rate everything you've ever bought from Amazon to give the software a better idea where your preferences lie. But even software that tries to learn about you can be problematic. I regularly research charter schools for this blog, and so my browsers are convinced that I really want to see charter school ads, because they don't know the difference between positive and negative attention. I once spent a day trying to hunt down Peter Fox's "Schuttel Deinen Speck," lyrics included (don't ask), and for a week Google was certain I wanted all my search results to include German-language entries.
The problem can be complicated by library size. Pandora does not have an infinite supply of music, and some of the music it does have is more expensive than other parts of its library. So my wife's Sara Bareilles "channel" also includes old Louis Armstrong recordings (the 20s jazz Armstrong, not the 50s easy-listening Armstrong). When your iTunes has to come up with a good playlist based on just the music in your iPod, the challenge becomes even huger.
And this is before we run all of this past the weirdly specific tastes and interests of individuals. Speaking of Louis Armstrong, he was perhaps the top jazz cat of his generation, and yet, one of his favorite bands to go hear was the exceptionally square Guy Lombardo, who in fact could sell out Harlem's Savoy Ballroom. The connections that bind together an individual's tastes and preferences are often mysterious and elusive.
Companies have spent millions of dollars trying to solve the problem-- Netflix famously offered a prize to anyone who could improve their recommendation engine.
So what does this mean for personalized adaptive education?
First, to come up with the right personalized recommendation for the student, the bigger the library of possible assignments and modules, the better. After all, "Well, Pat, you've finished Module A, so let's check the software's recommendation and see if you should do Module B or Module C" isn't very personalized. True personalization calls for a near-infinite number of possible paths. If we've just got ten or twelve possible paths, that's not personalization-- it's just tracking.
But once we have hundreds of modules containing hundreds of cyber-worksheets or adapted learning activities or whatever we're going to call them, we need an absolute kick-ass indexing system, and we need an analytical engine that can figure out what the indexing system is telling us. What are the important qualities in Module A that tell us which module Pat should do next? Every indexing tag is another variable-- how will we determine which variables are the important ones? In the reading module, was it the vocabulary (and if so, which words) or was it the topic-- and if it was the topic, what about the topic? Was the reading about race cars effective or ineffective because Pat likes cars, or because Pat likes things that go fast, or because Pat has an uncle who both races cars and hunts elk? Because knowing would be the difference between a next module about fast jets or classic Studebakers or hunting elk. Did Pat respond to the sentence structure or the paragraph length? For that matter, given that Knewton seriously promised us we would know what to eat for breakfast the day of a math test, do we have to cross-index Module A against what Pat ate and who Pat played with? Have we even talked about the effect of typography on Pat's reading interest and comprehension (I'm not joking)? Did Pat respond to the voice of the writer, and would Pat respond to another piece by that same writer regardless of any other factors-- and how exactly will we index the different qualities of a writer's voice?
I'm just warming up, but you see my point. To really, truly index each module and each assignment within each module would, done well, take a gazillion person-hours and be complicated beyond belief-- and then you'd still have to come up with an artificially intelligent engine that can sort through all of those index tags and cross-reference them against the gazillion other pieces of content in its vast library.
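For a rough sense of scale, here's some back-of-envelope arithmetic in Python. Every number in it is invented for illustration-- a modest library, a modest tag set-- and the totals are still absurd.

```python
# Back-of-envelope arithmetic on the indexing burden. The figures are
# invented and conservative -- a real vendor's promised library would
# be far larger.

modules = 10_000          # instructional modules in the library
tags_per_module = 200     # qualities worth indexing per module
minutes_per_tag = 2       # a human expert judging and applying one tag

tagging_hours = modules * tags_per_module * minutes_per_tag / 60
print(f"Hand-tagging alone: {tagging_hours:,.0f} person-hours")
# Hand-tagging alone: 66,667 person-hours

# And the engine must weigh every module against every other module,
# which grows with the square of the library size.
pairwise = modules * (modules - 1) // 2
print(f"Pairwise cross-references: {pairwise:,}")
# Pairwise cross-references: 49,995,000
```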
Of course, you know who's really good at analyzing a human being's tastes and preferences and sorting the important details from the less important ones? That's right-- another human being.
You can hire my daughter and some other experts to spend over a year cataloging and indexing and just generally feeding stuff into a big computer program-- or you could sit down with her and she could ask you some questions and personally come up with some recommendations for you. You can let Genius put together playlists for you, or you can let someone who knows music and knows your tastes make you a playlist (which lacks the romance of a mix tape, but hey-- technology marches on).
You could try to create an enormous library of instructional modules with a gigantic and complex web of indexing and analytics. Or you could create a semi-large-ish library of units and a pretty small, superficial indexing system and just kind of half-ass the whole thing, then try to cover it with a whiz-bang sales pitch.
Or you could just hire a competent, well-trained, knowledgeable professional human to be the classroom teacher.
I recommend that option, personally.
PA: Cybers Are Delusional
It's been little more than a week since the bricks-and-mortar portion of the charter school industry took a big, hard swipe at its cyber-siblings. As you may recall, three major charter school groups released a "report" that was basically a blueprint for how to slap the cyber-schools with enough regulation to make them finally behave. The report was rough, noting all of the worst findings about cybers-- how they achieve no learning and actually destabilize many students.
The cyber-school industry was not amused. K12, one of the biggest chains in the largely for-profit sector, fired back with its own press release that managed to be feisty without really addressing any of the criticisms.
But in Pennsylvania, one of the Big Three of free-range cyber-school activity (Ohio and California are the other two), cybers are trying a different approach.
In what the Philly Inquirer calls an "unprecedented" move, nine of the thirteen PA cyber chains sent a letter to PA Secretary of Education Pedro Rivera saying, "Hey, can we talk?"
The letter does not exactly acknowledge the cyber school record of abject failure in PA.
"What we are proposing is an open and honest discussion on what virtual education can and cannot do, dig deeper into the data and recommendations relative to Pennsylvania, and change whatever needs to be changed to make Pennsylvania the national model for high-quality and cost-effective virtual education," Joanne Barnett, CEO of the Pennsylvania Virtual Charter School. "It's time to stop the combative nature of discourse relative to public education and work together for the benefit of the students, parents, and taxpayers."
In other words, now that we are losing this fight, we would like to call a truce.
The nine cyber chains represent about 35,000 of the roughly 36,000 cyber students in PA. Of course, exact numbers are always difficult, as one of the classic cyber games is to play hot potato with students, keeping them long enough to count for getting paid by the state, but not so long that they hurt the test numbers (or cost more money). My guidance counselor friends tell me that there are days in the year when guidance counselors and cyber-school officials literally sit at their computers and furiously pass students back and forth, like a sort of reverse eBay.
Not that it helps much. In Pennsylvania, not a single cyber school met the benchmarks for academic performance.
Despite the huge influence of charter lobbyists in Pennsylvania, cyber school operators have been sweating. At the astroturf site pacyberfamilies.org, you can read the frantic concern that cyber money might be cut by the state. And pressure to rein in the cybers is coming from local districts across the state, where cash-strapped school systems are forking over huge truckloads of cash to the cybers under one of the most generous-to-charters financial set-ups in the nation. Local schools are seeing teachers laid off, schools closed, and programs shut down, and local taxpayers are finally seeing the direct links between what they're losing and the huge payments to the cyber schools, which do not deliver and are, in fact, failing so thoroughly that even their fellow charters are deserting them.
But while cybers are signaling that they're willing to sit down and talk over some stuff, they are not signaling that they actually believe or accept any of their reported failures. At its site, the Pennsylvania Coalition of Public [sic] Charter Schools indicates that "none of the data in the report is new," and Dr. Reese Flurie, CEO of Commonwealth Charter Academy, says that the national data is too general to make state policy decisions. Well, maybe, though since the Big Three have half the cyber students in the nation, I'm not sure data about national cyber-charters is all that general compared to Pennsylvania.
And in that same piece, we find the line "Cyber charters are doing a good job of serving a student population that would otherwise fall through the cracks in the traditional system," which is a pretty thought for which there is not a shred of evidence. I will, as always, note that there are specific students for whom cybers can be a blessing. But after a decade of aggressively courting every other sort of student, those students who can benefit from cybers are a teeny tiny fraction of the business model.
"There is always room for improvement" says the PCPCS, as if that's just one of those facts of life and not an insight into their own huge and numerous failures. Pennsylvania cybers do not have "room for improvement"-- they are costly and spectacular failures that do not educate students, strip local school districts of resources, and are far more concerned about turning a profit than actually doing the job they are set up to do. They don't need to discuss tweaking. They need to explain why their continued existence should be allowed.
They'd like to cut a deal, but they'd like to avoid admitting anything in the process, keeping their money, their market share, and their illusions. Perhaps they're hoping that a willingness to talk will get them out of being forcibly reformed by the state, but if their opening position is, "Yeah, we're doing a great job. We just have to tweak a few things. Please keep writing those big checks" then they are not just trying some business maneuvering-- they are delusional. They are standing in the town square, the entire citizenry pointing and laughing at their flabby nakedness, as they try to deal with the situation by hollering, "Look, every outfit needs a little work, and we'll be happy to sit down with a tailor and discuss tweaking the outfit, but we still insist that our new clothes are splendid."
MD: State Super Gets Writing Lesson
Les Perelman is one of my heroes. For years he has poked holes in the junk science that is computer-graded writing, bringing some sanity and clarity to a field clogged with silly puffery.
We are all indebted to Fred Klonsky for publishing an exchange between Perelman (retired Director of Writing Across the Curriculum at MIT) and Jack Smith, the Maryland State Superintendent of Schools. Maryland is one of the places where the PARCC test now uses computer grading for a portion of the test results. This is a bad idea, although Smith has no idea why. I'm going to touch on some highlights here in hopes of enticing you to head on over and read the whole thing.
The exchange begins with a letter from Smith responding to Perelman's concerns. It seems entirely possible that Smith created the letter by cutting and pasting from PARCC PR materials.
In response to a question about how many tests will be computer scored, Smith notes that "PARCC was built to be a digital assessment, integrating the latest technology in order to drive better, smarter feedback for teachers, parents and students. Automated scoring drives effective and efficient scoring" which means faster and cheaper. Also, more consistent. No word on whether the feedback will actually be good or useful (spoiler alert: no), but at least it will be fast and cheap.
In responding to other points, Smith repeats the marketing claim that computer scoring has proven to be as accurate as human scoring, that there's a whole report of "proof of concept," and that report includes three whole pages of end notes, so there's your research basis.
Perelman's restraint in responding to all this baloney is, as always, admirable. Here are the points of his response to Smith, filtered through my own personal lack of restraint.
1) Maryland's long use of computers to grade short answers is not germane. Short answer scoring just means scanning for key words, while scoring an entire essay requires reading for qualities that computers are as yet incapable of identifying. Read about Perelman's great work with his gibberish-generating program BABEL.
2) Studies have shown that computers grade as well as humans-- as long as you are comparing the computer scoring to the work of humans who have been trained to grade essays like a machine. Perelman observes that real research would compare the computer's work to the work of expert readers, not the $12/hour temps that Pearson and PARCC use.
3) The research is bunk. The three pages of references are not outside references, but mostly the product of the same vendor that is trying to sell the computer grading system.
4) Perelman argues that no major test is using computers to grade writing. I'm not really sure how much longer that argument is going to hold up.
5) The software can be gamed. BABEL (a software program that creates gibberish essays that receive high scores from scoring software) is a fine example, but students need not be that ambitious. Write many words, long sentences, and use as many big words as you can think of (see the sketch after this list). I can report that this actually works with the $12/hour scorers who are trained to score like machines as well. For years, my department achieved mid-nineties proficiency on the state writing test, and we did it by teaching students to fill up the page, write neatly, and use big words (we liked "plethora" a lot). We also taught them that this was lousy writing, but it would make the state happy. Computer scoring works just as well, and can be just as easily gamed. If one of your school's goals is to teach students about going through ridiculous motions in order to satisfy clueless bureaucrats, then I guess this is worthwhile. If you want to teach them to write well, it's all a huge waste of time.
6) There's evidence that the software has built-in cultural bias, which is unsurprising because all software reflects the biases of its writers.
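To illustrate why the gaming strategy in point 5 works, here's a deliberately naive essay scorer sketched in Python. The weights are invented (it's a caricature, not Pearson's actual algorithm), but it rewards exactly the surface features described above-- length, long sentences, and big words-- while remaining completely blind to meaning.

```python
# A deliberately naive essay scorer -- invented weights, not any real
# vendor's algorithm -- that rewards only surface features.

def score_essay(text):
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    word_count = len(words)
    avg_sentence_len = word_count / max(len(sentences), 1)
    big_words = sum(1 for w in words if len(w.strip(".,!?")) >= 9)
    # Length, long sentences, and "plethora"-grade vocabulary are all
    # the machine can see; whether the essay means anything is invisible.
    return 0.02 * word_count + 0.5 * avg_sentence_len + 1.0 * big_words

short_and_clear = "Dogs are loyal. They guard the house. I like dogs."
long_and_empty = ("Notwithstanding multitudinous perspicacious considerations, "
                  "a veritable plethora of consequential ramifications necessitates "
                  "comprehensive deliberation concerning the aforementioned "
                  "circumstances surrounding the multifaceted phenomenon")

print(score_essay(short_and_clear))  # low score, actual communication
print(score_essay(long_and_empty))   # far higher score, says nothing
```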
It remains to be seen if any of these arguments penetrate State Superintendent Smith's consciousness. I suppose the good news is that it's relatively easy to teach students to game the system. The bad news, of course, is that the system is built on a foundation of baloney and junk science.
It angers me because teaching writing is a personal passion, and this sort of junk undermines it tremendously. It pretends that good writing can be reduced to a simple algorithm, and it does it for the basest of reasons. After all, we already know how to properly assess writing-- you hire a bunch of professionals. Pennsylvania used to do that, but then they sub-contracted the whole business out to a company that wanted a cheaper process.
And that's the thing. The use of computer assessment for writing is not about better writing or better feedback-- the software is incapable of providing anything but the most superficial feedback. The use of computer assessment for writing is about getting the job done cheaply and without the problems that come with hiring meat widgets. This is education reform at its absolute worst-- let's do a lousy job so that we can save a buck.
Friday, June 24, 2016
OK: An Example for All of Us
Oklahoma has taken its share of lumps in the ed debates. Its legislature is not quite as determined to burn public education to the ground as are the legislatures of North Carolina or Florida. It's not quite as committed to cashing in on the charter revolution as Ohio's. But Oklahoma remains in the grip of reformster baloney, and teachers are tired and frustrated. The word "frustrated" comes up rather a lot. And teachers are doing something with that frustration.
Word has been spreading since April-- teachers are running for elected office.
Meet, for instance, Kevin McDonald, an English teacher at Edmond Memorial High School.
“It's becoming apparent to more and more educators that to be heard we need to be in the conversation, not outside of the conversation trying to talk at people,” says McDonald.
“Teaching is what I want to do,” he said, “But I’ve come to a point where my ability to teach is being compromised by legislative decisions.”
So he's running for Senator Clark Jolley's seat. Jolley is a third-term state senator who won his last election with 79% of the vote. He serves as a member of the education committee and chairman of appropriations. It is entirely possible that he is not ripe to be unseated. But at a bare minimum, challenging the GOP senator has given McDonald a chance to put school finances into the election discussion.
And finances are a touchy subject in Oklahoma. Well, the Common Core was one source of many bunched-up panties, and Oklahomans pushed back hard on VAM, but the state has had a bad several years when it comes to teacher pay and, oddly enough, teacher recruitment and retention. John Croisant, a sixth-grade teacher in Tulsa, cites pay as a reason for running for an open House seat.
“For me, it’s personal,” said 39-year-old Republican John Croisant, a sixth-grade geography teacher in Tulsa Public Schools who said he’s seen several of his colleagues leave Oklahoma to take teaching jobs in neighboring states for more money. “It’s not that we don’t want to teach. They’re going across the border and they’re able to make $10,000 more each year for their families.”
The teachers have held public sessions to promote their candidacies and their issues, and they have been assertive about spreading the word about candidates. For instance, if you want to run down some information about the education-positive candidates, you can search through the pages of noted OK blog Blue Cereal Education for a host of candidate profiles (marked with the hashtag #OKElections16).
It's a lesson for all of us. As much as teachers tend to shy away from politics and dream of just closing the door and ignoring the world outside, it's politics that set the rules that increasingly intrude on our classrooms. Simply making a contribution to the political action committee of the union is not enough (or, in the case of some unions and some races, not even helpful). We have to speak up. We have to promote the folks that stand for what we value. We have to do our homework and make hard choices (perfect candidates, it turns out, show up as often as perfect humans).
Oklahoma's primaries are next week. My best wishes to the education candidates-- I hope they do well. But even if they don't do well, they have already done good by making the education discussion part of the political discussion. That in itself is an achievement; as we've seen in the last year, no matter how important we think education issues are, getting politicians to talk about them is like trying to get my Labrador retriever to talk about existential angst and third world monetary policy.
We have complained for decades that education discussions are being held without any teachers in the room, and we are right to complain. But it is not enough to keep waiting for our invitation to arrive-- we need to get out there and shoulder our way into the arena. Thank you to the teachers of Oklahoma who have worked to do that.
School Accountability Camps
Now that ESSA has opened the door (maybe, kind of) to new approaches to school accountability, what are the possibilities?
Mike Petrilli of the Fordham Institute has outlined four possibilities in a list calculated to help us all conclude that only one of the possibilities is really legit. The list apparently grows out of the Fordham contest to design an accountability system; I entered that competition but was not a finalist, though I swear, I'm not bitter. Someday some rich benefactor will give me a stack of money, and I'll start my own thinky tank, and then I'll have ed system design competitions all the time. So there.
Anyway, let's look at the four camps that Petrilli sees. He labels them by their supposed slogans.
Every School is A-OK!
Petrilli says that the proponents of this view are the teachers unions and "other educator groups," and it does not bode well for his list that he opens with a Gigantic Person of Straw. I read a lot of writing by a lot of people, and I'll be damned if I can think of a single person who says that every school is a-okay. So we can skip past this camp because there is not a single tent pitched there.
Attack the Algorithms
Petrilli says this camp is anti-test but pro-accountability. "It seeks a system that uses as much human judgment as possible and captures a full, vivid, multifaceted picture of school quality." These folks tend to base their systems on school inspection, which is more common in Europe than in the US-- perhaps in part because the last decade or two of US law has specifically centered on bone-headed, high-stakes, test-based formulae that do their best to root out any trace of human judgment (except, of course, the human judgment that produced the tests, the scoring of the tests, and the algorithms for crunching the test results). I won't lie. This is my camp.
Living in the Scholars’ Paradise
Yeah, Petrilli doesn't have his poker face on, either.
This approach uses sophisticated, rigorous models to evaluate schools’ impact on student achievement, making sure not to conflate factors (like student demographics or prior achievement) that are outside of schools’ control.
He's wrong, and he's wrong because he is focused on the single output of student test scores. He's willing (at last) to admit that there are factors influencing the test scores that are outside the school's control, but he's still just counting test scores, and that's an unacceptably low bar for measuring what schools do. Actually, it's not a bar at all. It's like judging pole vaulters by checking their foot placement at the point of lift-off and ignoring everything else. And, to flog the simile further, under our current testing regimen, it's like judging that foot placement by comparing it to an idea about foot placement cooked up by someone whose only expertise is that several years ago they watched a couple of pole vaulters on television.
Petrilli also wants to include, in fact give most attention to, value-added measures, and at this stage of the game, backing VAM as a good measure of school quality makes even less sense than arguing that Donald Trump should be President because of his great statesmanlike qualities. VAM is simply indefensible as a measure of school or teacher quality. Petrilli has displayed flexibility in the past; why he remains devoted to this slice of junk science is a mystery to me.
NCLB Extended, Not Ended
NCLB is gone but not forgotten. Or maybe it’s not exactly gone, in the mind of folks who yearn for Uncle Sam to mandate accountability models that obsess about achievement gaps and give failing grades to any school with low proficiency rates for any subgroups.
As always, I tip my hat to Petrilli's ability to craft a sentence to stay polite and yet still make clear that he thinks someone is full of shit. Game recognizes game.
Anyway, his portrayal of this camp is interesting in the context of the ongoing reformster battle over whose reform it is, exactly, anyway. Here he says that those lefty civil rights types just want to measure raw achievement scores so as to highlight schools that are failing non-wealthy students of color. As with category #1, I'm not sure these are people who actually exist.
Tents He Forgot To Pitch
One of the critical questions that is repeatedly overlooked in accountability discussions is:
Accountable to whom?
In other contexts, reformsters will talk about being accountable to parents-- insisting that in their perfect world, parents get all sorts of information that allows them to select the most awesomest school available. Yet none of Petrilli's camps, least of all his preferred one, spends any time at all asking parents what they want the schools to be held accountable for. And everything we know about parents (and personally, I'd like to consult all non-parental taxpayers as well, because, public schools) tells us that "high scores on standardized tests" does not top anybody's list. And look-- reformsters already know that, which is why they insist on talking about "student achievement" when they mean "test scores." They know that if we just told folks about "test scores" they would be unimpressed and unmoved, so they call it "student achievement," which brings to mind a whole wide spectrum of accomplishments and skills-- even though test scores are all that's really on the plate.
And of course, since we're talking about ESSA, what we're really talking about is accountability to the feds and (depending on how far rogue the USED is willing to go) the states. So while talk about accountability to parents sounds nice, that's not really what we're trying to do anyway. We can say that the bureaucrats at the capital are fine proxies and watchdogs for parent and taxpayer interests, but let's get real-- the suits at the capital have a whole list of political concerns (which are in turn tangled up with all sorts of lobbying and money concerns) that have nothing to do with what Mom and Dad are concerned about when they send Chris and Pat off to school.
Bottom line-- school-inspection-based, human-centered accountability gets you the kind of information that parents and taxpayers (who are, by and large, human beings) want and find useful. The scholars' paradise is really a bureaucrat's and policy wonk's paradise, one that gives primacy to the interests, concerns, and policy goals of the wonks and bureaucrats over the interests and concerns of parents and taxpayers.
Ask a parent (or taxpayer)-- Do you want to know everything you'd like to know about your local school, or would you like to be confident that the bureaucrats at the department of education know everything they want to know? I have a thought about what the answer would be.
And underlining all of this? Congress and the USED still have to finish arguing about what the department is supposed to (or allowed to) hold states and schools accountable for. Can't wait to see where that leaves us.