Why shouldn't Bill Gates spend his money terraforming the education landscape? Why shouldn't rich guys use their power and influence to promote the issues that they care about? Haven't rich powerful guys always done so?
These are not easy questions to answer. After all, Rockefeller, Carnegie and others made hugely important contributions to the American landscape, legacies that have continued to benefit Americans long after these dead white guys moved on to Robber Baron Heaven.
How is Gates different? This post by Mercedes Schneider (whose blog you should already be following) helped me see one significant difference.
Rockefeller and Carnegie (the dead white guy philanthropists I'm most familiar with) helped invent modern philanthropy by discovering some basic issues. Mostly, they discovered that when people hear you want to give away money, the world beats a path to your door. So they set up various entities whose job was to accept, filter and respond to the applications for big bucks that various groups sent to them, based on a set of criteria that the rich guys developed out of A) their own set of concerns and B) the opinions of knowledgeable people in their fields. That's how Rockefeller, a white guy who believed in homeopathic medicine, ended up revolutionizing the study of medical science and building a higher education system for African-Americans.
This is not how the Gates Foundation does business.
Where classic philanthropy says, "Come make your pitch and if we like your work, we will help support you," the Gates Foundation says, "We have a project we want to launch. Let's go shopping for someone to do that for us."
From the Gates Foundation Grantseeker FAQ:
Q. How do I apply for a grant from the foundation?

A. We do not make grants outside our funding priorities. In general, we directly invite proposals by directly contacting organizations.
There is also this:
Q: Who makes decisions on investments and when?

A: As part of its operating model, the foundation continues to delegate decision making on grants and contracts to leaders across the organization. With our new process, decision makers are identified at the early stage of an investment. Check-in points are built in to help ensure that decision makers are informed about and can raise questions during development, rather than holding all questions until the end.
I know it says "investments," but we're still on the foundation's Grantseeker FAQ page, in the section that talks about how various data and progress reports will be used along the way as grant recipients complete whatever project Gates is funding.
We pick the project, we approach the people we want to have do it, we bankroll it, and we supervise it until completion. The Gates Foundation model looks less like philanthropy and more like corporate subcontracting.
This model explains a few issues about the Gates approach.
Why do so many edu-groups funded by Gates seem to have no existence outside of doing Gates work? Because Gates isn't looking to find people already running proven programs that could use a financial boost, but instead is looking to sow money and reap groups doing exactly what Gates wants to have done. "I've got a gabillion dollars here to give to a group that will pilot and promote an unproven educational technique! I'd like to pay you guys to set that up for us."
Occasionally Gates does work with a pre-existing group, but often this is a matter of shopping for someone who can provide brand recognition, like AFT or NEA. But those "grants" are still predicated on "I have a project I want you to do for us" and not "Let me help support the good work you're already doing."
This is far different from Rockefeller's "I've got a gabillion dollars to spend promoting Black education in the South. Find me some people who are doing good work in the field that I can help expand with this money."
The Gates Foundation model is astroturf philanthropy.
Look, if you're a rich guy who loves anchovy pizza and you want to use your clout, that's fine. If you open the door for successful anchovy pizza makers to apply for grants so they can expand, that's super. But if you decide that you are going to fund a whole new anchovy pizza plant, and hire health department inspectors to get all other pizza makers condemned, and hire consultants to flood the media with bogus reports about the healthful effects of anchovy pizza, and create other consulting firms to push legislation outlawing everything except anchovies on pizza-- if you do all that, you are not a philanthropist. You're just a guy using money and power to make people do what you want them to.
Rockefeller, Carnegie and the rest were not saints, and it's arguable whether their philanthropic benefits offset their robber baronical misbehavior. But when it came to running a corporate-based oligarchy, they were small-timers compared to the folks at the Gates.
Monday, February 24, 2014
Sunday, February 23, 2014
What's Not To Love About Pre-K
One of the most recent ed-issues du jour is Pre-K. There's a great deal of political and public support for early childhood education these days, but I find much of it far more troubling than encouraging. While the data on the success of Pre-K programs could be called mixed, there are motivations behind the current push that indicate it should be feared and resisted.
Investment Opportunity
One of the appeals of Pre-K for investors is that there is no pre-existing institution that has to be bulldozed first.
Turning public education into an investment opportunity has been a long, arduous process. Discrediting public schools, buying up enough political clout to dismantle the public system, aggressive marketing to steal public ed "customers"-- it has taken a lot of time to break down a cherished American institution in order to create investment opportunities.
But the Pre-K landscape is only occupied by a handful of relative lightweights. It's the difference between building your new Mega-Mart on an empty lot and having to condemn and clear a residential neighborhood. Easy pickings!
Brand Extension
Yes, I see what you did there. We've stopped calling it Pre-School because that would indicate that it isn't actually going to be school. But that's not where the push is going.
Instead, we have politicians deciding that since kindergartners are having trouble meeting the developmentally inappropriate standards of CCSS, the problem must be that they aren't "ready" for kindergarten. So we have the spectacle of people seriously suggesting that what four-year-olds need is some rigorous instruction, and of course THAT means that we'll need to give those four-year-olds standardized tests in order to evaluate how well the program is going.
It's like some sort of unholy alliance between people who won't be happy until they're selling eduproduct to every child in this country and people who won't be happy until we've made certain that no child in this country is ever wasting time playing and enjoying life.
More Pipeline
The Big Data machine needs more data. Right now we can only plug your child in when she reaches age five. Oh, but if we could only get our hands on those children sooner. Even a year sooner would be an improvement. Pre-K programs will allow more data collection and a fatter file for each child.
Don't you want to know what career your four-year-old is best suited for? Don't you want to be certain that your four-year-old is on track for college? Then let us add another link to the Big Data Pipeline.
There's no question that, done correctly, Pre-K can be a Good Thing. Anecdotally, I tell friends who are obsessing over it that I could never look at my eleventh grade classroom and tell you which students had pre-school and which did not. But, still, putting a small child in a rich environment to play and socialize and learn a few things couldn't hurt.
However, I'm convinced that a vast number of the people currently pushing Pre-K have no intention whatsoever of doing things right. Instead, what many politicians and thought leaders and hedgucators are supporting is an extension of CCSS/reformy stuff baloney to four-year-olds.
So support Pre-K if you wish, but be damn sure that the people you're agreeing with are people you are actually agreeing with.
Friday, February 21, 2014
Testing Resistance & Reform Spring: Three Simple Goals
There's a new coalition in the ed world, one that you should be hearing more about. Here's the meat from their first press release:
Widespread resistance to the overuse and misuse of standardized testing is exploding across the nation. Testing Resistance & Reform Spring (TRRS) is an alliance of organizations that have come together to expand these efforts in order to win local, state and national policy changes: Less testing, more learning.

To ensure that assessment contributes to all students having full access to an equitable, high-quality education, we unite around three goals:

1) Stop high-stakes use of standardized tests;

2) Reduce the number of standardized exams, saving time and money for real learning; and

3) Replace multiple-choice tests with performance-based assessments and evidence of learning from students' ongoing classwork ("multiple measures").
There's a lot to love about this. Let me look at those three goals:
STOP HIGH STAKES USE OF STANDARDIZED TESTS
There is no justification for this use of standardized tests. There never has been. The high stakes use of the test exists for only one purpose-- to force students and teachers to take the tests seriously. Making these tests high stakes is the last desperate action of a speaker who can't get the crowd to listen, so he finally threatens to shoot them if they won't shut up.
REDUCE THE NUMBER OF STANDARDIZED EXAMS
Is there seriously anybody who doesn't think this is a good idea? Other than, of course, the people who make money selling exam programs to schools. This year, because we have moved PA's Big Test from 11th to 10th grade at my school, I will get to teach my students an entire unit more than I have been able to include since we started testing. They will get at least two weeks' worth of additional education.
There are reformers claiming that we need to lengthen the school day or the school year. But we can just as easily put more hours back into education by wasting less time on costly, time-consuming tests.
REPLACE BUBBLE TESTS WITH REAL ASSESSMENT
Fans of the High Stakes Testing sometimes speak as if there would be no measuring of students at all if not for the big bubble tests. But of course classroom teachers are already doing constant, complex, nuanced assessment that is directly tied to what is being taught. Is it so crazy to suggest that we could just use it?
TRRS has an action website and an impressive list of members, including Fair Test, United Opt Out, Parents Across America, Save Our Schools, and the Network for Public Education. It has a clear mission, and as more parents get to meet PARCC, SBA, and their bastard cousins, more communities are realizing that the mega-testing program cannot stand as is.
When people are up to no good, or simply don't know what they're talking about, you get twisted overblown jargonized gobbledygook. Compare the rhetoric of testing fans to the three simple goals laid out above. The time has come to make this happen. Proponents have said, "Well, don't tell us what you're against. What are you for?" There it is. Plain and simple. Come join the resistance.
Standardized Testing Sucks
I am not a testing scientist. There are bloggers and writers and people who frequent the comments section of Diane Ravitch's blog who can dissect the science and the stats and the proper creation and forming and parsing of testing and testlettes and testicles (okay, maybe not those). I'm not one of those people; Mercedes Schneider has undoubtedly forgotten more about testing than I ever learned in the first place.
But I do believe standardized testing, testing that operates on a level beyond the local, sucks. And I don't just mean that it is unkind or obnoxious or oppressive. I mean that it just doesn't work. It does not do what it sets out to do.
Years and years ago, Pennsylvania launched state-wide testing. Not the PSSAs, but the PSAs. One of the first to be rolled out was the PSA writing test. Students in fifth, eighth, and eleventh grade across the state responded to a nifty prompt. These were all gathered up, and the state assembled a Holiday Inn's worth of Real Live Teachers to score papers for a weekend.
I was there for two of those years. It was kind of awesome in a way that only an English teacher could find awesome. We received some training on the kind of holistic rubric scoring that we all now know and-- well, know. And then we sat at tables and powered through. In exchange, we received a free weekend at a nice hotel with food and a chance to meet other teachers from across the state (one year we also received an "I scored 800 times in Harrisburg" pin-- again, English teacher geek awesomeness).
But the PSAs ran up against a problem from the get-go-- students recognized that there was no reason to take them seriously.
And so the state started looking for ways to FORCE students to take the state tests seriously. Make schools count them as grades. Give cool diploma stickers to the best scorers. Make the tests graduation requirements. And hire a company, not actual teachers, to score the test. Students of history will note that these ideas never quite went away.
But when you have to force somebody to take you seriously, when you have to threaten or bully people into treating something as if it's important, you've already acknowledged that there is no good reason for them to take you seriously. And that is why standardized testing sucks.
I am not opposed to data collection and assessment. I do it all the time in my room, both formally and informally. I don't test very much; mostly my students do what we're now calling performance tasks-- anything from writing papers to designing websites to standing up and presenting to the class. My students generally do these without much fuss, and I think that's because they can see the point. Sometimes they can see me design the task in front of them ("Our discussion of the novel headed off in this direction, so let's make the paper assignment about this idea...").
My students know an inauthentic bogus bullshit assessment task when they see one. They know the SAT is bogus, but they have been led to believe it holds their future ransom, so they do it anyway (and we know that after all these years of development, it still doesn't predict college success better than high school grades-- do PARCC and SBA really think they'll do better?). And the state has tried to place the High Stakes Test between students and graduation so that students will take the test seriously, but they still recognize it as inauthentic malarkey. If you hold someone hostage and agree to release her if she kisses you, you are a fool to turn around and claim that the kiss is proof that she loves you.
Standardized testing is completely inauthentic assessment, and students know that. The young ones may blame themselves, but students of all ages see that there is no connection between the testing and their education, their lives, anything or anyone at all in their real existence. Standardized tests are like driving down a highway on vacation where every five miles you have to stop, get out of the car, and make three basketball shot attempts from the free throw line-- annoying, intrusive, and completely unrelated to the journey you're on. If someone stands at the free throw line and threatens you with a beating if you miss, it still won't convince you that the requirement isn't stupid and pointless.
And so the foundation of all this data generation, all this evaluation, all this summative formative bibbitive bobbitive boobosity, is a student performing an action under duress that she sees as stupid and pointless and disconnected from anything real in life. What are the odds that this task under these conditions truly measures anything at all? And on that tissue-thin foundation, we build a whole structure of planning students' futures, sculpting instruction, evaluating teachers. There is nothing anywhere that comes close in sheer hubristic stupidity.
To make matters worse, the structure that we've built is built of bad tests. Even if students somehow decided these tests were Really Important, the data collected would still be bad because the tests themselves are poorly designed, untested, unvalidated abominations.
It is great to see the emergence of Testing Resistance & Reform Spring, a new coalition of some of the strongest voices in education on the testing issue. They've come out in favor of three simple steps:
1) Stop high-stakes use of standardized tests;

2) Reduce the number of standardized exams, saving time and money for real learning; and

3) Replace multiple-choice tests with performance-based assessments and evidence of learning from students' ongoing classwork ("multiple measures").
These three goals are an essential part of taking back our public schools and dislodging the most toxic of the reformy stuff that has infected education over the past decade. It's a movement that deserves widespread support. Let's get back to assessment that really means something.
Up Against the Data Wall
This picture has been scooting around Twitter, just the most recent egregious example of one of the more odious techniques attached to the CCSS/testing regime-- the Data Wall.
The data wall is a logical extension of Reformy Stuff's complete misunderstanding of how tests work and how human beings are motivated. A Data Wall makes perfect sense if you believe A) students are primarily Data Generation Units and B) human beings are best motivated by shame and bullying.
The Data Walls were inevitable. After all, we're well past the point where we decided that generating a bunch of cool numbers with badly designed invalid junk tests and then publishing those numbers in the newspaper would be a most excellent way to motivate teachers. Why would we not want to do the same with students?
Sure, everything we actually know about human motivation says that this is wrong. And the technique of combining useless tests, bad data, and public shaming has not yet produced any useful results in any of the school systems where it has been tried with teachers.
But we've learned that one of the SOP's of the Masters of Reforming Our Nation's Schools is that when something you really believe clashes with reality, it is time to bash reality in the face. If your latest technique failed, then you don't need to adjust-- you just need to fail harder.
Most of the examples that we have seen of this practice show at least a passing respect for privacy issues, or at least for the lawyers who make money suing over them. And a while back somebody had a minor internet hit with a Data Wall about the educational qualifications of Gates, Duncan and Rhee (spoiler alert: none). But these things won't go away. A look at some of these terrible public displays of student results can and should be read over at Edusanity. And Valerie Strauss addressed the wrongness of it all last week.
Maybe, as the MoRONS usually would have us believe, we just haven't pushed it rigorously enough (because, you know, of our unaccountable urges to coddle six-year-olds). Are there ways we could make Data Walls even betterer? Sure-- here are some thoughts--
Data Dress Codes. If you are Below Basic, you must wear the Below Basic uniform, a sort of middling grey. Basic students may add black and white to the palette. Proficient students may wear primary colors, and Advanced students can have a full range of colors, including tie-dye.
Data Recess. If you are Proficient or Advanced, you can play a base or pitch in playground softball. Below Basic students sit on a special Below Basic Bench. Basic students play left field.
But hey-- if rigorous shaming toward excellence is good for kids, why not apply it to adults as well?
In Congress, we could have a giant data wall charting which legislators have passed the most bills. Or, since data walls often post meaningless junk data, let's post things like gallons of coffee used per office. Let's go to law firms and put a big chart in the lobby showing billable hours per lawyer. Let's make banksters start using transparent accounting-- so transparent that the accounting of each firm is posted ten stories high on the side of office buildings.
Let's bring this into homes. At the end of each street, we can post data about each couple that lives on the block-- how much they make, how many times they make love per month, what they eat for each meal, how many times they've been ill, and from what, and let's collect the data from every source we can, including gossip and bad guesses.
I mean, hell, we could just record all that information, every personal scrap of data, no matter how stupid, insignificant, personal, private, meaningless, important, whatever, from whatever source- no matter how unreliable-- and place that data in the cloud, to follow the people around for every day of their lives, visible to all sorts of people who get to decide things like employment and health insurance.
Oh, no, wait. We're already working on that.
Suddenly I get it. Data walls aren't just an indefensible abuse of children. They aren't just a way to make school a bit more hostile and unpleasant, a way to shame and bully the most fragile members of our society. They're also a way to acclimate children to a brave new world where inBloom et al track their data from cradle to grave and make it available to all sorts of folks. Where privacy is a commodity that only the rich can afford.
Data walls are deeply and profoundly wrong. There is no excusable reason on God's Green Earth for them to exist. They may represent a small battle in the larger reformy stuff war, but they are a direct assault on our students, and they should stop, now, today.
The data wall is a logical extension of Reformy Stuff's complete misunderstanding of how tests work and how human beings are motivated. A Data Wall makes perfect sense if you believe A) students are primarily Data Generation Units and B) human beings are best motivated by shame and bullying.
The Data Walls were inevitable. After all, we're well past the point where we decided that generating a bunch of cool numbers with badly designed invalid junk tests and then publishing those numbers in the newspaper would be a most excellent way to motivate teachers. Why would we not want to do the same with students?
Sure, everything we actually know about human motivation says that this is wrong. And the technique of combining useless tests, bad data, and public shaming has not yet produced any useful results in any of the school systems where it has been tried with teachers.
But we've learned that one of the SOP's of the Masters of Reforming Our Nation's Schools is that when something you really believe clashes with reality, it is time to bash reality in the face. If your latest technique failed, then you don't need to adjust-- you just need to fail harder.
Most of the examples that we have seen of this practice show at least a passing respect for privacy issues, or at least the lawyers who make money suing over them. And a while back somebody had a minor internet hit with a Data Wall about the educational qualifications of Gates, Duncan and Rhee (spoiler alert: none). But these things won't go away. A look at some of these terrible public displays of student results can and should be read over at Edusanity.'And Valerie Strauss addressed the wrongness of it all last week.
Maybe, as the MoRONS usually would have us believe, just haven't pushed it rigorously enough (because, you know, of our unaccountable urges to coddle six-year-olds). Are there ways we could make Data Walls even betterer?? Sure-- here are some thoughts--
Data Dress Codes. If you are Below Basic, you must wear the Below Basic uniform, a sort of middling grey. Basic students may add black and white to the palette. Proficient students may wear primary colors, and Advanced students can have a full range of colors, including tie-dye.
Data Recess. If you are Proficient or Advanced, you can play a base or pitch in playground softball. Below Basic students sit on a special Below Basic Bench. Basic students play left field.
But hey-- if rigorous shaming toward excellence is good for kids, why not apply it to adults as well?
In Congress, we could have a giant data wall charting which legislators have passed the most bills. Or, since data walls often post meaningless junk data, let's post things like gallons of coffee used per office. Let's go to law firms and put a big chart in the lobby showing billable hours per lawyer. Let's make banksters start using transparent accounting-- so transparent that the accounting of each firm is posted ten stories high on the side of office buildings.
Let's bring this into homes. At the end of each street, we can post data about each couple that lives on the block-- how much they make, how many times they make love per month, what they eat for each meal, how many times they've been ill, and from what, and let's collect the data from every source we can, including gossip and bad guesses.
I mean, hell, we could just record all that information, every personal scrap of data, no matter how stupid, insignificant, personal, private, meaningless, important, whatever, from whatever source-- no matter how unreliable-- and place that data in the cloud, to follow the people around for every day of their lives, visible to all sorts of people who get to decide things like employment and health insurance.
Oh, no, wait. We're already working on that.
Suddenly I get it. Data walls aren't just an indefensible abuse of children. They aren't just a way to make school a bit more hostile and unpleasant, a way to shame and bully the most fragile members of our society. They're also a way to acclimate children to a brave new world where inBloom et al track their data from cradle to grave and make it available to all sorts of folks. Where privacy is a commodity that only the rich can afford.
Data walls are deeply and profoundly wrong. There is no excusable reason on God's Green Earth for them to exist. They may represent a small battle in the larger reformy stuff war, but they are a direct assault on our students, and they should stop, now, today.
Wednesday, February 19, 2014
DVR Corrects Course
Dennis Van Roekel today let loose on the NEA Today website with what represents a big set of admissions for him, and what for many of us wins a Captain Obvious merit badge. Regarding the CCSS:
I am sure it won’t come as a surprise to hear that in far too many states, implementation has been completely botched
Well, the "no surprise to hear" part is pretty obvious. And we've been saying the rest for a while. So how big a shift does today's commentary actually represent?
The opening paragraphs can be dismissed, I think, as face-saving revisionist history. New lipstick on the same ugly damn pig. "The CCSS came out and educators leapt forward like good soldiers, embracing the standards with joy blah blah blah but it turns out the bureaucrats muffed the implementation, and you know, we told them not to do that!!" Okay, fine. That, combined with the note of "like a good lifelong learner, I've been listening to teachers and learning what it looks like on the ground" is probably the closest we'll get to an apology, and I'm okay with that. Politics. It's what's for breakfast, and he still washes it down with the Kool-Aid.
But gone is the factoid about widespread teacher support. Now we're talking about widespread teacher non-preparation for the core, and the non-support teachers are getting with implementation. It sounds a lot like the standard "The standards are swell; it's just an installation problem" so far, but somewhat feistier than in the past.
A few grafs later, he arrives at the sixty million dollar question:
Where do we go from here?
DVR acknowledges that lots of folks want NEA to call for scrapping the standards. And it would be easy to go along with the critics on the left and the right (one bonus point for admitting they all exist), but we don't want to go backwards. Specifically, we don't want to go back to the bad old days of NCLB and teaching to tests and bad bubbling.
DVR, you do know that there were schools before NCLB. We could go back a mere fourteen years and find ourselves back in the age of authentic assessment, an approach that had potential but was snuffed out by NCLB. So, minus one point for ignoring the full range of options.
He moves on to some specifics. Work with teachers. Stop giving old bubble tests that don't match the new standards. Involve teachers in developing some of this stuff.
And in fact the whole thing would be way too weak to mean much (other than that DVR is sliding one step closer to living in the same reality as the rest of us), except for one thing. And I am going to hold DVR to that one thing, because if we get that, none of the rest matters.
DVR has a list of seven items NEA wants from "policymakers" (DVR first artfully sidesteps the issue of whether it's states, feds, or corporations that are driving this bus), and at the number one spot, we find this:
1. Governors and chief state school officers should set up a process to work with NEA and our state education associations to review the appropriateness of the standards and recommend any improvements that might be needed.
Can we just tattoo that across the sky? Paint it on DVR's face?
The other six are just arble-garble about testing and proper field-testing and accountability and probably ploughing the road for NEA's Helmsley-fund financed partnership with PARCC and SBA, but I don't care and I'm willing to ignore it, because if we get a do-over on the standards, if we get a state-level method of revising the standards to suit that state with teachers in an actual position to affect the process-- I would do the kind of happy dance that would embarrass grandchildren that aren't even born yet. Rewrite the standards? With the states, not the USDOE? I have to say, I don't hate that idea.
There will be a ton of parsing of DVR's release today, but for me, that one point is the bombshell. Because the standards are the foundation of everything else. And, done correctly, everything else must wait for the standards to be finished and fixed. I have no illusions about the likelihood of that happening easily or even at all. I'm just happy that my national union has even just one thing on the table that I can support. There's an awful lot of platitudinous baloney on this new plate, but for the moment, I'm going to ignore it and focus on the yummy chocolate chip cookie that I can see.
I am already reading the cries that it is too weak and too late, and there's absolutely no question that it's both. But at this point, there are only two options-- being too late, or staying too wrong. You can't fix Too Late. Absent a time machine, DVR can't undo his ongoing period of wrong-headed quackery. At this point the best we could get would be Too Late But Absolutely Right. Too Late But Slightly Less Wrong isn't perfect, but it's still better than Still Dead Wrong And Unwilling To Talk About It. Sometimes better is all you get.
UPDATE: Well, it took DVR about a week to backtrack on this and walk back the most interesting and worthwhile parts. Here's the scoop on that.
Tuesday, February 18, 2014
TNTP Enters the Evaluation Game
The New Teacher Project was a Michelle Rhee spin-off from TFA. While TFA is all about shiny new 22-year-old temps, TNTP has thrown its focus toward recruiting more mature candidates looking to change careers (people who have actually held a job). TNTP has long indicated that it believes that some teachers are better than others, and that public education needs a reliable tool for spotting the winners. This has been most thoroughly expressed in their two-- I don't know-- research projects? PR pieces? Prospecti? Ad campaign programs? The Widget Effect and The Irreplaceables.
TNTP has the same root problem with teacher evaluation as TFA-- they love testing, they love Value-Added, and they already think they know who the Good Teachers are, so the evaluation tool must give an answer that checks out against what they already believe to be true. (This technique is known as The Not Very Scientific Method).
These days TNTP shares TFA's desire to bring diversity to classrooms (which is, if nothing else, a more easily-defensible PR position), and like all good supporters of the status quo, they are determined to fight the status quo.
But today they have taken another step in their quest for the appearance of excellence by releasing the TNTP Core Teaching Rubric. And because it's a snow day in my neck of the woods, I've been perusing this document.
The TNTP Core Teaching Rubric streamlines today’s bloated rubrics to bring the same focus and coherence to classroom observations that the Common Core brings to academic standards.
TNTP's premise is that current rubrics are too big and messy and give the observationator way too much to do, and I can hear Danielson-burdened principals across the country say, "No shinola, Sherlock!" And let me give TNTP credit, because if their goal was to come up with a more light and airy rubric, they have scored a big win.
The rubric scores teachers across four areas. They are:
· STUDENT ENGAGEMENT: Are all students engaged in the work of the lesson from start to finish?
· ESSENTIAL CONTENT: Are all students working with content aligned to the appropriate standards for their subject and grade?
· ACADEMIC OWNERSHIP: Are all students responsible for doing the thinking in this classroom?
· DEMONSTRATION OF LEARNING: Do all students demonstrate that they are learning?
So, okay. Students engaged? Fine. I know research says there's no actual correlation between engagement and learning, but my teacher intuition agrees with everybody else's-- student engagement is good.
But essential content? We're seriously proposing to evaluate teachers based on whether or not they are covering the CCSS? You're right, TNTP-- there is not yet enough micromanaging of classroom teachers. Let's evaluate them on how well they allow themselves to be micromanaged.
"Are all students responsible for doing the thinking in the classroom?" Oh, good lord. I know somewhere in my head that these reformers prefer that teachers not think, but to just come out and say it is.... I don't know. Rude. Still, I think the taxpayers in my district would prefer that students not do ALL the thinking in my classroom. (And just to be clear, no, I didn't misplace the "all." If I say "I'll do the driving" or "She'll do the cooking," that does not indicate a shared task.) Later the document describes this element in terms that make a little more sense, but that is an ongoing issue as well-- it's a short document, but it lacks internal consistency, as if each page was composed in a separate office.
Demonstration of Learning. And so we've hit all the basic reformer food groups. One part something that's supportable, one part bureaucratic nonsense, one part pedagogical nonsense, and now, one part something so obvious that only someone who knew nothing about teaching would think it needs to be pointed out. Oh, and twelve parts essential elements that have been left out because the creators don't know any better.
"Each performance has three components." We will be checking an essential question, descriptor language, and core teacher skills. The essential questions are close in wording to the descriptions above. The descriptor language is one more five-column rubric breaking all of these areas into specifics. As is typical of these holistic scoring tools, it takes an array of multiple details that allows for 152,633 possible configurations (I'm just roughly estimating here) and crams them into five different scores. For those of us who have been steeped in holistic scoring, it's not really as impossible as it seems.
The core teacher skills part is actually my favorite, because it's where the rubric backslides from its clean and simple lines. In this area, we try to reverse engineer what we think the teacher did in order to get the student behavior. For instance, if all the students demonstrate that they are learning, can we trace that back to teacher core skills of leading instruction, checking for understanding of content, and responding to student misunderstanding? Is it possible that, in keeping with the spirit of CCSS math, a teacher could arrive at the correct result, but not in the correct manner? At any rate, the teacher skills are not supposed to be part of the evaluation, but part of the conversation about the results.
As this is a pilot program, users are invited to "take what you learn from a pilot to inform ongoing training and norming. And please tell us what you learn" at an email address. You're invited to change the language of the rubric to fit your local context, and reminded that this should be one of "multiple measures of performance." You didn't think we were going to leave student test scores out, did you?
Is there a research basis for this? Why, sure. It's the standard reformy model. In this case, TNTP leans on their experience training teachers for the field, but the formula is the same. We know that these are Excellent Qualities because Excellent Teachers use them, and we can identify those Excellent Teachers because they are the ones using Excellent Qualities. Though it should be noted that only a very few should receive the super-duper seal of excellent excellence, modeled on the winners of TNTP's Fishman Prize (an absolutely awesome name for a prize even though I'm sure the actual trophy is nowhere near as cool as the one I imagine).
So there you have it. Not evil or nefarious. Just kind of sloppy, ill-considered, and generally mediocre. Once we all get our school districts to volunteer to do TNTP's field testing for free, we'll have yet another superlative tool for evaluating teachers into such a state of excellence that they won't know what hit them.