The rising tide of support for computer-centered competency based education rests on a computer with artificial intelligence (AI), a computer smart enough to follow, understand, and respond to the behavior and choices of the human students linked up to the system. But this presents problems.
Some are pretty obvious. Just a month ago, Microsoft hooked an AI-powered chatbot up to Twitter and watched in horror as it proceeded to tweet horrible racist comments. That was not the plan, but any AI development has to wrangle with the problem of instilling human values in a machine.
How can an AI-driven system "teach" children if it can't be instilled with human values?
There's an interesting discussion of these issues in an article posted at Slate today. It has nothing at all to say about CBE or other computer-driven education systems-- at least not directly-- but much to ponder about the business of creating a computer program that could handle the job.
There are scientists working on it; there have been since the days that Isaac Asimov designed the Three Laws of Robotics, meant to give robots something like a moral center. The article says that these folks want to achieve AI "provably aligned with human values." Which is a hugely reductive statement of the problem, because the first question we have to answer is, "Which human values?"
You may think that there are surely some clear, central core values shared by all humans, but the Slate article reaches back to work by Joseph Henrich that I've discussed here before which suggests that most of what we think of as "normal" is really just the product of our own culture. This extends not just to silly, obvious examples like how to shake hands but, as Henrich shows, actual perception-- what is an optical illusion in some cultures is not one in others.
AI has depended on knowledge bases and outward behaviors, but that approach is hugely limited. As writer Adam Elkus says in the article's opening, "Computers are powerful but frustratingly dumb in their inability to grasp ambiguity and context."
That means that AI often falls back not so much on creating intelligence as on creating a complex of behaviors that simulate intelligence but are still just the computer responding stupidly to a series of complicated instructions.
This obviously matters to more than just the people in the path of educational AI. One of the challenges for programmers trying to perfect computer-driven cars is the big question-- in an accident involving many people, which people should the AI car most try not to kill? In such an accident, the decision of whose life to try to save will not be made by the car-- it will be made by the programmer who wrote the software that tells the car which individual to "value" most.
An AI teacherbot will implement a complex algorithm, a super-rubric, and those directions will come from programmers who will include their own values, their own beliefs about how that educational moment/issue/response/thingamabob ought to be handled. "Well," you may say, "So will a human teacher. A human teacher will bring biases and views to the classroom as well."
And that's true. But the programmed-in bias of computers is an issue because A) it will most likely be put there by people who are NOT trained, experienced classroom professionals and B) like a standardized test, the computer-centered CBE program will come wrapped in a mantle of objectivity, a crown of bias-free just-the-facts-ma'am, all of which will merely be an illusion. Furthermore, it will be an illusion that cannot be challenged or modified. As a live human, I can be challenged by my students on a point; they can even convince me to change my mind as we all wrestle with context and ambiguity.
Teaching is a moral act, an act that comes with a heavy moral and ethical context. AI does not currently have that capacity, and may very well never have it. Putting an educational program under the control and guidance of an AI-flavored computer program is putting a classroom in the hands of a sociopath who literally does not know right from wrong but has only, at best, a list of rules given to it by someone, rules that it now follows slavishly.
Well, what if we have those rules written by someone who we agree is a highly moral person? Would that satisfy you, you cranky old fart?
My answer is no. Moral and ethical behavior by its very nature must deal with ambiguity and context, and it must be able to change and grow in understanding. It requires wisdom, not just intelligence. When folks push AI computers as a solution to the classroom, they are pretending to have solutions to problems that the leading minds in the computer world have not solved, and even if those solutions existed, we would still have to argue about whether or not they made a teacherbot fit for the classroom.
Friday, April 22, 2016
Duncan Finds a New Platform
Well, you knew it was going to happen. Somewhere, some thinky tank was going to bid on the right to hang Arne Duncan's shingle on their porch and give him the chance to continue his misguided, ill-informed gumflappery. Yes, the Emerson Collective has already hired him to fill a seat, but who would let him keep talking?
The winner? Ha! Trick question-- there are no winners in this transaction. But the thinky tank that will be providing a megaphone for the former Secretary of Education is the Brookings Institution.
The Duncanator will serve as a nonresident senior fellow in the Governance Studies program's Brown Center on Education Policy.
It is, on reflection, a match made in heaven. Brookings, a right-tilted thinky tank that leans heavily on the wisdom of economists, has long been a reliable purveyor of education policy nonsense. They've done "research" showing that poor kids really do suck. They have won my award for "Most Clueless Commentary on Common Core" as well as mis-predicting its future. They have cobbled together weak sauce arguments for annual Big Standardized Tests. And they have scolded the poor for continuing to fornicate.
In short, they have been consistently wrong when it comes to issues of education policy, which makes Arne Duncan a perfect fit.
Duncan will blog (no word on whether or not Brookings is providing him with an intern) and "participate in public events on relevant issues." His first gig? A forum on charter schools. Great.
"The Brown Center is proud to welcome Arne Duncan, who has demonstrated passionate leadership on education and youth development issues throughout his life and career,” said Darrell West, Vice President of Governance Studies at Brookings. “The research and activities of the Brown Center will benefit greatly from his decades of experience shaping and implementing education policy, not just at the federal but at the state and local levels as well. His perspective will help the Brown Center generate fresh ideas and new approaches to the challenges facing American schools and communities.”
If the Brown Center is looking for someone to talk about policies that failed, or the insider mechanics of pissing off Congress so badly that they commit the unprecedented act of rolling back the powers of your department, then Duncan is the man. Big win. Heckuva job, Brookings.
So this is good news for Duncan, who gets to cash in some more on his years of promoting failed policies. But it's a loss for Brookings, which as usual is kind of oblivious and doesn't seem to know that Duncan has few fans on the left or the right (AEI's Frederick Hess tweeted "Swell...another platform for him to offer up self-righteous nastiness. I wonder whose motives he'll question first."). I suppose it's a win for snarky bloggers, who will now have more material to mine on slow days. But it's a loss for everyone who has to continue to be exposed to Duncan's misguided and ill-informed thoughts about education.
MI: Let's Test Kids Into Oblivion
[Update: check the comments or this link for a bit more nuance and background from someone who lives there]
Congratulations, Michigan-- your state superintendent is nuts.
Brian Whiston was in front of state legislators last week to lay out his "vision" for education, and it's genius-- test the little buggers, all of them, into oblivion.
Where did Michigan find State Superintendent Whiston? Well, he was previously head of Dearborn Public Schools. He was a school board member for many years. And apparently he did some student teaching once. Oh, and he's won two awards-- he was Superintendent of the Year in 2014, and in 2007, he was Lobbyist of the Year. Because for part of his career he was a lobbyist for the Oakland school district (during which time he "learned some life lessons" about excessive expenses).
He did an interview with the Detroit Free Press back when he was elevated to the state level last summer, and in that he lays out some of his thoughts about education. These include ideas like model classrooms where the teacher is awesome and all other teachers can be trotted through and told, "See? Do it like this!" Let's imagine the teacher who replies, "Sure. Can I have this batch of students, too?" And he wants you to know that in Dearborn he was firing teachers all over the place, so totally working on that improvement of staff thing.
But his biggest plan of all is Top 10 in 10, Whiston's initiative to put Michigan among the top ten education states within ten years. That would be an impressive achievement, considering how far in the basement Michigan is on indicators like childhood literacy. That big strategic plan focuses on these goals:
- construct a solid and sustainable P-20 system to educate all children for success;
- meet and support the learning needs of ALL children;
- meet and support the professional needs of ALL educators;
- design systems to overcome the disparities experienced by children and schools;
- empower parents and families to actively participate in their child’s education;
- partner with employers to develop a strong, educated, and highly-skilled workforce; and
- lead and lift Michigan education through greater service from Lansing.

Because there's one other thing that Whiston feels is super-important, and he stated that clearly to the Free Press in that big interview. Talking about what he'd do right out of the gate, Whiston mentioned calling a bunch of thinky tanks together to advise him (not, of course, teachers-- who the hell needs to talk to teachers about education), and also this:

Testing is obviously something I'm going to start day one trying to work towards.

Yes, obviously, Big Standardized Tests are necessary. Which brings us to his chat with legislators Wednesday.

Because what Michigan's students need rather than, say, an actual investment of resources in their schools or the removal of the charter school boot from their financial necks or a reality-based attempt to recruit and retain teachers-- what Michigan students need more than all that is more testing.

Mind you, the Michigan Student Test of Educational Progress (M-STEP) is only on Year 2. Also, it's expensive, time-consuming, and roundly criticized for being one more crappy Big Standardized Test. A state House committee voted to cut its funding. But when a BS Test is failing, the only thing to do is test more harder.

Whiston proposes to administer the test twice a year (or maybe even more) to "get a better sense of academic progress, and inform class instruction," says the man who has never been a classroom teacher. And instead of starting in third grade, Whiston believes that "age-appropriate" testing should start in kindergarten.

You know what kind of standardized testing is appropriate in kindergarten? None. None standardized testing is appropriate in kindergarten.

So condolences to you, Michigan. A child-poisoning governor, an entrenched system of replacing democracy with emergency managers-- oh, excuse me-- with CEOs, and a state education superintendent with no classroom experience and a BS Test fetish.
Thursday, April 21, 2016
PA: Funding Follies (Part 15,263)
So, you may recall from last time, the elected capital clown car that is Pennsylvania's state government had sort of passed a budget that included an education spending increase, but had not passed rules on how to spend that extra money. Governor Tom Wolf whipped up his own plan for how to divvy up the money, only his plan didn't so much "divvy it up" as it "dumped most of it on a handful of select school districts" and also technically "ignored the elected legislature and their lawmaking powers." This made it unpopular with very many people. Very many.
Wolf's theory was that since some districts were particularly deep in a financial hole (thanks to the last two administrations, though Wolf prefers to blame it on just the last one), we need some restorative budgeting. In other words, if school funding is a race, Wolf wanted everyone else to just kind of sit on the curb and wait while a few people in the back of the pack catch a ride and join up.
The problem-- well, one of the problems-- as some folks tried to tell Wolf in a meeting or two, is that way more school districts are feeling Big Time Hurt than just those who made the Wolf Special Care List (a list which, frankly, looks more like a list of districts that have been pulling notable bad press-- Philly, Chester Uplands, Wilkinsburg-- than a carefully researched collection).
On top of that, as I previously warned/noted/predicted, Pennsylvania is just chock full of people who hate hate HATE having tax dollars yanked out of their pockets and sent off to Philly or other Big Cities. We can argue all day about justice and fairness and intra-state financial support, but the bottom line is that the issue is a guaranteed political turd bomb in Pennsylvania.
And so the House and Senate put together a spending bill of their own, passed it with a veto-proof margin, and sent it off to the governor. As with the budget, he can sign it or just let it become law while he sits in the corner and makes a pouty face.
This is good news for every school district that wasn't on Wolf's list (which is most of them) as they'll see more money-- maybe even enough to help offset the effects of all the borrowing, cutting and finagling that districts had to do to weather the nine-month Harrisburg budget storm.
The only good news for the state is that, miraculously, the nation's most expensive legislators actually managed to work across party lines and accomplish something. The bad news is that now that last year's budget has taken ten months to fully settle, we are already behind on the next budget-- and there isn't the slightest sign that anybody in Harrisburg has learned a thing from this mess that might help with the next mess. Standard and Poor's thinks so too-- even as the long-overdue budget was limping across the finish line, S&P was threatening to downgrade PA's rating even further, rather than lifting it.
Meanwhile, Wolf has managed to make himself almost completely irrelevant to the budgeting process, and Pennsylvania's school funding system, which is fundamentally messed up, remains unaddressed. You can say we're moving forward, if circling the drain is a forward-ish sort of motion.
Big Brother in a Box
Are you excited about the prospect of computer-centered competency based education? Are you an administrator whose fondest dream is to sit in your office, managing every aspect of your school by way of a big shiny bank of computer screens? Well, here's just one example of the many companies eager to make a buck helping you achieve your vision. Meet Schoolrunner.org.
Schoolrunner promises, well, everything. Time for teachers. Administrator bliss. Power of parents. Student success. Those are all their headlines, not mine. And as we break it down more, the picture becomes at once more vivid and more terrible.
Evidence based academics. Because academics are now based on, I don't know-- tea leaves and palm readings? But Schoolrunner promises "Don't just view results, elicit actionable insight from your academic data." Because we all love to elicit actionable insight.
Track student behavior. We will "log, view and communicate behavioral performance." Simplify attendance. It is possible I'm doing attendance wrong, because I thought it was pretty simple already. Empower your students. Apparently by letting them look at some of their own data files. But wait-- there's more.
Easy-to-consume data. Consume by whom, one wonders, but Schoolrunner promises to "make molehills out of mountains" which doesn't even-- I mean, what does that even mean? Reduce large amounts of data to small meaningless blips?
One system to do it all. One system to find them. One system to bring them all and in the darkness bind them. Put all your data eggs in our special cyber basket!
Configure your goals. Figure out the purpose of everything and lock it into the Big Brother Box.
Above and Beyond School Management. "More than just a management system" is what you have to keep saying to sell this multi-limbed management octopus. Don't try to sell it by declaring, "Now we will control everything." Definitely don't follow with a maniacal laugh. Instead, keep insisting that if you can have centralized monitoring and control of everything everyone does in the district, you will "create the highest level of achievement for your students." Always remember that system domination is For The Children.
If you want to look at a more fleshed-out pitch for this sort of uber-management, Schoolrunner has a lovely "white paper" entitled "Five Ways SMART SCHOOLS Are Using Data To Drive Performance." (I don't know why they yell "SMART SCHOOLS"-- perhaps they're just very excited.)
So what are these five golden rings of data enabled awesomeness?
1) Transparency.
The opening example is uncompelling. Apparently, if you keep actual records of student behavior problems, when a parent calls, you can use those specifics to talk to the parent. Also, if you serve food in the cafeteria, students are more likely to find it at lunch time.
They go on to argue that with data transparency, students can tell how they're doing, families receive an "in-depth look into their child's education," teachers can "immediately discern trouble areas for students," and administrators can-- well, let me hold onto that one for a second. Students can use the data, for sure. Parents in some families (you know-- the ones where parents and children don't communicate much) will benefit. The teacher who needs this should not be a teacher. If the answer to, "How is Chris doing in class" is "I won't know until I check the data read-out," I have my doubts about how much the data read-out will really help you.
Administration? Well, administrators "can see the performance of both their students and their staff in real time." Emphasis mine. This suggests that this system means to keep the teachers chained to their computers at all times, so that administrators can see what the teacher is up to. This seems like twelve kinds of a bad idea, showing little trust and reducing teachers to mindless widgets. Mindless widgets make lousy teachers no matter how great a system they're chained to.
2) Culture.
Numbers don’t create culture. If numbers created culture, salons would be run by math books. People create culture. Understanding how and why people make decisions improves the relationship within your school’s community.
And then they explain how you use the numbers to see if you made the right culture choices or not. So numbers don't create culture, but they must be used to measure and justify it. Baloney.
3) Efficiency
Everyone is familiar with the concept of doing more with less.
Yikes. From that inauspicious opening, they move on to explain that having a super-duper data system frees up teachers from having to spend all their time massaging data. One school used centralized data and that led to a "holistic view of their students at a global level." So, wow. Also, in the end they learned that they could actually do more with less. So I think maybe they meant to say "productivity" instead of "efficiency," which is just as well, because efficiency is actually the enemy of excellence. The most efficient system is one that manages to hit the high side of mediocrity and the low side of cost. This is not a great target for a public school system.
4) Access
If you have data in a computer system, people can see it. That seems to be the point here. Illustrations include a school nurse who can look up a policy on vomiting students or can see that a student turns up sick every day at the same time. Because without computers, nobody would ever know these things?
Being able to get "the information you want, when you need it" is a pretty good selling point, but I'm not sure we need Big Brother in a Box to do that.
5) Action
There is no need to rely on gut feeling, intuition, or spidey-sense when you know exactly where your strengths are and how you can leverage those strengths to address the pain-points that have crept into your school. Data generates actionable intelligence.
I'd be more inclined to say that there is no need to rely on some number-crunching data-shoveling program that may or may not have been written by someone who knows what they're doing if instead you can use the sense nature gave you and the ability to pay attention to the carbon-based life forms around you.
"Gut feeling, intuition and spidey-sense" are just dismissive ways to refer to experience, intelligence, sensitivity, emotional intelligence, alertness, and awareness. Can you always use a different perspective and another set of eyes? Absolutely. But if your "gut" is so lousy that you think a computer program would be better, then 1) you should be in another line of work and 2) your "gut" is also not smart enough to make good use of whatever the computer program tells you.
Never trust any system, ever, that sets a goal of removing human judgment from the business of dealing with humans. First, the "removal" is a lie-- any such system merely substitutes the judgment of the system creators for the judgment of the humans on the ground. Second, you can never remove human judgment from situations that run on human judgment, so your real question is how to get the best human judgment in play.
Spoiler alert-- the best way is not to try to create a system that makes all educational and behavioral decisions for the classroom teacher while putting the driver's seat in some office where the school's CEO can sit and manage everything on a big bank of computer screens.
Who already uses this?
Schoolrunner proudly announces that they are "driving student success at the nation's most progressive schools." You may first want to ask exactly how one "drives" student success, and why one would describe the process in a way that seems to reduce the actual student to an inanimate object. But after that, of course, you'll ask, "And which are the nation's most progressive schools, pray tell?"
Well, the listed winners are the Milwaukee Collegiate Academy, Choice Foundation, Achievement School District, KIPP: Houston, and Crescent City Schools. Crescent City and Choice Foundation are both New Orleans charters (Crescent City is actually partnered with RelayGSE, so you know they are super-reformy).
Exactly what about these charters is progressive will remain a mystery for now, but it's easy to see why a system like Schoolrunner would appeal to a charter operator. You don't need highly trained, experienced or skilled teachers at all-- just unpack Big Brother in a Box, sit down at your desk, pull up your dashboard, and you are running a whole school!
This is competency base, computer controlled schooling at its worst. Dehumanizing, one-sixe-fits-all, sterile and yet one more version of school that you will never find the wealthy submitting their own children to.
Schoolrunner promises, well, everything. Time for teachers. Administrator bliss. Power of parents. Student success. Those are all their headlines, not mine. And as we break it down more, the picture becomes at once more vivid and more terrible.
Evidence based academics. Because academics are now based on, I don't know-- tea leaves and palm readings? But Schoolrunner promises "Don't just view results, elicit actionable insight from your academic data." Because we all love to elicit actionable insight.
Track student behavior. We will "log, view and communicate behavioral performance." Simplify attendance. It is possible I'm doing attendance wrong, because I thought it was pretty simple already. Empower your students. Apparently by letting them look at some of their own data files. But wait-- there's more.
Easy-to-consume data. Consume by whom, one wonders, but Schoolrunner promises to "make molehills out of mountains" which doesn't even-- I mean, what does that even mean? Reduce large amounts of data to small meaningless blips?
One system to do it all. One system to find them. One system to bring them all and in the darkness bind them. Put all your data eggs in our special cyber basket!
Configure your goals. Figure out the purpose of everything and lock it into the Big Brother Box.
Above and Beyond School Management. "More than just a management system" is what you have to keep saying to sell this multi-limbed management octopus. Don't try to sell it by declaring, "Now we will control everything." Definitely don't follow with a maniacal laugh. Instead, keep insisting that if you can have centralized monitoring and control of everything everyone does in the district, you will "create the highest level of achievement for your students." Always remember that system domination is For The Children.
If you want to look at a more fleshed-out pitch for this sort of uber-management, Schoolrunners has a lovely "white paper" entitled "Five Ways SMART SCHOOLS Are Using Data To Drive Performance." (I don't know why they yell "SMART SCHOOLS"-- perhaps they're just very excited).
So what are these five golden rings of data enabled awesomeness?
1) Transparency.
The opening example is uncompelling. Apparently, if you keep actual records of student behavior problems, when a parent calls, you can use those specifics to talk to the parent. Also, if you serve food in the cafeteria, students are more likely to find it at lunch time.
They go on to argue that with data transparency, students can tell how they're doing, families receive an "in-depth look into their child's education," teachers can "immediately discern trouble areas for students," and administrators can-- well, let me hold onto that one for a second. Students can use the data, for sure. Parents in some families (you know-- the ones where parents and children don't communicate much) will benefit. The teacher who needs this should not be a teacher. If the answer to "How is Chris doing in class?" is "I won't know until I check the data read-out," I have my doubts about how much the data read-out will really help you.
Administration? Well, administrators "can see the performance of both their students and their staff in real time." Emphasis mine. This suggests that this system means to keep the teachers chained to their computers at all times, so that administrators can see what the teacher is up to. This seems like twelve kinds of a bad idea, showing little trust and reducing teachers to mindless widgets. Mindless widgets make lousy teachers no matter how great a system they're chained to.
2) Culture.
Numbers don’t create culture. If numbers created culture, salons would be run by math books. People create culture. Understanding how and why people make decisions improves the relationship within your school’s community.
And then they explain how you use the numbers to see if you made the right culture choices or not. So numbers don't create culture, but they must be used to measure and justify it. Baloney.
3) Efficiency
Everyone is familiar with the concept of doing more with less.
Yikes. From that inauspicious opening, they move on to explain that having a super-duper data system frees up teachers from having to spend all their time massaging data. One school used centralized data and that led to a "holistic view of their students at a global level." So, wow. Also, in the end they learned that they could actually do more with less. So I think maybe they meant to say "productivity" instead of "efficiency," which is just as well, because efficiency is actually the enemy of excellence. The most efficient system is one that manages to hit the high side of mediocrity and the low side of cost. This is not a great target for a public school system.
4) Access
If you have data in a computer system, people can see it. That seems to be the point here. Illustrations include a school nurse who can look up a policy on vomiting students or can see that a student turns up sick every day at the same time. Because without computers, nobody would ever know these things?
Being able to get "the information you want, when you need it" is a pretty good selling point, but I'm not sure we need Big Brother in a Box to do that.
5) Action
There is no need to rely on gut feeling, intuition, or spidey-sense when you know exactly where your strengths are and how you can leverage those strengths to address the pain-points that have crept into your school. Data generates actionable intelligence.
I'd be more inclined to say that there is no need to rely on some number-crunching data-shoveling program that may or may not have been written by someone who knows what they're doing if instead you can use the sense nature gave you and the ability to pay attention to the carbon-based life forms around you.
"Gut feeling, intuition and spidey-sense" are just dismissive ways to refer to experience, intelligence, sensitivity, emotional intelligence, alertness, and awareness. Can you always use a different perspective and another set of eyes? Absolutely. But if your "gut" is so lousy that you think a computer program would be better, then 1) you should be in another line of work and 2) your "gut" is also not smart enough to make good use of whatever the computer program tells you.
Never trust any system, ever, that sets a goal of removing human judgment from the business of dealing with humans. First, the "removal" is a lie-- any such system merely substitutes the judgment of the system creators for the judgment of the humans on the ground. Second, you can never remove human judgment from situations that run on human judgment, so your real question is how to get the best human judgment in play.
Spoiler alert-- the best way is not to try to create a system that makes all educational and behavioral decisions for the classroom teacher while putting the driver's seat in some office where the school's CEO can sit and manage everything on a big bank of computer screens.
Who already uses this?
Schoolrunner proudly announces that they are "driving student success at the nation's most progressive schools." You may first want to ask exactly how one "drives" student success, and why one would describe the process in a way that seems to reduce the actual student to an inanimate object. But after that, of course, you'll ask, "And which are the nation's most progressive schools, pray tell?"
Well, the listed winners are the Milwaukee Collegiate Academy, Choice Foundation, Achievement School District, KIPP: Houston, and Crescent City Schools. Crescent City and Choice Foundation are both New Orleans charters (Crescent City is actually partnered with RelayGSE, so you know they are super-reformy).
Exactly what about these charters is progressive will remain a mystery for now, but it's easy to see why a system like Schoolrunner would appeal to a charter operator. You don't need highly trained, experienced or skilled teachers at all-- just unpack Big Brother in a Box, sit down at your desk, pull up your dashboard, and you are running a whole school!
This is competency-based, computer-controlled schooling at its worst. Dehumanizing, one-size-fits-all, sterile, and yet one more version of school that you will never find the wealthy submitting their own children to.
Wednesday, April 20, 2016
Comparable Measures
So apparently I'm writing a series about teacher evaluation this week. This will stand on its own, but if you want more context, you can work your way backwards starting here.
One of the holy grails of ed reform is comparability. The aim is a score or grade or rating that allows us to say definitively that Hypothetical High School is a better school than Imaginary Academy, that Pat O'Furniture teaching third grade in Iowa is a better teacher than Teachy McTeacherson teaching tenth grade Spanish in Maine.
But we're also looking for evaluations that provide useful information, and therein lies one of the major problems in the evaluation world these days.
The more comparable a measure is, the less useful it is.
Comparable measures must be reductive. In order to compare the elementary teacher in Iowa and the language teacher in Maine, we have to reduce the measure to elements that both teachers possess. This means that the measure must be simple, and it must ignore most of what makes each teacher unique.
This evaluation problem is mirrored by the challenges of student assessment in a classroom. For an example, let me talk about grading writing assignments. I do a multitude of assignment types in my classroom, but for our purposes, let's focus on one particular type.
Many of my students' essays are scored with a modified six traits writing rubric. The rubric breaks writing down into six different qualities; additionally I use a modified rubric that breaks each of the six into two or three sub-categories, for a grand total of fifteen specific characteristics of the writing. Those sub-scores provide a slightly richer assortment of data for the students and for me about where their strengths and weaknesses lie on a particular assignment. But I can't really compare that batch of fifteen scores easily. If I want to compare and rank the "best" writers, I need to combine the scores into raw totals. But those raw totals, while easy to compare, provide little useful information. I can say that Pat "ranks" one point higher than Chris, but that doesn't help either of them improve their writing, and the raw comparison doesn't show that while Pat has a strong voice but lousy technical control, Chris is a good technician but cold and boring.
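The information loss is easy to see in a toy example. Here's a tiny sketch (invented names, invented scores, and just six traits rather than my fifteen sub-categories) showing two writers who are identical by raw total but nothing alike as writers:

```python
# Hypothetical rubric scores (1-5 per trait) for two imaginary students.
# These names and numbers are made up for illustration only.
pat   = {"ideas": 4, "organization": 3, "voice": 5, "word_choice": 4,
         "fluency": 3, "conventions": 2}
chris = {"ideas": 3, "organization": 4, "voice": 2, "word_choice": 3,
         "fluency": 4, "conventions": 5}

# Collapsing to raw totals makes the writers "comparable"...
print(sum(pat.values()), sum(chris.values()))  # both total 21

# ...but the totals erase exactly the profile a writer needs to improve:
for trait in pat:
    if pat[trait] != chris[trait]:
        print(f"{trait}: Pat {pat[trait]}, Chris {chris[trait]}")
```

Both students "score" 21, so the comparable measure calls them interchangeable, while the trait-by-trait view shows one has voice and the other has conventions.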
And the most useful feedback and evaluation for both is actually a one-on-one conference with me (hard to squeeze in, but now and then I manage) which involves discussion and give and take and reflection and plans for future approaches. These are exceptionally useful, and completely non-comparable (unless, of course, we apply some reductive tool that "helps" me turn the conference into a score, but then we've lost everything that was useful to the student about the conference.)
But wait, you may say. Doesn't that mean that our traditional grades are also reductive and pretty unuseful to the students? And I will say, yes, you are correct, but let's save that (more radical) discussion for another day.
Comparable measures can be useful, and do have their place when they are used in ways that acknowledge how narrow they are. Need to know which student is tallest or most consistently shows up to class on time? We can do that.
But complex human behaviors can't be reduced to comparison-ready measures without losing most of what matters in the translation. Not only are we talking about a complicated array of many different qualities, but those qualities themselves can cut in both positive and negative ways. It is one of the oldest observations about human character-- a person's greatest strength and most terrible weakness can be both sides of a single coin. I am a pretty solid and dependable guy; I am also pretty dull and unexciting. Two sides of the same coin. If the measurement system only weighs the coins without considering how they turn, we've missed important information.
Teachers teach different students. They teach different material. They teach it in different ways. They bring different strengths and weaknesses to the classroom, and those in turn may be weaknesses or strengths depending on what is in that classroom. We can't evaluate a teacher in isolation from all other factors any more than we can decide whether or not a man is a good husband if he's not in any sort of relationship.
If our goal is to do teacher evaluations that are helpful and useful, that help teachers develop and strengthen and grow their teaching skills, tools, and talents, then we must recognize that any such instrument will not yield easily comparable results. My question to reformsters is simple-- would you rather help Teachy McTeacherson do the very best teaching she can, or do you want to be able to compare her to Pat O'Furniture? Which do you think will best serve the needs of the child? Because you can't do both at once. It's possible (though I have to mull some more) that you can't do both at all. A yardstick can measure consistently, clearly and accurately-- but only in one dimension, and teaching never happens in just one direction.
NPE: Teacher Voices on Teacher Evaluation
The Network for Public Education was founded in 2013 by Diane Ravitch and Anthony Cody as an advocacy group for...well, public education. It has become a powerful networking connection for those of us who are public education advocates, and while it has been vocal in speaking out against education reform balderdash, NPE also has a full positive agenda of things that they support.
They have also produced some reports (including a state by state report card) and a new report released just last week. "Educators on the Impact of Teacher Evaluation" is a rarity in the world of reports on the world of education in that it involves the voices of actual classroom teachers. The very first paragraph puts the whole business of teacher evaluation in context with the current state of education:
Teachers choose the teaching profession because of their love of children and their desire to help them grow and blossom as learners. Across the nation, however, far too many educators are leaving the classroom. Headlines report teacher shortages in nearly every state. One factor reported in almost every story is the discouragement teachers feel from a reform movement that is increasing pressure to raise student test scores, while reducing support. This pressure dramatically increased with the inclusion of student test scores in teacher evaluations, with some states using them to account for as much as 50% of evaluation scores. When combined with frameworks, rubrics, and high-stake consequences, the nature of teacher evaluation has dramatically changed, and narratives from educators across the United States document that it has changed for the worse.
NPE commissioned a study, and the researchers they hired eventually received responses from almost 3,000 teachers. Here are some of the findings of the research:
* Nobody much likes VAM or rubric-based data-generators like those based on the work of Danielson and Marzano.
* A whopping 84% of teachers report spending more time on evaluation, bringing teachers closer to those Dilbert-esque office workers who have to stop working on projects in order to create reports to explain why they aren't making more progress on the project.
* Being data driven translates to spending more time with spreadsheets and numbers than with colleagues and humans.
* Over half the respondents reported seeing active bias against veteran teachers. This surprised me, and I guess it shouldn't have, since it makes sense that in the current tight-budget environment, an experienced teacher is an expensive teacher. On top of that, veteran teachers are also more likely to call baloney when they see the next reformy lunch platter headed in.
* New teacher eval systems have been particularly hard on non-white teachers, which would be bad news in the best of times, but even worse news these days when the lack of teachers of color is a serious problem in the US school system.
* Professional development is making things worse. Not a surprise, particularly in states like mine where the rule is that it only counts as a required PD hour if it has something directly to do with raising test scores.
The report makes six recommendations.
1) Stop using student test scores for teacher evaluation. Absolutely.
2) Top-down collaboration is an oxymoron. Don't tie mandated and micromanaged teacher collaboration to evaluation.
3) The observation process should focus on reflection and dialogue as tools for improvement. One of my favorite lines in the report-- The result should be a narrative, not a number.
4) Less paperwork. This is not just a teacher problem. My administrators essentially have to stop doing all their other work for several weeks out of the year just to get their evaluation and observation paperwork done. Forms and forms and forms and forms for me, and ten times that many for them. Again-- do you want us to do our job, or do a bunch of paperwork about what we would be doing for our job if we weren't busy with the paperwork?
5) Take a good hard look at how evaluation systems are affecting veteran teachers and teachers of color.
6) Burn down the entire professional development system. Okay, that's my recommendation. NPE is more restrained-- decouple PD from the evaluation system and attach it to things that actually help teachers do their jobs.
That's the basic outline. There are more details and there are, most of all, actual quotes from actual teachers. I have read so many "reports" and "white papers" and "policy briefs" covering many aspects of education policy over the last few years, and the appearance of a teacher voice is rarer than Donald Trump having a good hair day and displaying humility at the same time. That alone makes this report valuable and useful. I recommend you read the whole thing.