Friday, March 23, 2018

Opening Night

This weekend finally winds down my spring performance season. A couple of weeks ago we presented a two-school joint production of Shrek; tonight is opening night for my school's annual variety show. We call it the "Broadcast," named (as far as anyone knows) after The Big Broadcast, a 1932 variety film that kicked off a series of Big Broadcast movies through the decade.

I've been directing the show for a bunch of years now. It's a pleasure to work with students in an arena that is so directly organized around performance. We have singers, dancing groups, and any number of odd surprises. One year we included a student who had bought a sitar on line and taught himself to play it. This year we've got some bucket drummers and a trio of T Rex dancers. MCs each year develop and write their own sketches to tie the evening together. My main function is to be a glorified traffic cop and get all of this to flow smoothly (I'm also the stage crew adviser).

My other major function is to make sure that each act is the best it can be. While this involves some broad standards (have energy, know your material, don't sing in keys unrelated to that in which your accompanist is playing, etc), preparing a student variety show is the very definition of a non-standardized activity. Each singer approaches her material with her own personality and style. High levels of precision are more attainable for some dancing groups than for others (and for girls dancing in inflatable T Rex costumes, precision really isn't the point). Each performer has personal goals, brings personal skills and choices to the attainment of those goals, and responds to different styles of coaching and directing. How I work with them is also a function of the relationship that we have (or don't), as well as my own judgment about how far and hard I can push them without making things worse instead of better (demoralizing a performer pretty much never elicits their best work). Sometimes as we work together, we come up with really cool pieces of inspiration; sometimes the performers find their own way to a sweet spot outside the box (I wish I could show you Zoe and Matt doing Coldplay's "Yellow" with guitar and upright bass).

A full book show is, of course, even more challenging. I'm fortunate in that my years on the co-op show have been spent with a woman who is a hugely gifted director, who gets just how to create a picture, what movement to incorporate, how to pull all the pieces together. It's a combination of so many little details in the service of a bigger vision, all put together with the work of students who are, generally, not exactly fully mature performers. But there are moments when things come together, when we find space for bits of inspiration and improvisation, on top of the adjustments that one must make when translating vision into reality.

I've been doing school and community theater for thirty-some years, learning a little more every single time. But there's one thing I'm absolutely certain of.

A show could not be put together by a piece of software.

You could not give each performer some sort of standardized test that would generate data that in turn would determine how people should be cast and how they should go about developing their performance. You could not, in place of rehearsal, have everyone come in and take an on-line performer quiz that would generate a personalized set of software-generated performance improvement instructions. You could not even have the computer "watch" the student perform and then generate a personalized performance report.

The whole idea of a computer "directing" the school show is so transparently foolish it's hard to tell which parts to mock first, but in particular, think about how a student's performance would change if they were performing not for a live human, but for a piece of software. As a performer, it's very hard to get excited about performing for an audience that won't get excited back at you. Software doesn't know the student, doesn't have the background of knowledge to place the student's goals in the context of music in general, doesn't have the understanding to gauge what an audience may or may not respond well to, knows neither the difference between nor the proper timing of holding a hand or kicking a butt. Software doesn't know how to build the trust necessary to convince a performer "It may seem awkward at first, but trust me, this will help the performance."

Nor is it possible to compare all performers on some standardized scale. Was Tyler's dry, wry turn as Gomez Addams "better" than Kate's moving and layered portrayal of Belle? Did Forrest "beat" both of them by stopping the show with a single grunt as Lurch? Did the tap dance routine from ten years ago rate "higher" than last year's modern jazz-flavored dance group? And how do I compare either of them to the rock band that covered "99 Luftballons," or Maddy playing "Budapest" on ukulele? Exactly how would any piece of software crank out a 4-point-scale rating for any of these?

Bottom line: a computer is too dumb, too ignorant, too not-human to direct your school show.

Now. Does anybody really think that teaching students in a classroom is all that different from directing them on a stage?

I don't. And for all the reasons I believe that software would make a terrible show director, I think software makes a lousy teacher, no matter how "personalized" or algorithm-driven it is, no matter what super-duper data wingdoodles it has.

Spare a thought for us tonight. My students have some traditions to uphold, and they are more than ready to meet the challenge. There are many, many parts to teaching; this is one of the best.

Thursday, March 22, 2018

Teacher Evaluation: Plus or Minus?

Matthew Kraft is an Assistant Professor of Education and Economics at Brown University, and it says something about where we are today that there is even such a job. I look forward to universities hiring someone as a Professor of Microbiology and Sociology, or a Professor of Astrophysics and Cheeseburgers. The notion that economists are automatically qualified to talk about education continues to be one of the minor plagues besetting us these days.

But I digress.

Kraft is in Education Next making some big, sweeping statements about his research:

When I present my research on teacher evaluation reforms, I’m often asked whether, at the end of the day, these reforms were a good or bad thing. This is a fair question—and one that is especially important to grapple with given that state policymakers are currently deciding on whether to refine or reject these systems under ESSA. For all the nuanced research and mixed findings that concern teacher evaluation reforms and how teachers’ unions have shaped these reforms on the ground, what is the end result of the considerable time, money, and effort we have invested?

I think I could skip right ahead to a conclusion here, but Kraft has created a nice little pro and con list that both helps us address the question and gives a quick and dirty picture of what reformsters think they have done in the teacher evaluation world. So let's see what he's got.

The Pro List

So here's what Kraft thinks are the "positive consequences" of new evaluation systems.

Growing national recognition of the importance of teacher quality.

Is that a consequence of new evaluation systems? Were there that many people wandering about before saying, "You know, I don't think it matters at all whether school teachers are awesome or if they suck"? Granted, we did have one group dedicated to the notion that we should have a system in which it doesn't matter which teacher you get, that every class should be "teacher-proofed," and that if we do all that well enough, we can park any warm body in a classroom. But those were the reformsters, and I'm not sure they've gotten any wiser on this point.

A slight shift toward the belief that some teachers are better or worse than others.

Again, I'm not sure this is news to ordinary civilians, but lord knows that reformsters have been complaining loudly and constantly that schools are loaded with Terrible Teachers who must be weeded out. How is this a positive, exactly?

The widespread adoption of rigorous observational rubrics for evaluating instructional practice that provide clear standards and a common language for discussing high-quality instruction.

Nope. This is not a positive. Reducing the evaluation of teacher quality to a "rigorous rubric" is not a positive. Academics and economists like it because it lets them pretend that they are evaluating teachers via cold, hard numbers, but you can no more reduce teaching to a "rigorous rubric" than you can come up with a rubric for marital success or parental effectiveness.

For that matter, not only is this not a positive, but there's not much evidence that it has actually happened in any meaningful new way. We've seen lots of teacher eval rubrics (aka checklists) before (get out your Madeline Hunter worksheet, boys and girls) but they never last, because they turn out to be bunk. But at the moment, rubrics and checklists still take a back seat in most districts to Big Standardized Test scores soaked in some kind of VAM sauce. So this item from the pro list is wrong twice.

New administrative data from student information systems that, linked to teacher human resource systems, allow administrators and researchers to answer a range of important questions about teacher effectiveness.

Holy jargonized bovine fecal matter! This is also not a positive. But it's a sign of where this list is headed.

More and better (albeit still imperfect) teacher performance metrics to inform important human capital decisions made by administrators.

See? By the time you're talking about "human capital decisions," you've lost the right to be taken seriously by people who actually work in education. Plus, this is just purple prose dressing up the old reformster idea that we should be using teacher evaluation data to decide who to hire and fire, which is old sauce and not a positive because Kraft has mistyped "albeit still imperfect" when what he surely means is "still completely invalid and unsupported."

Increased attention to the inequitable access to highly effective teachers across racial and socio-economic lines.

First, the "data" on relative awesomeness of teachers at poor schools is almost impossible to take seriously because A) it's based on crappy BS Testing data and B) comparing teachers at wealthy and poor schools is like comparing the speeds of people running down a mountain to the speeds of people running up one.

Second, we don't need a lot of hard data to know that non-wealthy, non-white schools get less support or that state and district funding systems inevitably shortchange those schools. If you can only believe that because you see numbers on a spreadsheet-- I mean, I guess it's swell that you've finally figured it out, but damn, what is wrong with you?

Increased turnover among less effective teachers.

You have no idea whether that is happening or not, because you have no way of knowing which teachers are doing well and which ones are doing poorly. The mere fact that you assume awesomeness or non-awesomeness is a permanent state for every teacher shows that you don't understand the issues involved here.

That's the pro list. And it's pretty much all bunk.

The Con List

What downside has there been to the evaluation revolution?

The loss of principals' time to a formal evaluation process and paperwork that (often) have little value.

That is correct. New evaluation systems have created a host of hoops for administrators to jump through, most of which serve no local purpose, but are simply there to satisfy a state bureaucracy's need to see numbers on forms.

The erosion of trust between teachers and administrators-- trust that would be useful for real ongoing professional development.

That is correct. Because so many modern reformster evaluation systems were designed with the idea of weeding out all the Terrible Teachers, and because those evaluation systems are often based on random data that the teacher's job performance doesn't actually affect (looking at you, BS Test scores), teachers view the whole process with distrust. One of the most powerful things an administrator can say to a teacher is, "How can I help you do the kind of job you want to do?" These evaluation systems stand directly in the path of any such interaction.

An increased focus on individual performance at the potential cost of collective efforts.

I'm giving Kraft a bonus point for this one, because too many reformsters refuse to acknowledge that their evaluation systems set up a kind of teacher thunderdome, a system in which I can't collaborate with a colleague because I might just collaborate myself out of a raise or a job. Because a school doesn't make a profit, all teacher merit pay systems must be zero sum, which means in order for you to win, I must lose. This does not build collegiality in a building.

Decreased interest among would-be teachers for entering the profession.

There are certainly many factors at play here, but Kraft is right-- knowing that your job performance will be decided by a capricious, random and fundamentally unfair system certainly doesn't make the profession more attractive.

The costs associated with teacher turnover, particularly in hard-to-staff schools.

This is correct. Once teachers have been driven out or fired, schools cannot just go grab new teachers off the Awesome Teacher Tree in the back yard. Costs associated with turnover include a lack of stability and continuity at the school, which is not helpful for the students who attend.

I'd add that Kraft has missed a few, but most notably, the waste of time, money, and psychological energy on a system that doesn't provide useful or accurate information, but which presents teachers with, at a minimum, an attack on their own sense of themselves as professionals and, at a maximum, an attack on their actual earning power, or even career. When it comes to teacher evaluation, we are spending a lot of money on junk.

Looking at the balance sheet

From my perspective, teacher evaluation reforms net a modest positive effect nationally. While my judgment is informed by a growing body of scholarship, it is also subjective, imprecise, and colored by my hope that the negative consequences can be addressed productively going forward.

As I often tell one of my uber-conservative friends, "We see different things here." Since none of the positives strike me as convincing or compelling, while all of the negatives strike me as accurate, if understated, I see the balance as overwhelmingly negative.

Kraft does ask some good questions in his concluding section. For instance, would schools, teachers and students be better off if states had not "implemented evaluation reforms at all"? It's a useful question because it reminds us that the current sucky system replaced previous sucky systems. But the critical difference is this:

Previous sucky evaluation systems may not have provided useful information about teachers (or depended on being used by good principals to generate good data). But at least those previous systems did not incentivize bad behavior. Modern reform evaluation systems add powerful motivation for schools to center themselves not on teachers or students or even standards, but on test results. And test-centered schools run upside down-- instead of meeting the students' needs, the test-centered school sees the students as adversaries who must be cajoled, coached, trained and even forced to cough up the scores that the school needs. The Madeline Hunter checklist may have been bunk, but at least it didn't encourage me to conduct regular malpractice in my classroom.

So yes-- everyone would be better off if the last round of evaluation "reforms" had never happened.

Kraft also asks if "the rushed and contentious rollout of teacher evaluation reforms poison the well for getting evaluation right."

Hmmm. First, I'll challenge his assumption that rushed rollout is the problem. This is the old "Program X would have been great if it had been implemented properly," but it's almost never the implementation, stupid. There's no good way to implement a bad program. Bad is bad, whether it's rushed or not.

Second, that particular well has never been a source of sparkling pure water, but yes, the current system made things worse. The problems could be reversed. The solution here is the same as the solution to many reform-created education problems-- scrap test-centered schooling. Scrap the BS Test. Scrap the use of a BS Test to evaluate schools or teachers or students. Strip the BS Test of all significant consequences; make it a no-stakes test. That would remove a huge source of poison from the education well.

Wednesday, March 21, 2018

To Facebook Or Not

In light of the most recent revelations about Facebook, folks are once again re-evaluating their relationship with the social media 800-pound gorilla. Should I be on there? Should I promote my social group, my blog, my hobbies on there?

I'm an early adopter. I hopped on when my daughter was a student at Penn State, back in the days when Facebook was only used by certain colleges and universities, and membership was open only to students and family members. It was glorious-- a tool that allowed us to stay in touch, show each other Cool Stuff we had seen. It was far more immediate and authentic than writing letters.

Over time it became increasingly complicated and complex, with the gates periodically opened to new groups of users, new utilities added, new ways to waste time on Facebook developed. I watched Facebook aggressively suck all manner of media and activity into its orbit and, like all the other online giants, try to create an inclusive ecosystem so that users would never have to leave. In many ways, Facebook was a leader in a race to become, as one wag put it, the new AOL.

My interest in the online world was already well-formed by the time I walked into Facebookland. It may have been in part a coincidence of history that the online world was ramping up just as I was figuring out how to get through weekends when my children were with their mother and I was in the house alone. I spent time on the old Prodigy BBS system, made friends on ICQ, read the adventures of early online adopters like one celebrity who wrote a terrible letter that would not die or go away.

It seemed fairly obvious in those days that human beings and their ability to create content of any sort, even if it was just filling up a message board or a chat channel (yeah, remember when we called things on line "channels"?), were a desired commodity. It seemed obvious that the online "community" deal was that you traded pieces of yourself for new connective capabilities. It seemed obvious that all of us who used these services were products.

How so many people lost sight of that, or failed to figure it out, is another discussion. But lose sight of it they did. People of my generation attribute magical powers and knowledge to digital natives, but the fact is, the vast majority of digital natives are dopes about online life, imagining that they are entitled to secrecy and privacy on line. It is a measure of the seductiveness of online life that the promise of secrecy and privacy has almost never been explicitly made, and yet so many people implicitly believe in it.

The internet is not private. It never has been. That's the first thing you have to understand about going there. The second thing to understand is that everything on line is essentially forever. I've told my students this over and over-- the secret to a happy internet life is to understand that everything you do is public and permanent. I guess the third thing to understand is that people are becoming increasingly creative about how to mine your online self for data. 

Now, I'm not saying that if your privacy has been violated by Facebook or any other app it is your own fault. It's reasonable to assume that all of these companies will take steps to protect user privacy and data. But it's practical to assume that one way or another, they will fail. There's nothing wrong with telling a friend, "I'm going to leave this stack of money next to you while I run to the store. Will you keep an eye on it?" But it's a little silly to be shocked and surprised if some of the money is gone when you get back.

Every online activity is really a transaction. This blog's platform is owned by Google, and by running it and drawing in umpteen thousand views, I am making Google money (which is why my son-in-law says I really should be running ads here). But in exchange, I have had an opportunity to spread some words, raise some awareness, and create a tiny piece of noise for a cause I deeply believe in, and make important connections with other people similarly concerned. I'm satisfied with the balance on that transaction.

Likewise, promoting this blog via Facebook has helped me find more audience for my cause. I also use Facebook to maintain connections with old friends, students, and family. My older children live far away, and I have cousins that I've been lucky to see in person once or twice a decade. I get to see my grandchildren grow up. Thanks to Facebook, those connections are all stronger. I know I'm making money for Zuckerberg, but on balance, I'm satisfied with the value I'm getting out of the transaction.

Mind you, I'm thoughtful about what I post, and I keep an eye on my security settings. I don't generally take silly quizzes (which exist mostly to get you to give up access to your data in exchange for finding out which vegetable you most resemble). I'm aware that my digital pocket is being picked every day.

In fact, that sort of visibility is one of the reasons that I will keep maintaining an active Facebook page for this blog-- I want the data miners to know that there are people who care about public education and resisting the ed reform movement. I'm not delusional-- I know that this blog has a smaller footprint than, say, people who are concerned about what Justin Bieber is wearing today. But if I'm not here, my cause becomes slightly less visible, marginally easier to ignore. Am I using a tool that is morally compromised? Yes, certainly. I am not aware of a single piece of modern computer technology that isn't. I wish compromise and transaction weren't necessary to function in the modern world, but as near as I can see, they are. So I will continue to weigh the benefits against the cost, try to make my choices mindfully, and for the time being, use Facebook with full awareness that it is also using me.

Tuesday, March 20, 2018

AEI: Voiding the Choice Warranty

The American Enterprise Institute has a new report that calls into question one of the foundational fallacies of the entire reform movement. Think of it as the latest entry in the Reformster Apostasy movement.

Do Impacts on Test Scores Even Matter? Lessons from Long-Run Outcomes in School Choice Research asks some important questions. We know they are important questions because some of us have been asking and answering them for twenty years.

Here are the key points as AEI lists them:

For the past 20 years, almost every major education reform has rested on a common assumption: Standardized test scores are an accurate and appropriate measure of success and failure.

This study is a meta-analysis on the effect that school choice has on educational attainment and shows that, at least for school choice programs, there is a weak relationship between impacts on test scores and later attainment outcomes.

Policymakers need to be much more humble in what they believe that test scores tell them about the performance of schools of choice: Test scores should not automatically occupy a privileged place over parental demand and satisfaction as short-term measures of school choice success or failure.

Yup. That's just about it. The entire reformster movement is based on the premise that Big Standardized Test results are a reliable proxy for educational achievement. They are not. They never have been, and some of us have been saying so all along. Read Daniel Koretz's book The Testing Charade: Pretending To Make Schools Better for a detailed look at how this has all gone wrong, but the short answer is that when you use narrow, unvalidated, badly designed tests to measure things they were never meant to measure, you end up with junk.

AEI is not the first reform outfit to question the BS Tests' value. Jay Greene was beating this drum a year and a half ago:

But what if changing test scores does not regularly correspond with changing life outcomes?  What if schools can do things to change scores without actually changing lives?  What evidence do we actually have to support the assumption that changing test scores is a reliable indicator of changing later life outcomes?

Greene concluded that tests had no real connection to student later-in-life outcomes and were therefore not a useful tool for policy direction. Again, he was saying what teachers and other education professionals had been saying since the invention of dirt, but to no avail.

In fact, if you are of a Certain Age, you may well remember the authentic assessment movement, which declared that the only way to measure any student knowledge and skill was by having the student demonstrate something as close as possible to the actual skill in question. IOW, if you want to see if the student can write an essay, have her write an essay. Authentic assessment frowned on multiple choice testing, because it involves a task that is not anything like any real skill we're trying to teach. But ed reform and the cult of testing swept the authentic assessment movement away.

Really, AEI's third paragraph of findings is weak sauce. "Policymakers should be much more humble" about test scores? No, they should be apologetic and remorseful that they ever foisted this tool on education and demanded it be attached to stern consequences, because in doing so they wrought a great deal of damage on US education. "Test scores should not automatically occupy a privileged place..."? No, test scores should automatically occupy a highly unprivileged place. They should be treated as junk unless and until someone can convincingly argue otherwise.

But I am reading into this report a wholesale rejection of the BS Test as a measure of student, teacher, or school success, and that's not really what AEI is here to do. This paper is focused on school choice programs, and it sets out to void the warranty on school choice as a policy.

Choice fans, up to and including education secretary Betsy DeVos, have pitched choice in terms of its positive effects on educational achievement. As DeVos claimed, the presence of choice will not only create choice schools that outperform public schools, but the public schools themselves will have their performance elevated. The reality, of course, is that it simply doesn't happen. The research continues to mount that vouchers, choice, charters-- none of them significantly move the needle on school achievement. And "educational achievement" and "school achievement" all really only mean one thing-- test scores.

Choice was going to guarantee higher test scores. They have had years and years to raise test scores. They have failed. If charters and choice were going to usher in an era of test score awesomeness, we'd be there by now. We aren't.

So what's a reformster to do?

Simple. Announce that test scores don't really matter. That's this report.

There are several ways to read this report, depending on your level of cynicism. Take your pick.

Hardly cynical at all. Reformsters have finally realized what education professionals have known all along-- that the BS Tests are a lousy measure of educational achievement. They, like others before them, may be late to enlightenment, but at least they got there, so let's welcome them and their newly-illuminated epiphanic light bulbs.

Kind of Cynical. Reformsters are realizing that the BS Tests are hurting the efforts to market choice, and so they are trying to shed the test as a measure of choice success because it clearly isn't working and they need to reduce the damage being done to the choice brand.

Supremely Cynical. Reformsters always knew that the BS Test was a sham and a fraud, but it was useful for a while, just as Common Core was in its day. But just as Common Core was jettisoned as a strategic argument when it was no longer useful, the BS Test will now be tossed aside like a used-up Handi Wipe. The goal of free market corporate reformsters has always been to crack open the vast funding egg of public education and make it accessible to free marketeers with their education-flavored business models. Reformsters would have said that choice clears up your complexion and gives you a free pony if they thought it would sell the market based business model of schooling, and they'll continue to say-- or stop saying-- anything as long as it helps break up public ed and makes the pieces available for corporate use.

Bottom line. Having failed to raise BS Test scores, some reformsters would now like to promote the entirely correct idea that BS Tests are terrible measures of school success, and so, hey, let's judge choice programs some other way. I would add, hey, let's judge ALL schools some other way, because BS Testing is the single most toxic legacy of modern ed reform.

Monday, March 19, 2018

OH: Computers Are Grading Essays

No sooner had I vigorously mocked the idea of using computers to grade essays than this came across my desk:

CLEVELAND, Ohio - Computers are grading your child's state tests.

No, not just all those fill-in-the-bubble multiple choice questions. The longer answers and essays too.

According to State Superintendent Paolo DeMaria and state testing official Brian Roget (because "state testing official" is now a job-- that's where we are now), about 75% of Ohio's BS Tests are being fully graded by computers.

This is a dumb idea.

"The motivation is to be as effective and efficient and accurate in grading all these things," DeMaria told the board. He said that advances in AI are making this new process more consistent and fair - along with saving time and, in the long run, money.

If you think writing can be graded effectively and efficiently and accurately by a computer, then you don't know much about assessing writing. The saving money part is the only honest part of this.

But all the kids are doing it, Mom. American Institutes for Research (AIR-- which is not a research institute at all, but a test manufacturer) is doing it in Ohio, but Pearson and McGraw-Hill and ETS are all doing it, too, so you know it's cool.

DeMaria said that the research is really "compelling," which is another word for "not actually proving anything," and he also claims that even college professors are using Artificial Intelligence to grade papers. He does not share which colleges, exactly, are harboring these titans of educational malpractice. Would be interesting to know. Meanwhile, Les Perelman at MIT has made a whole second career out of repeatedly demonstrating that these essay grading computers are incompetent boobs.

The shift from human scorers is usually a little controversial, which may be why Ohio just didn't tell anyone it was happening. It came to light only after, the article notes wryly, "irregularities" were noticed in grades. Oddly enough, that constitutes a decent blind test of the software-- folks could tell it was doing something wrong even when they didn't know that software was doing the grading.

Some Ohio board members think the shift is just fine, though one picked an unfortunate choice of example:

"As a society, we're on the cusp of self-driving vehicles and we're arguing about whether or not AI can grade a third grade test?" asked recently-appointed board member James Shephard. "I think there just needs to be some perspective here."

I feel certain that as Shephard spoke, he was unaware that a self-driving vehicle just killed a pedestrian in Arizona.

The actual hiccup that called attention to the shift from meat widget grading was a large number of third grade reading tests that came back with a score of zero. That was apparently because they quoted too much of the passage they were responding to, even though students are supposed to cite specific evidence from the text. It's the kind of thing that a live human could probably figure out, but since computer software does not actually understand what it is "reading," -- well, zeros. On a test that will determine whether or not the student can advance to fourth grade (because Ohio has that stupid rule, too).
I don't understand a word you just said, but you fail!

The state has offered some direction (30% is the tipping point for how much must be "original") so that now we have the opening shot in what is sure to be a long volley of rules entitled "How to write essays that don't displease the computer." Surely an admirable pedagogical goal for any writing program.
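To see how a rule like that produces zeros, here's a toy sketch of a 30%-originality check. The actual scoring engine is proprietary; this is my own guess at the crudest version of such a rule, with made-up function names, and it illustrates the problem: it counts overlap without ever judging whether a quotation was apt.

```python
def original_fraction(essay: str, passage: str) -> float:
    """Fraction of the essay's words that do NOT appear in the source passage.

    A crude toy model: any word that also occurs in the passage counts
    as 'quoted,' with no notion of whether the quote is used well.
    """
    passage_words = set(passage.lower().split())
    essay_words = essay.lower().split()
    if not essay_words:
        return 0.0
    original = [w for w in essay_words if w not in passage_words]
    return len(original) / len(essay_words)


def auto_score(essay: str, passage: str, threshold: float = 0.30) -> int:
    # Below the originality threshold, the essay gets a zero,
    # no matter how apt or well-chosen the quotation actually is.
    return 0 if original_fraction(essay, passage) < threshold else 1
```

Under a rule like this, the student who quotes the passage skillfully and the student who copies it mindlessly get the same zero.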

The state reported that of the thousand tests submitted for checking, only one was rescored. This fits with a standard defense of computer grading-- "When we have humans score the essays, the scores are pretty much the same as the computer's." This defense does not move me, because the humans have their hands and brains tied, strapped to the same algorithm that the computer uses. Of course a human gets the same score, if you force that human to approach the essay just as stupidly as the computer does. And computers are stupid-- they will do exactly as they're told, never understanding a single word of their instructions.

The humans-do-it-too defense of computer grading ignores another problem of this system-- audience. Perhaps on the first go round you'll get authentic writing that's an actual measure of something real. But what we already know from stupid human scoring of BS Tests is that teachers and students will adapt their writing to fit the algorithm. Blathering on and on redundantly and repetitiously may be bad writing any other time, but when it comes to tests, filling up the page pleases the algorithm. The algorithm also likes big words, so use those (it does not matter if you use them correctly or not). These may seem like dumb examples, but my own school has had success gaming the system with these rules and rules like them.

And this is worse. I've heard legitimate arguments from teachers who say the computer's ability to sift through superficial details can be one part of a larger, meat-widget based evaluation system, and I can almost buy that, but that's not what Ohio is doing-- they are handing the whole evaluation over to the software.

What do you suppose will happen when students realize that the computer will not care if they illustrate a point by referring to John F. Kennedy's noble actions to save the Kaiser during the Civil War? What do you suppose will happen when students realize that they are literally writing for no human audience at all? How will they write for an algorithm that can only analyze the most superficial aspects of their writing, with no concern or even ability to understand what they are actually saying?

This is like preparing a school band to perform and then having them play for an empty auditorium. It's like having an artist do her best painting and then hanging it in a closet. Even worse, actually-- this is like having those endeavors judged on how shiny they are, still unseen and unheard by human eyes and ears.

Ohio was offered a choice between doing something cheap and doing something right, and they went with cheap. This is not okay. Shame on you, Ohio.

What's the Teacher Role in a Tech Classroom?

This story is a few months old, but still worth a look.

Back in January, Hechinger ran a report about a panel discussion at the NY Edtech Week global innovation festival held a month earlier. It's a reminder once again of how divorced ed tech can be from actual education in actual schools. But writer Tara Garcia Mathewson is still pretty excited:

Computers, laptops and other digital devices have become commonplace in most schools nationwide, changing the way students get instruction and complete assignments. Computers have also digitized student records and taken a whole host of school processes to the cloud. This has created new risks and led to the founding of new departments focused on the safety and security of all this data. It has also created new efficiencies for schools.

Well, that observation about keeping all this data safe is certainly timely, but the argument about efficiencies seems as timeless as a Shakespeare play.

Phil Dunn, the IT guy for Greenwich Public Schools, says, basically, that newer IT makes his job as the IT guy easier. That is... unsurprising?

Mathewson also deploys a construction that I scold my students for frequently:

New technologies are coming out all the time. Some make life better and easier for the people who use them. Some make life different, but not necessarily better. And there are definitely the technologies — designed for the classroom and elsewhere — that make life, or learning, worse.

So some tech makes things better, some makes it worse, and some makes it different? That just about covers all the possibilities, right?

But it takes a guy whose job is pushing ed tech to really really demonstrate just how clueless some edtech people are about the ed part. Chris Rush is a co-founder and chief program officer of New Classrooms, one more "non-profit" group that is pushing product like crazy. They are all in on "personalized learning" ("Teach To One" math is their product) and adaptive software, and you'll be unsurprised to discover they are supported to the tune of over a million dollars each by Bezos, Gates, Dell, Chan-Zuckerberg and something called New Profit, Inc, a "national nonprofit venture philanthropy fund (and fans of Pay For Success, aka Social Impact Bonds)." Here in part is what New Classrooms has to say about their approach:

Our performance-based tasks don’t fit neatly into any single pedagogical practice. Because students work for an extended period of time on real-world challenges, there are some shades of project-based learning.

A key difference is that they are closely connected to specific skills and exit slips that are part of each student’s personalized curriculum, making it less open-ended than traditional project-based learning. In either case, the goal is the same: for students to acquire deeper knowledge.

The teacher’s role in this kind of learning experience is multifaceted, using a combination of techniques: planning, direct instruction, facilitating, challenging, and cheerleading.

So, teaching, only with computer-aligned educational jargon attached. Does New Classrooms know what it's doing? Well, back at the panel discussion, Rush had a few things to say:

Teachers spend a significant amount of time scoring papers rather than spending time with students

Wait! What? Does Rush imagine that in a traditional classroom, teachers say, "Okay, you students just do some stuff, but I'm going to be sitting at my desk grading things." Seriously? Because my wife, the fourth grade teacher who most days doesn't even have enough non-student-interaction time to allow her to pee, would disagree.

Let me be clear. Teachers do not spend time scoring papers instead of spending time with students. They spend time scoring papers instead of eating or peeing or interacting with their own children at home or instead of sleeping.

Also, leaving notes, explanations, thoughts, responses, and reactions written out on a piece of student work is, in fact, a form of interacting with students.

Automating not only multiple-choice test scoring but the grading of essays and project work would give teachers more time to focus on the student interaction that they’re uniquely capable of.

Automating multiple-choice test scoring is fine but A) good teachers know that multiple-choice tests are the lowest form of assessment and B) they take very little time to score anyway, which is why some teachers use them even when we know better.

Also, and I want to make sure I'm really clear about this--

Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing. Computers are not capable of assessing writing.

I refer you to the work of Les Perelman for more specifics (here and here and here for starters). But to sum up my point-- computers are not capable of assessing writing.

Up next...

Jonathan Supovitz, director of the Consortium for Policy Research in Education at the University of Pennsylvania, talked about school improvement. Using a sports analogy, he said coaches don’t just look at the game summaries to consider how their players did. They look at videos of each play. Data systems in schools, though, skip straight to the summaries, Supovitz said. The play-by-play is missing.

Supovitz calls that missing data the "next frontier." I call it "what teachers already do."

But when the issue of what teachers will do comes up, the panel has more bosh to shovel. Rather than sidelining teachers, some panel members say that "teacher skills will just need to change." This is, indeed, the oldest ed tech pitch in the book.

Ed tech: We have invented a great new glass hammer for you to buy and use to build birdbaths.

Teacher: We are building great, solid houses for humans with power screwdrivers and wood screws.

Ed tech: Well, once you change your whole methodology, purpose and program, this hammer will be really useful.

What needs to change this time? Supovitz says "there will be a demand for teachers who are more sophisticated about looking at and responding to student performance data."

No problem, because that's what teachers do all day, every day. Except that by "more sophisticated" what he means is "Our system is not designed to give you the data you want and need, but to give you the data we decided to give you, so you're going to have to learn how to dig the data you actually need out of our reports." Gosh, thanks for all your help. I'm sure the company will also sell the professional development needed to "support this additional responsibility."

Put another way, ed tech sees a role for teachers, and that role is not so much "instructional leader" as "meat widget responsible for bridging the gap between what the company has figured out how to do and what the students actually need." Ed Tech companies will provide all the glass hammers, and teachers can figure out how to use glass hammers and wood screws to build a solid house.

Sunday, March 18, 2018

ICYMI: St. Patrick's Day After Edition (3/18)

Here's a few choice tidbits for the week. Read and share!

School Choice Is a Lie That Harms Us All

From HuffPost. Zero punches pulled here.

Many Democrats Would Agree with Ideas in DeVos Clip

While everyone was hammering the awful 60 Minutes clips, Slate pointed out that many DeVos policy ideas have been Dem Party faves for years.

Betsy DeVos Visited an Underperforming School

This is a great catch. When DeVos said she never intentionally visited an underperforming school, she wasn't being obtuse-- just precise. She did visit a failing school-- but not on purpose. It was supposed to be an example of charter excellence.

Worst Government Possible on Purpose

In which even the mainstream Rolling Stone can see that DeVos is a disaster.

What DeVos Needs To Hear

A venture capitalist traveled to 200 schools to learn something. What he learned is that much reformster rhetoric is baloney.

The Truth about Charter Schools

A former charter teacher talks about how awful it was.

When the Charter Lobby Wants Your Turf

From Chicago-- what it looks like when charter boosters want a piece of your action.

Facts About New Jersey Charters, Part II

Mark Weber continues to excerpt his report with Julia Sass Rubin, looking this time at just how many students with special needs NJ charters really teach.

Lessons from the West Virginia Teachers Strike

The Have You Heard podcast landed a mountain of WV teacher interviews. You will not find a better picture of what happened.

Why Public Schools?

Jeff Bryant looks at why public schools seem to be the origin of so much rebellion these days.