Friday, April 22, 2016

Duncan Finds a New Platform

Well, you knew it was going to happen. Somewhere, some thinky tank was going to bid on the right to hang Arne Duncan's shingle on their porch and give him the chance to continue his misguided, ill-informed gumflappery. Yes, the Emerson Collective has already hired him to fill a seat, but who would let him keep talking?

The winner? Ha! Trick question-- there are no winners in this transaction. But the thinky tank that will be providing a megaphone for the former Secretary of Education is the Brookings Institution.

The Duncanator will serve as a nonresident senior fellow in the Governance Studies program's Brown Center on Education Policy.

It is, on reflection, a match made in heaven. Brookings, a right-tilted thinky tank that leans heavily on the wisdom of economists, has long been a reliable purveyor of education policy nonsense. They've done "research" showing that poor kids really do suck. They have won my award for "Most Clueless Commentary on Common Core" as well as mis-predicting its future. They have cobbled together weak-sauce arguments for annual Big Standardized Tests. And they have scolded the poor for continuing to fornicate.

In short, they have been consistently wrong when it comes to issues of education policy, which makes Arne Duncan a perfect fit.

Duncan will blog (no word on whether or not Brookings is providing him with an intern) and "participate in public events on relevant issues." His first gig? A forum on charter schools. Great.

"The Brown Center is proud to welcome Arne Duncan, who has demonstrated passionate leadership on education and youth development issues throughout his life and career,” said Darrell West, Vice President of Governance Studies at Brookings.  “The research and activities of the Brown Center will benefit greatly from his decades of experience shaping and implementing education policy, not just at the federal but at the state and local levels as well. His perspective will help the Brown Center generate fresh ideas and new approaches to the challenges facing American schools and communities.”

If the Brown Center is looking for someone to talk about policies that failed, or the insider mechanics of pissing off Congress so badly that they commit the unprecedented act of rolling back the powers of your department, then Duncan is the man. Big win. Heckuva job, Brookings.

So this is good news for Duncan, who gets to cash in some more on his years of promoting failed policies. But it's a lose for Brookings, which as usual is kind of oblivious and doesn't seem to know that Duncan has few fans on the left or the right (AEI's Frederick Hess tweeted "Swell...another platform for him to offer up self-righteous nastiness. I wonder whose motives he'll question first."). I suppose it's a win for snarky bloggers, who will now have more material to mine on slow days. But it's a lose for everyone who has to continue to be exposed to Duncan's misguided and ill-informed thoughts about education.


MI: Let's Test Kids Into Oblivion

[Update: check the comments or this link for a bit more nuance and background from someone who lives there]

Congratulations, Michigan-- your state superintendent is nuts.

Brian Whiston was in front of state legislators last week to lay out his "vision" for education, and it's genius-- test the little buggers, all of them, into oblivion.

Where did Michigan find State Superintendent Whiston? Well, he was previously head of Dearborn Public Schools. He was a school board member for many years. And apparently he did some student teaching once. Oh, and he's won two awards-- he was Superintendent of the Year in 2014, and in 2007, he was Lobbyist of the Year. Because for part of his career he was a lobbyist for the Oakland school district (during which time he "learned some life lessons" about excessive expenses).

He did an interview with the Detroit Free Press back when he was elevated to the state level last summer, and in that he lays out some of his thoughts about education. These include ideas like model classrooms where the teacher is awesome and all other teachers can be trotted through and told, "See? Do it like this!" Let's imagine the teacher who replies, "Sure. Can I have this batch of students, too?" And he wants you to know that in Dearborn he was firing teachers all over the place, so totally working on that improvement of staff thing.

But his biggest plan of all is Top 10 in 10, Whiston's initiative to put Michigan among the top ten education states within ten years. That would be an impressive achievement, considering how far in the basement Michigan is on indicators like childhood literacy. That big strategic plan focuses on these goals:
  • construct a solid and sustainable P-20 system to educate all children for success;
  • meet and support the learning needs of ALL children;
  • meet and support the professional needs of ALL educators;
  • design systems to overcome the disparities experienced by children and schools;
  • empower parents and families to actively participate in their child’s education;
  • partner with employers to develop a strong, educated, and highly-skilled workforce; and
  • lead and lift Michigan education through greater service from Lansing.
That's an agreeably vague set of educational goals. But looking into the strategic details, we find a giant bureaucratic word salad laced with all the usual reformy suspects-- aligning with college and career ready goals, implementing with fidelity, deeper learning competencies, promoting teaching by "celebrating" educators, super-duper PD, providing choice for families, coordinating with employers to better produce worker drones for them, and also, most hilariously, "accelerate student achievement by adjusting the structure of the department," because if there's anything that has an influence on student achievement, it's how the state bureaucracy is organized. It is a reformsterific plan, and it deserves to have some abuse heaped upon its head, but I'll wait for another day.

Because there's one other thing that Whiston feels is super-important, and he stated that clearly to the Free Press in that big interview. Talking about what he'd do right out of the gate, Whiston mentioned calling a bunch of thinky tanks together to advise him (not, of course, teachers-- who the hell needs to talk to teachers about education), and also this:

Testing is obviously something I'm going to start day one trying to work towards.

Yes, obviously, Big Standardized Tests are necessary. Which brings us to his chat with legislators Wednesday. 

Because what Michigan's students need rather than, say, an actual investment of resources in their schools or the removal of the charter school boot from their financial necks or a reality-based attempt to recruit and retain teachers-- what Michigan students need more than all that is more testing.

Mind you, the Michigan Student Test of Educational Progress  (M-STEP) is only on Year 2. Also, it's expensive, time-consuming, and roundly criticized for being one more crappy Big Standardized Test. A state House committee voted to cut its funding. But when a BS Test is failing, the only thing to do is test more harder.

Whiston proposes to administer the test twice a year (or maybe even more) to "get a better sense of academic progress, and inform class instruction," says the man who has never been a classroom teacher. And instead of starting in third grade, Whiston believes that "age-appropriate" testing should start in kindergarten.

You know what kind of standardized testing is appropriate in kindergarten? None. None standardized testing is appropriate in kindergarten.

So condolences to you, Michigan. A child-poisoning governor, an entrenched system of replacing democracy with emergency managers-- oh, excuse me-- with CEOs, and a state education superintendent with no classroom experience and a BS Test fetish.

Thursday, April 21, 2016

PA: Funding Follies (Part 15,263)

So, you may recall from last time, the elected capital clown car that is Pennsylvania's state government had sort of passed a budget that included an education spending increase, but had not passed rules on how to spend that extra money. Governor Tom Wolf whipped up his own plan for how to divvy up the money, only his plan didn't so much "divvy it up" as it "dumped most of it on a handful of select school districts" and also technically "ignored the elected legislature and their lawmaking powers." This made it unpopular with very many people. Very many.


Wolf's theory was that because some districts were particularly deep in a financial hole (thanks to the last two administrations, though Wolf prefers to blame it on just the last one), we needed some restorative budgeting. In other words, if school funding is a race, Wolf wanted everyone else to just kind of sit on the curb and wait while a few people in the back of the pack catch a ride and join up.

The problem-- well, one of the problems-- as some folks tried to tell Wolf in a meeting or two, is that way more school districts are feeling Big Time Hurt than just those who made the Wolf Special Care List (a list which, frankly, looks more like a list of districts that have been pulling in notable bad press-- Philly, Chester Upland, Wilkinsburg-- than a carefully researched collection).

On top of that, as I previously warned/noted/predicted, Pennsylvania is just chock full of people who hate hate HATE having tax dollars yanked out of their pockets and sent off to Philly or other Big Cities. We can argue all day about justice and fairness and intra-state financial support, but the bottom line is that the issue is a guaranteed political turd bomb in Pennsylvania.

And so the House and Senate put together a spending bill of their own, passed it with a veto-proof margin, and sent it off to the governor. As with the budget, he can sign it or just let it become law while he sits in the corner and makes a pouty face.

This is good news for every school district that wasn't on Wolf's list (which is most of them) as they'll see more money-- maybe even enough to help offset the effects of all the borrowing, cutting and finagling that districts had to do to weather the nine-month Harrisburg budget storm.

The only good news for the state is that, miraculously, the nation's most expensive legislators actually managed to work across party lines and accomplish something. The bad news is that now that last year's budget has taken ten months to fully settle, we are already behind on the next budget-- and there isn't the slightest sign that anybody in Harrisburg has learned a thing from this mess that might help with the next mess. Standard and Poor's thinks so too-- even as the long-overdue budget was limping across the finish line, S&P was threatening to downgrade PA's rating even further, rather than lifting it.

Meanwhile, Wolf has managed to make himself almost completely irrelevant to the budgeting process, and Pennsylvania's school funding system, which is fundamentally messed up, remains unaddressed. You can say we're moving forward, if circling the drain is a forward-ish sort of motion. 

Big Brother in a Box

Are you excited about the prospect of computer-centered competency-based education? Are you an administrator whose fondest dream is to sit in your office, managing every aspect of your school by way of a big shiny bank of computer screens? Well, here's just one example of the many companies eager to make a buck helping you achieve your vision. Meet Schoolrunner.org.

Schoolrunner promises, well, everything. Time for teachers. Administrator bliss. Power of parents. Student success. Those are all their headlines, not mine. And as we break it down more, the picture becomes at once more vivid and more terrible.

Evidence based academics. Because academics are now based on, I don't know-- tea leaves and palm readings? But Schoolrunner promises "Don't just view results, elicit actionable insight from your academic data." Because we all love to elicit actionable insight.

Track student behavior. We will "log, view and communicate behavioral performance." Simplify attendance. It is possible I'm doing attendance wrong, because I thought it was pretty simple already. Empower your students. Apparently by letting them look at some of their own data files.  But wait-- there's more.

Easy-to-consume data. Consume by whom, one wonders, but Schoolrunner promises to "make molehills out of mountains" which doesn't even-- I mean, what does that even mean? Reduce large amounts of data to small meaningless blips?

One system to do it all. One system to find them. One system to bring them all and in the darkness bind them. Put all your data eggs in our special cyber basket!

Configure your goals. Figure out the purpose of everything and lock it into the Big Brother Box.

Above and Beyond School Management. "More than just a management system" is what you have to keep saying to sell this multi-limbed management octopus. Don't try to sell it by declaring, "Now we will control everything." Definitely don't follow with a maniacal laugh. Instead, keep insisting that if you can have centralized monitoring and control of everything everyone does in the district, you will "create the highest level of achievement for your students." Always remember that system domination is For The Children.

If you want to look at a more fleshed-out pitch for this sort of uber-management, Schoolrunner has a lovely "white paper" entitled "Five Ways SMART SCHOOLS Are Using Data To Drive Performance." (I don't know why they yell "SMART SCHOOLS"-- perhaps they're just very excited).

So what are these five golden rings of data enabled awesomeness?

1) Transparency.

The opening example is uncompelling. Apparently, if you keep actual records of student behavior problems, when a parent calls, you can use those specifics to talk to the parent. Also, if you serve food in the cafeteria, students are more likely to find it at lunch time.

They go on to argue that with data transparency, students can tell how they're doing, families receive an "in-depth look into their child's education," teachers can "immediately discern trouble areas for students," and administrators can-- well, let me hold onto that one for a second. Students can use the data, for sure. Parents in some families (you know-- the ones where parents and children don't communicate much) will benefit. The teacher who needs this should not be a teacher. If the answer to "How is Chris doing in class?" is "I won't know until I check the data read-out," I have my doubts about how much the data read-out will really help you.

Administration? Well, administrators "can see the performance of both their students and their staff in real time." Emphasis mine. This suggests that this system means to keep the teachers chained to their computers at all times, so that administrators can see what the teacher is up to. This seems like twelve kinds of a bad idea, showing little trust and reducing teachers to mindless widgets. Mindless widgets make lousy teachers no matter how great a system they're chained to.

2) Culture. 

Numbers don’t create culture. If numbers created culture, salons would be run by math books. People create culture. Understanding how and why people make decisions improves the relationship within your school’s community.

And then they explain how you use the numbers to see if you made the right culture choices or not. So numbers don't create culture, but they must be used to measure and justify it. Baloney.

3) Efficiency

Everyone is familiar with the concept of doing more with less.

Yikes. From that inauspicious opening, they move on to explain that having a super-duper data system frees up teachers from having to spend all their time massaging data. One school used centralized data and that led to a "holistic view of their students at a global level." So, wow. Also, in the end they learned that they could actually do more with less. So I think maybe they meant to say "productivity" instead of "efficiency," which is just as well, because efficiency is actually the enemy of excellence. The most efficient system is one that manages to hit the high side of mediocrity and the low side of cost. This is not a great target for a public school system.

4) Access

If you have data in a computer system, people can see it. That seems to be the point here. Illustrations include a school nurse who can look up a policy on vomiting students or can see that a student turns up sick every day at the same time. Because without computers, nobody would ever know these things?

Being able to get "the information you want, when you need it" is a pretty good selling point, but I'm not sure we need Big Brother in a Box to do that.

5) Action

There is no need to rely on gut feeling, intuition, or spidey-sense when you know exactly where your strengths are and how you can leverage those strengths to address the pain-points that have crept into your school. Data generates actionable intelligence.

I'd be more inclined to say that there is no need to rely on some number-crunching data-shoveling program that may or may not have been written by someone who knows what they're doing if instead you can use the sense nature gave you and the ability to pay attention to the carbon-based life forms around you.

"Gut feeling, intuition and spidey-sense" are just dismissive ways to refer to experience, intelligence, sensitivity, emotional intelligence, alertness, and awareness. Can you always use a different perspective and another set of eyes? Absolutely. But if your "gut" is so lousy that you think a computer program would be better, then 1) you should be in another line of work and 2) your "gut" is also not smart enough to make good use of whatever the computer program tells you.

Never trust any system, ever, that sets a goal of removing human judgment from the business of dealing with humans. First, the "removal" is a lie-- any such system merely substitutes the judgment of the system creators for the judgment of the humans on the ground. Second, you can never actually remove human judgment from situations that run on human judgment, so your real question is how to get the best human judgment in play.

Spoiler alert-- the best way is not to try to create a system that makes all educational and behavioral decisions for the classroom teacher while putting the driver's seat in some office where the school's CEO can sit and manage everything on a big bank of computer screens.

Who already uses this?

Schoolrunner proudly announces that they are "driving student success at the nation's most progressive schools." You may first want to ask exactly how one "drives" student success, and why one would describe the process in a way that seems to reduce the actual student to an inanimate object. But after that, of course, you'll ask, "And which are the nation's most progressive schools, pray tell?"

Well, the listed winners are the Milwaukee Collegiate Academy, Choice Foundation, Achievement School District, KIPP: Houston, and Crescent City Schools. Crescent City and Choice Foundation are both New Orleans charters (Crescent City is actually partnered with RelayGSE, so you know they are super-reformy).

Exactly what about these charters is progressive will remain a mystery for now, but it's easy to see why a system like Schoolrunner would appeal to a charter operator. You don't need highly trained, experienced or skilled teachers at all-- just unpack Big Brother in a Box, sit down at your desk, pull up your dashboard, and you are running a whole school!

This is competency-based, computer-controlled schooling at its worst. Dehumanizing, one-size-fits-all, sterile, and yet one more version of school that you will never find the wealthy submitting their own children to.

Wednesday, April 20, 2016

Comparable Measures

So apparently I'm writing a series about teacher evaluation this week. This will stand on its own, but if you want more context, you can work your way backwards starting here.

One of the holy grails of ed reform is comparability. The aim is a score or grade or rating that allows us to say definitively that Hypothetical High School is a better school than Imaginary Academy, that Pat O'Furniture teaching third grade in Iowa is a better teacher than Teachy McTeacherson teaching tenth grade Spanish in Maine.

But we're also looking for evaluations that provide useful information, and there's one of the major problems in the evaluation world these days.

The more comparable a measure is, the less useful it is.

Comparable measures must be reductive. In order to compare the elementary teacher in Iowa and the language teacher in Maine, we have to reduce the measure to elements that both teachers possess. This means that the measure must be simple, and it must ignore most of what makes each teacher unique.

This evaluation problem is mirrored by the challenges of student assessment in a classroom. For an example, let me talk about grading writing assignments. I do a multitude of assignment types in my classroom, but for our purposes, let's focus on one particular type.

Many of my students' essays are scored with a modified six traits writing rubric. The rubric breaks writing down into six different qualities; additionally, I use a modified rubric that breaks each of the six into two or three sub-categories, for a grand total of fifteen specific characteristics of the writing. Those sub-scores provide a slightly richer assortment of data for the students and for me about where their strengths and weaknesses lie on a particular assignment. But I can't really compare that batch of fifteen scores easily. If I want to compare and rank the "best" writers, I need to combine the scores into raw totals. But those raw totals, while easy to compare, provide little useful information. I can say that Pat "ranks" one point higher than Chris, but that doesn't help either of them improve their writing, and the raw comparison doesn't show that while Pat has a strong voice but lousy technical control, Chris is a good technician but cold and boring.
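The raw-total problem is easy to see with a toy example (the names, trait labels, and numbers below are invented for illustration, not pulled from any real rubric data):

```python
# Hypothetical sub-scores (1-5 scale) on six invented trait labels
# for two imaginary writers.
pat = {"ideas": 4, "organization": 3, "voice": 5,
       "word_choice": 4, "sentence_fluency": 3, "conventions": 2}
chris = {"ideas": 3, "organization": 4, "voice": 2,
         "word_choice": 3, "sentence_fluency": 4, "conventions": 5}

def raw_total(scores):
    # Collapsing a profile into one number makes it easy to compare and rank...
    return sum(scores.values())

# ...but the comparison erases everything a writer could act on: both "tie"
# at 21, even though Pat's strength (voice) is Chris's weakness, and vice versa.
print(raw_total(pat), raw_total(chris))          # 21 21
print(pat["voice"], chris["voice"])              # 5 2
print(pat["conventions"], chris["conventions"])  # 2 5
```

The identical totals are perfectly comparable and perfectly uninformative; only the sub-score profile tells either student what to work on.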

And the most useful feedback and evaluation for both is actually a one-on-one conference with me (hard to squeeze in, but now and then I manage) which involves discussion and give and take and reflection and plans for future approaches. These are exceptionally useful, and completely non-comparable (unless, of course, we apply some reductive tool that "helps" me turn the conference into a score, but then we've lost everything that was useful to the student about the conference.)

But wait, you may say. Doesn't that mean that our traditional grades are also reductive and pretty unuseful to the students? And I will say, yes, you are correct, but let's save that (more radical) discussion for another day.

Comparable measures can be useful, and do have their place when they are used in ways that acknowledge how narrow they are. Need to know which student is tallest or most consistently shows up to class on time? We can do that.

But complex human behaviors can't be reduced to comparison-ready measures without losing most of what matters in the translation. Not only are we talking about a complicated array of many different qualities, but those qualities themselves can cut in both positive and negative ways. It is one of the oldest observations about human character-- a person's greatest strength and most terrible weakness can be both sides of a single coin. I am a pretty solid and dependable guy; I am also pretty dull and unexciting. Two sides of the same coin. If the measurement system only weighs the coins without considering how they turn, we've missed important information.

Teachers teach different students. They teach different material. They teach it in different ways. They bring different strengths and weaknesses to the classroom, and those in turn may be weaknesses or strengths depending on what is in that classroom. We can't evaluate a teacher in isolation from all other factors any more than we can decide whether or not a man is a good husband if he's not in any sort of relationship.

If our goal is to do teacher evaluations that are helpful and useful, that help teachers develop and strengthen and grow their teaching skills, tools, and talents, then we must recognize that any such instrument will not yield easily comparable results. My question to reformsters is simple-- would you rather help Teachy McTeacherson do the very best teaching she can, or do you want to be able to compare her to Pat O'Furniture? Which do you think will best serve the needs of the child? Because you can't do both at once. It's possible (though I have to mull some more) that you can't do both at all. A yardstick can measure consistently, clearly and accurately-- but only in one dimension, and teaching never happens in just one direction.


NPE: Teacher Voices on Teacher Evaluation

The Network for Public Education was founded in 2013 by Diane Ravitch and Anthony Cody as an advocacy group for...well, public education. It has become a powerful networking connection for those of us who are public education advocates, and while it has been vocal in speaking out against education reform balderdash, NPE also has a full positive agenda of things that they support.

They have also produced some reports (including a state-by-state report card) and a new report released just last week. "Educators on the Impact of Teacher Evaluation" is a rarity in the world of reports on the world of education in that it involves the voices of actual classroom teachers. The very first paragraph puts the whole business of teacher evaluation in context with the current state of education:

Teachers choose the teaching profession because of their love of children and their desire to help them grow and blossom as learners. Across the nation, however, far too many educators are leaving the classroom. Headlines report teacher shortages in nearly every state. One factor reported in almost every story is the discouragement teachers feel from a reform movement that is increasing pressure to raise student test scores, while reducing support. This pressure dramatically increased with the inclusion of student test scores in teacher evaluations, with some states using them to account for as much as 50% of evaluation scores. When combined with frameworks, rubrics, and high-stake consequences, the nature of teacher evaluation has dramatically changed, and narratives from educators across the United States document that it has changed for the worse.


NPE commissioned a study, and the researchers they hired eventually received responses from almost 3,000 teachers. Here are some of the findings of the research:

* Nobody much likes VAM or rubric-based data-generators like those based on the work of Danielson and Marzano.

* A whopping 84% of teachers report spending more time on evaluation, bringing teachers closer to those Dilbert-esque office workers who have to stop working on projects in order to create reports to explain why they aren't making more progress on the project.

* Being data driven translates to spending more time with spreadsheets and numbers than with colleagues and humans.

* Over half the respondents reported seeing active bias against veteran teachers. This surprised me, and I guess it shouldn't have, since it makes sense that in the current tight-budget environment, an experienced teacher is an expensive teacher. On top of that, veteran teachers are also more likely to call baloney when they see the next reformy lunch platter headed in.

* New teacher eval systems have been particularly hard on non-white teachers, which would be bad news in the best of times, but even worse news these days when the lack of teachers of color is a serious problem in the US school system.

* Professional development is making things worse. Not a surprise, particularly in states like mine where the rule is that it only counts as a required PD hour if it has something directly to do with raising test scores.

The report makes six recommendations.

1) Stop using student test scores for teacher evaluation. Absolutely.

2) Top-down collaboration is an oxymoron. Don't tie mandated and micromanaged teacher collaboration to evaluation.

3) The observation process should focus on reflection and dialogue as tools for improvement. One of my favorite lines in the report-- The result should be a narrative, not a number.

4) Less paperwork. This is not just a teacher problem. My administrators essentially have to stop doing all their other work for several weeks out of the year just to get their evaluation and observation paperwork done. Forms and forms and forms and forms for me, and ten times that many for them. Again-- do you want us to do our job, or do a bunch of paperwork about what we would be doing for our job if we weren't busy with the paperwork?

5) Take a good hard look at how evaluation systems are affecting veteran teachers and teachers of color.

6) Burn down the entire professional development system. Okay, that's my recommendation. NPE is more restrained-- decouple PD from the evaluation system and attach it to things that actually help teachers do their jobs.

That's the basic outline. There are more details and there are, most of all, actual quotes from actual teachers. I have read so many "reports" and "white papers" and "policy briefs" covering many aspects of education policy over the last few years, and the appearance of a teacher voice is rarer than Donald Trump having a good hair day and displaying humility at the same time. That alone makes this report valuable and useful. I recommend you read the whole thing.

Tuesday, April 19, 2016

Holding Accountable

This is turning into one of those conversations that wanders around the internet. You'll be able to read this post as a stand-alone, but if you want some context-- start with Part I here, follow that with the Charlotte Danielson post here, and then read the Peter Cunningham post here.

Apparently it's a zeitgeist thing, because the same day I posted my reflection on teacher evaluation, Charlotte Danielson was also taking a look at the issue.

Danielson misses some points here and there (for instance, referencing "the Widget Effect," a pseudo report from TNTP that enjoys a life far beyond the value of its content), but she gets one point absolutely correct:

There is also little consensus on how the profession should define "good teaching."

And then there's the money quote that launched a thousand tweets:


I'm deeply troubled by the transformation of teaching from a complex profession requiring nuanced judgment to the performance of certain behaviors that can be ticked off on a checklist. In fact, I (and many others in the academic and policy communities) believe it's time for a major rethinking of how we structure teacher evaluation to ensure that teachers, as professionals, can benefit from numerous opportunities to continually refine their craft.

It's a sentiment that has been expressed by tens of thousands of teachers, but when it comes from the woman whose brand is on many an evaluation form, it gets attention. However, I'm a little less excited about her following paragraph.

Simultaneously, it's essential to acknowledge the fundamental policy imperative: Schools must be able to ensure good teaching. Public schools are, after all, public institutions, operating with public funds. The public has a right to expect good teaching. Every superintendent, or state commissioner, must be able to say, with confidence: "Everyone who teaches here is good. Here's how we know: We have a system."

The public has a right to expect good teaching? Absolutely. We can prove the quality of the teaching with a system? I'm dubious. I am not a systems guy-- systems have huge value as long as we recognize their limits, but the minute we start imagining that a system can somehow do better than human judgment, I think we're in trouble. All systems are ways to impose the judgment of a few humans on the behavior of a large number of other humans, and therefore systems have the effect of moving judgment and decision-making further away from the place where the actual rubber meets the real road.

Danielson tosses out a number-- 6% of teachers are in need of remediation. That seems high to me, but it also seems made up, so we'll let that slide, because part of her point is that a personnel system should be built around the vast majority of non-sucky teachers. To her, that means focusing on "professional learning" rather than ratings.

She offers four truths about professional learning that need to be folded in. 1) It requires active intellectual engagement. 2) It can only occur in an atmosphere of trust. 3) It requires both challenge and support, as part of a career-long process that is never "finished"-- to which I say yes, yes, yes. I've said it a zillion times-- every good and great teacher I know can give you a list of things they still need to work on. 4) Policy makers must acknowledge that top-down, butts-in-seats, assigned-reading traditional PD doesn't do jack.

And she has some preliminary thoughts on what a personnel policy might look like:

* must identify underperforming teachers and promote professional learning

* should include a step from probationary to "continuing status"

* should be differentiated according to employment status, with different rules for novice teachers and different roles for experienced ones

* experienced teachers should still be evaluated now and then

Peter Cunningham (top dawg at Education Post) connected some dots between my piece and Danielson's. He reduced my post to a list of possible purposes for evaluation:
  1. To find bad teachers.
  2. To find good teachers.
  3. To guide and support teachers.
  4. To compare teachers.
  5. To let the taxpayers know whether or not they’re getting their money’s worth.
  6. To give teachers a clear set of expectations.
  7. To make the complex look simple.
His read is that Danielson is focusing on #1, #3 and #6, with a possible nod to #2. She has also addressed my side point, which is that hiring practices are often the culprit. Danielson puts evaluation squarely between hiring and tenure.

Into the mix, Cunningham throws Chicago writer Mike Dumke, who asks the next logical question: if teachers don't want to be accountable for test scores, what do they want to be accountable for?

My first response is to suggest that we reframe the question-- instead of asking for the best ways for teachers to prove they're doing their job well, we would do better to look for the best way to find out whether the teacher is doing a good job. It may seem like nit-picking, but to me, it's the difference between a boss who judges you by checking to see what you're doing and a boss who makes you stop doing your job so that you can go attend a meeting on doing your job.

Not all purposes on my list are created equal. #4 is useless and a waste of our time. #7 is destructive and also best avoided.

#3 is more important than #1 and #2, both of which assume that goodness and badness are static qualities in a teacher. They aren't. Some days I teach really well. Some days I teach okay. Some days I don't do well at all. Additionally, I teach some students far better than I teach others-- not because of will or skill or desire, but for the same reason I am a great partner for one woman and a lousy partner for some others. Teaching is a human relationship, and beyond good/bad or right/wrong, we have the combination of two specific humans.

In other words, instead of trying to find good (or bad) teachers, we should be trying to help teachers teach well.

Are there teachers who fall so far on one end or the other that we can go ahead and slap a good or bad label on them? Sure (I have had both as student teachers). But most of us fall in the middle, always striving and working to do our job a little better.

So what should we look for? How would I answer my version of Dumke's question?

I've tried to answer this before, and someday I'll work out all the details and launch my million-dollar consulting business. But here's the basic process that I envision:

1) Bring together, in person or by technomeans, a huge number of various stakeholders from the community-- parents, grandparents, employers, graduates, elected officials, business leaders, students, teachers themselves. Give them a hefty, robust list of teacher qualities, skills and behaviors. Have them determine which ones to include and how much to weight them.

2) Now you have generated both a job description and an evaluation form (and, if you're lucky and your local leadership is good, you have also generated some good, thoughtful discussion about what folks want from their schools and their teachers).

3) Just as you have generated a custom evaluation format for your schools and teachers, you will need a custom method for evaluating the items that you have selected. Some may be, by their nature, easy to measure (e.g., if your community wants to include "students get good test scores," that one will be easy to cover). Many, by their nature, will be a matter of opinion and observation. If, for instance, something like "maintains a safe and supportive classroom environment" is chosen, you aren't going to be able to administer a testing instrument that generates a Classroom Environment Rating number.

However, one of my premises is that most of the people in the school community already know the things you want to find out. Students know who the nurturing teachers are and who the hard-but-fair teachers are and who the mean-and-scary-for-no-good-reason teachers are. Former students know, and they know what turned out to be useful in the long run. Parents know. Other teachers know. Most of what you want to know is already known-- what is needed is an instrument that taps into that knowledge.

Cunningham suggests that a student rating system would be susceptible to teachers essentially sucking up for a good review-- but I believe that if you ask students to rate specifics, you'll get far better results. In other words, "Is your teacher good?" is a question that will yield lousy answers. But "Does your teacher help you understand hard things?" will get a useful answer from a student even if that student hates that teacher.

4) Do not, no matter how great the temptation, hire outside consultants to do the work. Maybe to manage the structure and data involved, and somebody will need to come up with a master list/menu. Perhaps that's how I'll finance my eventual retirement. But do not let non-stakeholders try to tell locals what they are supposed to want from their schools. The state will need to look over local shoulders to see whether the work is being done well, without tilting the table, and with broad community involvement.

And do not rush. I am guessing it will take at least a year-- at least-- to get the first go-round ready. And then you'll have to review it periodically, as you find gaps or issues, or as time goes by and your local stakeholder population changes.

Some folks will not like this system because it absolutely fails in terms of making schools and teachers comparable across lines, and it lends itself more to a descriptive, qualitative rating than a simple one-dimensional stack-and-rank. But it could generate real data of actual use to the teachers and district themselves. As a bonus, it could also elevate and involve the voices of the community, giving them a real feeling of involvement and ownership in their district.