TNTP has the same root problem with teacher evaluation as TFA-- they love testing, they love Value-Added, and they already think they know who the Good Teachers are, so the evaluation tool must give an answer that checks out against what they already believe to be true. (This technique is known as The Not Very Scientific Method).
These days TNTP shares TFA's desire to bring diversity to classrooms (which is, if nothing else, a more easily-defensible PR position), and like all good supporters of the status quo, they are determined to fight the status quo.
But today they have taken another step in their quest for the appearance of excellence by releasing the TNTP Core Teaching Rubric. And because it's a snow day in my neck of the woods, I've been perusing this document.
The TNTP Core Teaching Rubric streamlines today’s bloated rubrics to bring the same focus and coherence to classroom observations that the Common Core brings to academic standards.
TNTP's premise is that current rubrics are too big and messy and give the observationator way too much to do, and I can hear Danielson-burdened principals across the country say, "No shinola, Sherlock!" And let me give TNTP credit, because if their goal was to come up with a more light and airy rubric, they have scored a big win.
The rubric scores teachers across four areas. They are:
· STUDENT ENGAGEMENT: Are all students engaged in the work of the lesson from start to finish?
· ESSENTIAL CONTENT: Are all students working with content aligned to the appropriate standards for their subject and grade?
· ACADEMIC OWNERSHIP: Are all students responsible for doing the thinking in this classroom?
· DEMONSTRATION OF LEARNING: Do all students demonstrate that they are learning?
So, okay. Students engaged? Fine. I know research says there's no actual correlation between engagement and learning, but my teacher intuition agrees with everybody else's-- student engagement is good.
But essential content? We're seriously proposing to evaluate teachers based on whether or not they are covering the CCSS. You're right, TNTP-- there is not yet enough micromanaging of classroom teachers. Let's evaluate them on how well they allow themselves to be micromanaged.
"Are all students responsible for doing the thinking in the classroom?" Oh, good lord. I know somewhere in my head that these reformers prefer that teachers not think, but to just come out and say it is.... I don't know. Rude. Still, I think the taxpayers in my district would prefer that students not do ALL the thinking in my classroom. (And just to be clear, no, I didn't misplace the "all." If I say "I'll do the driving" or "She'll do the cooking," that does not indicate a shared task.) Later the document describes this element in terms tat make a little more sense, but that is an ongoing issue as well-- it's a short document, but it lacks internal consistency, as if each page was composed in a separate office.
Demonstration of Learning. And so we've hit all the basic reformer food groups. One part something that's supportable, one part bureaucratic nonsense, one part pedagogical nonsense, and now, one part something so obvious that only someone who knew nothing about teaching would think it needs to be pointed out. Oh, and twelve parts essential elements that have been left out because the creators don't know any better.
"Each performance has three components." We will be checking an essential question, descriptor language, and core teacher skills. The essential questions are close in wording to the descriptions above. The descriptor language is one more five-column rubric breaking all of these areas into specifics. As is typical of these holistic scoring tools, it takes an array of multiple details that allows for 152,633 possible configurations (I'm just roughly estimating here) and crams them into five different scores. For those of us who have been steeped in holistic scoring, it's not really as impossible as it seems.
The core teacher skills part is actually my favorite, because it's where the rubric backslides from its clean and simple lines. In this area, we try to reverse engineer what we think the teacher did in order to get the student behavior. For instance, if all the students demonstrate that they are learning, can we trace that back to teacher core skills of leading instruction, checking for understanding of content, and responding to student misunderstanding? Is it possible that, in keeping with the spirit of CCSS math, a teacher could arrive at the correct result, but not in the correct manner? At any rate, the teacher skills are not supposed to be part of the evaluation, but part of the conversation about the results.
As this is a pilot program, users are invited to "take what you learn from a pilot to inform ongoing training and norming. And please tell us what you learn" at an email address. You're invited to change the language of the rubric to fit your locale and reminded that this should be one of "multiple measures of performance." You didn't think we were going to leave student test scores out, did you?
Is there a research basis for this? Why, sure. It's the standard reformy model. In this case, TNTP leans on their experience training teachers for the field, but the formula is the same. We know that these are Excellent Qualities because Excellent Teachers use them, and we can identify those Excellent Teachers because they are the ones using Excellent Qualities. Though it should be noted that only a very few should receive the super-duper seal of excellent excellence, modeled on the winners of TNTP's Fishman Prize (an absolutely awesome name for a prize even though I'm sure the actual trophy is nowhere near as cool as the one I imagine).
So there you have it. Not evil or nefarious. Just kind of sloppy, ill-considered, and generally mediocre. Once we all get our school districts to volunteer to do TNTP's field testing for free, we'll have yet another superlative tool for evaluating teachers into such a state of excellence that they won't know what hit them.