As you contemplate your end-of-the-year evaluation paperwork, you are probably thinking (and not for the first time), "This doesn't make any sense." And you are correct. Current practices in teacher evaluation do not make sense-- if you assume that the purpose of these evals is to actually evaluate teachers accurately and effectively.
A good evaluation system gives the employees clear and useful feedback-- a picture of what they do well, and a plan for what they can improve. A good evaluation system also provides management with a clear picture of their organization's strengths and weaknesses. Current thought in teacher evaluation is not interested in either of these.
Proving What We Already Think We Know
Reformsters are sure that schools are failing, and that they are failing because they are packed floor to ceiling with stinky bad teachers. So evaluations don't need to be created in order to answer the question, "How are we doing?" Reformsters already know how we're doing-- we're failing. What they need is an evaluation system that confirms what we already know.
Hence stack ranking for schools. Stack ranking (ICYMI) is a now-discredited corporate model that involved determining the distribution of rankings before anyone was even evaluated. If there are ten employees in your department, we know before we even start the process that two are excellent, two are poor, and six are fair-to-middlin'.
In teacher eval land, this crops up as statements like "You don't live in excellent/distinguished/super-duper. You just visit." This is not a comment on your actual ability; the system starts with the assumption that there are very few teachers who are really good, and probably only in occasional moments. We are not looking to find excellence, because we already know it is not there.
This is just like deciding, before you even hand out the test, exactly which grades will be given, and then grading the tests by matching each test to one of the pre-determined grades. Whether your students all ace it or all flunk it, the pre-determined grades rule the outcome.
The Illusion of Objectivity
Reformsters think numbers are magical, and that only concrete objects are real (this is one of many reasons that one tends to assume that reformsters have rather sad and shallow inner lives).
If an administrator sees something with his own eyes that he can write down on paper, that must be objective. If an administrator is looking at an artifact that he can touch with his own hands, and he assigns it a number, that must be objective. Because, numbers.
I imagine the people who design these kinds of systems sitting at home evaluating the relationships in their lives. "Well, spousal unit, we performed sexual intercourse a total of two times this month, with an average duration of seventeen minutes. I cross-checked this with the video record of those events and determined that your facial expression shows a 5.7 on the arousal scale, giving us a solid 42 scale intimacy rating for this month. This compares to a 46 rating in April and a 51 in March, by which I must conclude that we are experiencing a significant decrease in marital satisfaction, and -- wait? why are you packing??"
Objectivity in teacher evaluation is an illusion. Because, human beings. If your boss hates you and is out to get you, no system in the world can keep him from finding a way to game your evaluation to hurt you. If she's a decent person who is trying to do the best for her people, no system can keep her from doing so (though we're trying hard to come up with a system that keeps her from succeeding).
Even if we hand evaluators a specific list of behaviors to check off as signifiers of teacher quality, that list is itself a reflection of the bias of the person who made it, and the observer's own biases will affect what he does or doesn't see. There is no such thing as an objective measure of teacher quality. It does not exist. It has never existed. It will never exist. To present a system and claim that it is objective is in and of itself a demonstration of subjective biases about teaching.
Baloney Out Of Your Control
Depending on your location, you are subject to a bunch of evaluative baloney beyond your control: student scores on high stakes tests, building-wide ratings, student participation in programs like AP.
This is simply hostage-taking. We want you to take these stupid pointless useless high stakes tests seriously, so we will hold your job rating hostage until you do. We want you to think AP courses are worth spending money on, so we will give you a job rating bump if you give us money.
It's also building in a fail-safe for Reformsters. If we left it up to things that are in your control, it would be harder to get the results that we want. Throwing in some X factors helps guarantee that you won't somehow game the system and keep us from finding the widespread failure that the system exists to "reveal."
Saving for a Rainy Day
In Pennsylvania, we go through a long, convoluted process to arrive at a pass-fail grade for teachers. Many other states have also chosen a relatively low-impact approach to evaluating, and so teachers feel relatively unthreatened by the process. Don't be fooled. The data are there, showing a wide range of teacher ability and "proving" that there's a vast pulsating pool of teacherly awfulness. Just because they haven't put the data in your local newspaper yet doesn't mean they won't get around to it.