Last February, Chad Aldeman and Ashley LiBetti Mitchel, working at Bellwether Partners (a right-tilted reformy-favoring thinky tank) released a report that asked the question "Is it possible to ensure teachers are ready on day one?" and answered that question in the title: "No Guarantees."
Now Aldeman is back with a look at some specific tools for filtering out the chaff, raising the bar, and pre-selecting the best and the brightest. The results fit snugly into the folder of Things Teachers Already Knew and Have Been Trying To Get Someone To Hear For Years, but that's okay-- let's take a look, and the next time you need to discuss this kind of baloney, you'll have some legit-ish research right at your fingertips.
Aldeman lays out the problem in a sassy tone that I have to respect:
First, there’s a lot of interest in “raising the bar” for the teaching profession. It’s not clear what this means exactly, but at root it implies that if we could somehow just recruit better people to become teachers, then “poof!” we’d have better teachers.
So Aldeman first looks at the beloved Praxis exams (and their various descendants). Full disclosure-- I can mock the Praxis exams because I am so old. How old am I? I'm so old that I never had to take the Praxis. But I've watched plenty of student teachers sweat it.
You will be shocked to discover that research shows no super-strong correlation between Praxis results and teacher effectiveness (and as always, I'll note that we aren't really talking about teacher effectiveness at all, but the test results of students assigned to that teacher-- but at the moment I'm playing in the reformster sandbox, so we're stuck with their rules). Looking at a couple of state comparisons (because states can set different "passing" scores for the Praxis), researchers found that teachers who did well on the Praxis were, on average, slightly more effective than those who scored poorly, but the differences are tiny and the "on average" hides wide ranges of results.
Nor does it make a difference whether we're talking Praxis I or Praxis II.
There are plenty of possible explanations for the Praxis's lack of predictive power, but I think we can go with the obvious. The Praxis measures a math teacher's ability to take a standardized multiple-choice math test, not their ability to teach math. If you want to get your carburetor fixed, you don't give mechanics a multiple-choice test to take-- you find someone who does a good job working on carburetors. If you need a doctor to fix your spleen, you find someone who is known to be good at operating on spleens, not somebody who's good at taking multiple-choice tests about spleens.
It's true that at the very bottom, a test may be helpful. Someone who can't get any questions right on the math Praxis probably doesn't know enough math to teach math. But once we get out of the basement, we are trying to find the best apples by seeing which ones make the best orange juice.
Aldeman asks the question-- if a bubble test isn't the right model, then how about something more open-ended? How about, for example, edTPA?
Well, here's a research paper looking at edTPA and math teachers and-- whoopsies-- other than a general overall average trend as we saw with Praxis, edTPA doesn't really tell you anything about the value-added prospects for that proto-teacher (and VAM doesn't tell you anything about anything, but that's the tool the researcher chose). The scatterplot looks like someone sneezed on graph paper.
Aldeman is looking for a policy tool, something that policymakers can impose on the system to filter out more bad teachers. I'd submit that Huge Problem #1 is that we have exactly zero, zip, nada tools that can assess the effectiveness of teachers in the field. If I can't tell a good apple from a bad apple when they're in front of me, how will I ever tell them apart when they're just buds on the tree?
But even when we use the tools for detecting effectiveness currently preferred by reformsters, Aldeman concludes that there is still no useful policy tool available. States that use Praxis or edTPA to keep some people out of the teaching profession are barring people who would be effective teachers, and admitting other people who aren't so hot. Which makes these tests bad policy tools.
Should we give up? Aldeman says no, but "the locus of control should shift from states to districts." Because as Aldeman also notes, "what’s useful for a district may not be actionable in policy, because picking the best option between two possible teachers is a different question than whether those teachers deserve to enter the profession at all." He's absolutely correct. I would add that the best option between two teachers is also a question that has a different answer for each different district.
I also agree with his conclusion--
...states should stop trying to do the impossible in finding the “right” bar to keep people out of the teaching profession.
You cannot standardize teaching, and you cannot standardize the requirements for becoming a teacher. Each local district has to make the best choices its local leaders can make, based on interviews, demonstrations, portfolios, recommendations, all filtered through the professional judgment of the local decision-makers. It is not perfect, but as the saying goes, the only thing worse is every other method.
Short and sweet. The reformers want measures that create infallible decision making. That's the point of BS tests, VAM, edTPA and an assortment of other things. The leaders are no longer responsible for judgment and decision making. The numbers will tell the story and that's it.
They want a foolproof system, but they can't get it. Humans aren't predictable. If we were, I'd do better in relationships. This is the folly of measurement and data collection. It's a lot of work for small advantages, but it can't and won't transform a system with the results they desire. They're chasing their tails, and we're the victims of this pursuit of the improbable.