Bellwether Education Partners is a right-leaning pro-reform outfit that often comes across as the Fordham Institute's little brother. Like most such outfits, they like to crank out the occasional "report," and their latest is an interesting read. "No Guarantees" by Chad Aldeman and Ashley LiBetti Mitchel is a look at the teacher creation pipeline that asks the subheading question, "Is it possible to ensure that teachers are ready on day one?"
The introduction sets the tone for the piece:
The single best predictor of who will be a great teacher next year is who was a great teacher this year.
The second best predictor is... Well, there really isn’t one that’s close.
And that carries right through to the title of the first section-- "We Don't Know How to Train Good Teachers."
Let me be clear right up front. My own teacher training came from a not-so-traditional program, and my experience with student teachers over the decades does not incline me to offer an uncritical, spirited defense of our current techniques for preparing teachers for the classroom. So I'm not unsympathetic to some of Bellwether's concerns. I just think they miss a few critical points. Okay, several. Let's take a look at what they have to say.
What We Don't Know
The authors note that teacher preparation has always focused on inputs, and those inputs include a lot of time and a buttload of money. But there's not much research basis to support those inputs. And they break down the various points at which we don't know things.
"We don't know which candidates to admit." Tightening admission requirements, checking SAT scores, tough admission tests-- these all seem like swell ideas to some folks, but there's no proof that tougher admissions policies lead to better teachers. This makes sense-- why would things like SAT scores, which are not highly predictive of much of anything,
"We don't know what coursework to require-- if any." On the one hand, there are many teacher preparation programs that involve ridiculous, time-wasting courses. I'd bet that almost every teacher who ever worked with a student teacher has stories of playing that game where, during a supervisory visit from the college, the student and co-operating teacher pretend to be using some method endorsed by the university and implemented by approximately zero real live classroom teachers. On the other hand, if you think a teacher can be adequately prepared without any methods courses at all, or courses dealing with child development-- that any random assortment of courses is as good as any other assortment-- then you are just being silly.
"We don't know what the right certification requirements are." The authors don't have an actual point here other than, "Why shouldn't people who have been through a short-- say, five weekish-- training program be just as certifiable as people who studied teaching?" The reformster vision is deeply devoted to the idea that The Right People don't need any of that fancy-pants teacher training, and even when they are being relatively even-handed, they can't get past that bias.
"We don't know how to help teachers improve once they begin teaching." This has been covered before, in the TNTP "report" The Mirage.The short answer is that the most effective professional development happens when it control of it is in the hands of the teachers themselves. The disappointing or non-existent results are not so much related to Professional Development as they are related to Programmed Attempts To Get Teachers To Do What Policymakers Want Them To, Even If The Ideas Are Stupid or Bad Practice.
What We Really Don't Know
What Bellwether and other reformsters really don't know is how to tell whether any of these factors make a difference or not. What they really don't know is how to identify a great teacher. Every one of the items above is dismissed on the grounds of showing no discernible effect on "student achievement" or "teacher effectiveness" or other phrases that are euphemisms for "student scores on standardized tests."
This is a fair and useful measure only if you think the only purpose of a teacher, the only goal of teaching as a profession, is to get students to score higher on standardized tests. This is a view of teaching that virtually nobody at all agrees with (and I include in that "nobody" reformsters themselves, who do NOT go searching for private schools for their children based on standardized test scores).
Bellwether's metric and criticism is the equivalent of benching NBA players based on how well their wives do at macrame. The Bellwether criticism only seems more legit because it overlaps with some issues that deserve some thoughtful attention. The problem is that all the thoughtful attention in the world won't do any good if we are using a lousy metric to measure success. Student standardized test scores are a lousy metric for almost anything, but they are a spectacularly lousy metric for finding great teachers.
So Let's Talk About Outcomes
Next up, we contemplate the idea of measuring teacher preparation programs by looking at their "outcomes." This has taken a variety of forms, the most odious of which is measuring a college teaching program by looking at the standardized test results of the students in the classrooms of the graduates of the program, which (particularly if you throw some VAM junk science on top) makes a huge baloney sandwich that can't be seriously promoted as proof of anything at all. This is judging an NBA player based on the math skills of the clerk in the store that sells the wife-made macrame.
Another outcome to consider is employment rates, which is actually not as crazy as it seems; at the lowest ebb of one local college's program, my district stopped sending them notices of vacancies because their graduates were so uniformly unprepared for a classroom. But of course graduates' employment prospects can be affected by many factors far outside the university's control.
Aldeman and Mitchel provide a good survey of the research covering interest in outcomes, and they fairly note that efforts at outcome-based program evaluations have run aground on a variety of issues, not the least of which is that the various models don't really find any significant differences between teacher prep programs. Focusing on outcomes, they conclude, seems to be a good idea right up to the point you try to actually, practically do it.
What Might Actually Work
All of this means that policymakers are still looking for the right way to identify effective teacher preparation and predict who will be an effective teacher. Nothing tried so far guarantees effective teachers. Yet there are breadcrumbs that could lead to a better approach.
Aldeman and Mitchel have several breadcrumbs that strike them as tasty. In particular, they note that teacher quality is fairly predictable from day one-- the point at which teachers are actually in a classroom with actual students. Which-- well, yes. That's the point of student teaching. But I agree-- among first year teachers I think you find a small percentage who are excellent from day one, a smaller percentage that will be dreadful (the percentage is smaller because student teaching, done right, will chase away the worst prospects), and a fair number who can learn to be good with proper mentoring and assistance.
But Bellwether has four recommendations. They make their case, and they note possible objections.
Make it easier to get in
Right now getting into teaching is high risk, high cost, and low reward. There's little chance for advancement. There is considerable real cost and opportunity cost for entering the profession, which one might suppose makes people less likely to do so.
Drop the certification requirements, knock off foolishness like EdTPA, punt the Praxis, and just let anybody who has a hankering into the profession. Local schools would hire whoever they felt inclined to hire. Would-be teachers might still enroll in university programs in hopes that it would improve their chances-- "add value," as these folks like to put it. But the market would still be flooded with plenty of teacher wanna-bes. And I'm sure that if any of those wanna-bes were open to working for lower pay because it hadn't cost them that much to walk into the profession, plenty of charter and private and criminally underfunded public schools would be happy to hire these proto-teachers.
The authors note the objection to untrained teachers in the classroom, and generally lowering the regard for the profession by turning it into a job that literally anybody can claim to be qualified for. The "untrained teacher" objection is dismissed by repeating that there's no proof that "training" does any good. At least, no proof that matches their idea of proof. As for the regard for the profession, the authors wax philosophical-- who really knows where regard for a profession comes from, anyway??
What did they miss here? Well, they continue to miss the value of good teacher preparation programs which do a good job of preparing teachers for the classroom. But even the worst programs screen for an important feature-- how badly do you want it? One of the most important qualities needed to be a good teacher is a burning, relentless desire to be a good teacher, to be in that classroom. Even if a program requires candidates to climb a mountain of cowpies to then fill out meaningless paperwork at the top, it would be marginally useful because it would answer the question, "Do you really, really want to be a teacher?"
The teaching profession has no room for people who are just trying it out, thought it might be interesting, figured they might give it a shot, want to try it for a while, or couldn't think of anything else to do. Lowering the barriers to the profession lets more of those people in, and we don't need any of them.
Make schools and districts responsible for licensing teachers
Again, this is an idea that would make life so much easier for the charters that Bellwether loves so much. It's still an interesting idea-- the authors are certainly correct to note that nobody sees the teacher being a teacher more clearly or closely than the school in which that teacher works. The authors suggest that proto-teachers start out in a low-stakes environment like summer school or after-school tutoring, both of which are so far removed from an actual classroom experience as to be unhelpful for our purposes. On top of that, it would seriously limit the number of new teachers that a district could take on, while requiring them to somehow bring those proto-teachers on a few years before they were actually needed for a real classroom-- which would require a special school administrator's crystal ball.
In other words, this idea is an interesting idea, but it will not successfully substitute for making sure that a candidate has real teacher training in the first place.
The other huge problem, which they sort of acknowledge in their objections list, is that this only works if the school or district is run by administrators who know what the hell they're doing and who aren't working some sort of other agenda. A lousy or vindictive or just plain messed up administrator could have a field day with this sort of power. Possible abuses range from "you'll work an extra eight hours a week for free in exchange for certification" to "you'll serve as the building janitor for free to earn your certification" to "come see if you can find your teaching certification in my pants."
Measure and Publicize Results
Baloney. This is the notion of a market-driven new business model for teacher preparation, and it's baloney. We've already established that states can't collect meaningful data on teacher prep programs, and Bellwether wants to see the data collection expanded to all the various faux teacher programs. They've already said that nobody has managed to scarf up data in useful or reliable quantities; now they're saying, well, maybe someone will figure it out soon. Nope.
Unpack the Black Box of Good Teaching
This boils down to "More research is required. We should do some." But this is problematic. We can't agree on what a good teacher looks like, or even what they are supposed to be doing. Bellwether becomes the gazillionth voice to call for "new assessments that measures [sic] higher-order thinking," which is just unicorn farming. Those tests do not exist, and they will never exist. And their suggestion of using Teach for America research as a clue to great teaching is ludicrous as well. There is no evidence outside of TFA's own PR to suggest that TFA knows a single thing about teaching that is not already taught in teaching prep programs across the country-- and that several things they think they know are just not true.
Another huge problem with unpacking the black box is the assumption that the only thing inside that box is a teacher. But all teachers operate in a relationship with their students, their school setting, their community, and the material they teach. The continued assumption that a great teacher is always a great teacher no matter what, and so this fixed and constant quality can be measured and dissected-- that's all just wrong. It's like believing that a great husband would be a great husband no matter which spouse he was paired up with, that based on my performance as a husband to my wife, I could be an equally great partner for Hillary Clinton or Taylor Swift or Elton John or Ellen DeGeneres. I'm a pretty good teacher of high school English, but I'm pretty sure I would be a lousy teacher of fifth grade science.
Great teaching is complex and multifaceted and, on top of everything else, a moving target. It deserves constant and thorough study, because such research will help practitioners fit more tools into their toolbox, but there will never be enough research completed to reduce teaching to a simple recipe that allows any program to reliably cook up an endless supply of super-teachers suitable for any and all schools. And more to the point, the research seems unlikely to reveal that yes, anybody chosen randomly off the street can be a great teacher.
Teaching happens at a busy and complicated intersection, and operating there requires a variety of personal qualities, professional skills, and specialized knowledge.
Bottom Line
There are plenty of interesting questions and criticisms raised by this report, but the conclusions and recommendations are less interesting and less likely to be useful for anyone except charters and privatizers who want easier access to a pliable and renewable workforce. Dumping everything into the pool and just buying a bigger filter is not a solution. Tearing down the profession and pretending that no training really matters is silly. We do need to talk about teacher preparation in this country, but one of the things we need to talk about is how to keep from poisoning the well with the bad policies and unfounded assumptions of the reformster camp.
There are some good questions raised by this report, but we will still need to search for answers.
Monday, May 25, 2015
The Testing Circus: Whose Fault Is It?
Andrew Rotherham of Bellwether Education Partners, a reformster-filled thinky tank, took to the pages of US News last week to address the Testing Circus and shift the blame for it-- er, explain its origins.
The ridiculous pep rallies? The matching t-shirts? The general Test Prep Squeezing Out Actual Education? That's all the fault of the local districts. In fact, Rotherham notes, "a cynic might think it's a deliberate effort to sour parents on the tests." Yes, that's it-- the schools are just making all this up in an attempt to make the public think testing is stupid.
Reformsters have been doing this a lot-- trying to shift the blame for testing frenzy from the policy makers and the reformsters pushing testing policies onto the local teachers and districts. In a video that I cannot, for some reason, link, John White, education boss of Louisiana, argues that it's local tests from teachers and school districts that are muddying the testing water, and so every single test deployed in a classroom ought to come under the control and direction of the state. Or we could go back to Arne Duncan et al suggesting that we need to trim back "unnecessary" tests, which turns out to mean tests developed on the local level.
It is hard to see this working. Can we really mollify Mrs. McGrumpymom by saying, "We know that your child really hated the PARCC and found the whole experience stressful and useless, so we're going to have her teacher stop giving those weekly spelling quizzes. All better, right?"
As with Arne Duncan, who continually seems just oh so mystified about how schools could possibly have gotten so worked up over testing, the reformster mystery here is this: do they really not understand what they've done, or do they understand and are just unleashing the lamest PR campaign ever?
Rotherham blames the Testing Circus on three factors.
First, he thinks it's a matter of capacity. But his explanation suggests that he simply doesn't understand the problem.
What elementary schools are asked to do is daunting though not unreasonable. Getting students to a specific degree of literacy and numeracy is challenging but it can be done.
Bzzzzrtt!! Wrong. Elementary schools were not asked to get students to a specific degree of literacy and numeracy. They were commanded (do it, or else) to raise test scores, and that is what they have devoted themselves to. Achieving a specific degree of literacy and numeracy might help with that goal, but only if the test is a good and valid measure, and that topic is open to debate. On top of achieving the specific degree etc, students have to actually care about the test to the point that they try. Test advocates love to assume this as a given, and they are fools to do so. If I walk into your workplace and assign you a difficult task that seems unrelated to your actual job and which will have no effect on your rating or performance review, exactly how hard will you try?
It is not the reading and numeracy level that is the goal. It is the test score. Test advocates can pretend those are the same thing, but they are not. Schools can hang tough and refuse to start with pep rallies for the tests-- or they can recognize that the nine-year-olds who will decide their fate will do a better job if someone convinces them to try.
Second, new tests. Rotherham repeats a version of a new talking point that makes no sense. The new tests are causing turmoil, stress, and even low scores. These tests are more challenging because they test awesome things like critical thinking and consequently, they are impervious to Test Prep. However, students will do better as everyone gets used to the test. So, the new tests have nothing to do with Test Prep, but students will do better as they are better Prepared for the Test.
Third, new technology. One point for Rotherham, who pretty much admits that making everybody take the test on computer was a bad idea. But I'm going to take the point back because he does not acknowledge that the decision to do so was not a local or classroom foul-up, but a mandate pushed from the highest level of reformsterdom.
Rotherham is correct to argue that some schools have gone berserk on the Testing Circus and some have quietly avoided it. He would like to use this to assert that the Testing Circus is not inevitable, and there I don't think he has a point.
Some states have put more weight on the Big Standardized Test than others. On the local level, some superintendents and principals have gone whole hog on testing and some have done their best to tell teachers, "Just do your job and let the chips fall where they may."
But Rotherham et al cannot ignore that some pretty big chips are falling. New York teachers are looking at fifty percent of their professional rating coming from test scores, and they are not alone. Nor did states decide to roll test scores into teacher evaluations on a whim-- that's a federal mandate of Race to the Top and/or NCLB waivers. And all of us in the teacher biz can hear the hounds in the not-very-great distance calling for those same teacher ratings to be used to decide pay and job security.
Nor can Rotherham ignore that some states are invoking considerable punishment for low test scores, using low scores as an excuse to declare that a school is "failing" and must be turned around, replaced, bulldozed, or handed over to charter operators.
Reformsters seem to want the following message to come from somewhere:
"Hey, public schools and public school teachers-- your entire professional future and career rests on the results of these BS Tests. But please don't put a lot of emphasis on the tests. Your entire future is riding on these results, but whatever you do-- don't do everything you can possibly think of to get test scores up."
I have no way of knowing whether Rotherham, Duncan, et al are disingenuous, clueless, or big fat fibbers trying to paper over the bullet wound of BS Testing with the bandaid of PR. But the answer to the question "Who caused this testing circus" is as easy to figure out as it ever was.
Reformy policymakers and politicians and bureaucrats declared that test scores would be hugely important, and ever since, educators have weighed self-preservation against educational malpractice and tried to make choices they could both live with and which would allow them to have a career. And reformsters, who knew all along that the test would be their instrument to drive instruction, have pretended to be surprised testing has driven instruction and pep rallies and shirts. They said, "Get high test scores, or else," and a huge number of schools said, "Yessir!" and pitched some tents and hired some acrobats and lion tamers. Oddly enough, the clowns were already in place.
Saturday, February 7, 2015
Aldeman in NYT: Up Is Down
In Friday's New York Times, Chad Aldeman of Bellwether offered a defense of annual testing that is a jarring masterpiece of backwards speak, a string of words that are presented as if they mean the opposite of what they say. Let me hit the highlights.
The idea of less testing with the same benefits is alluring.
Nicely played, because it assumes that we are getting some benefits out of the current annual testing. We are not. Not a single one. The idea of less testing is alluring because the Big Standardized Test is a waste of time, and less testing means less time wasting.
Yes, test quality must be better than it is today.
Other than that, Mrs. Lincoln, how did you like the play? Again, this assumes that there is some quality in the tests currently being used. There is not. They don't need to be improved. They need to be scrapped.
And, yes, teachers and parents have a right to be alarmed when unnecessary tests designed only for school benchmarking or teacher evaluations cut into instructional time.
A mishmosh of false assumptions. First, there are no "necessary" tests, nor have I ever read a convincing description of what a "necessary" test would be nor what would make it "necessary." And while there are no Big Standardized Tests that are actually designed for school benchmarking and teacher evaluation, in many states that is the only purpose of the BS Test! The only one! So in Aldeman's view, would those tests be okay because they are being used for purposes for which they aren't designed?
But annual testing has tremendous value. It lets schools follow students’ progress closely, and it allows for measurement of how much students learn and grow over time, not just where they are in a single moment.
Wait! What? A test is, in fact, a single snapshot from a single day or couple of days-- and that doesn't just give a picture of where students are at a single moment? Taking a single moment from four or five consecutive years does not let anybody follow students' progress closely. This style of measurement is great for measuring student height-- and nothing else. This is like saying that the best way to assess the health of your marriage is to give your spouse a quiz one day a year.
Aldeman follows with several paragraphs pushing the disaggregation argument-- that by forcing schools to measure particular groups, somebody somewhere gets a better picture of how the school is doing. It is, as always, unclear who needs this picture. You're the parent of a child in one of the groups. You believe your child is getting a good education or a bad education based on what you know about your child. How does getting disaggregated data from the school change your understanding?
Besides, I thought we said a few paragraphs back that tests for measuring the school were bad and to be thrown out?
And of course that entire argument rests on the notion that the BS Test measures educational quality and there is not a molecule of evidence out there that it does so. Not. One. Molecule.
Coincidentally, the push for limiting testing has sprung up just as we’re on the cusp of having new, better tests. The Obama administration has invested $360 million and more than four years in the development of new tests, which will debut this spring. Private testing companies have responded with new offerings as well.
Oh, bullshit. New, better tests have been coming every year for a decade. They have never arrived. They will never arrive. It is not possible to create a mass-produced, mass-graded, standardized test that will measure the educational quality of every school in the country. It is like trying to use a ruler to measure the weight of a fluid-- I don't care how many times you go back to the drawing board with the ruler-- it will never do the job. Educational quality cannot be measured by a standardized test. It is the wrong tool for the job, and no amount of redesign will change that.
Good reminder though that while throwing money at public schools is terrible and stupid, throwing money at testing companies is guaranteed awesome.
Annual standardized testing measures one thing-- how well a group of students does at taking an annual standardized test. That's it. Even Aldeman here avoids saying what exactly it is that these tests (you know, the "necessary ones") are supposed to measure.
Annual standardized testing is good for one other thing-- making testing companies a buttload of money. Beyond that, they are simply a waste of time and effort.