The holidays are over, life is back to normal(ish), and your classroom has hit that post-holiday stride. It is time to finally make your voice heard on the subject of teacher preparation programs.
As you've likely heard, the USED would like to start evaluating all colleges, but they would particularly like to evaluate teacher preparation programs. And they have some exceptionally dreadful ideas about how to do it.
Under proposed § 612.4(b)(1), beginning in April, 2019 and annually thereafter, each State would be required to report how it has made meaningful differentiations of teacher preparation program performance using at least four performance levels: “low-performing,” “at-risk,” “effective,” and “exceptional” that are based on the indicators in proposed § 612.5 including, in significant part, employment outcomes for high-need schools and student learning outcomes.
And just to be clear, here's a quick summary from § 612.5:
Under proposed § 612.5, in determining the performance of each teacher preparation program, each State (except for insular areas identified in proposed § 612.5(c)) would need to use student learning outcomes, employment outcomes, survey outcomes, and the program characteristics described above as its indicators of academic content knowledge and teaching skills of the program's new teachers or recent graduates. In addition, the State could use other indicators of its choosing, provided the State uses a consistent approach for all of its teacher preparation programs and these other indicators are predictive of a teacher's effect on student performance.
Yes, we are proposing to evaluate teacher prep programs based on the VAM scores of their graduates. Despite the fact that compelling evidence and arguments keep piling up to suggest that VAM is not a valid measure of teacher effectiveness, we're going to take it a step further and create a great chain of fuzzy thinking to assert that when Little Pat gets a bad grade on the PARCC, that is ultimately the fault of the college that granted Little Pat's teacher a degree.
Yes, it's bizarre and stupid. But that has been noted at length throughout the blogosphere already. Right now is not the time to complain about it on your Facebook page.
Now is the time to speak up to the USED.
The comment period for this document ends on February 2. All you have to do is go to the site, click on the link for submitting a formal comment, and do so. This is a rare instance in which speaking up to the people in power is as easy as using the same device you're using to read these words.
Will they pay any attention? Who knows. I'm not inclined to think so, but how can I sit silently when I've been given such a simple opportunity for speaking up? Maybe the damn thing will be adopted anyway, but when that day comes, I don't want to be sitting here saying that I never spoke up except to huff and puff on my blog.
I just gave you a two-paragraph link so you can't miss it. If you're not sure what to say, here are some points to bring up:
The National Association of Secondary School Principals has stated its intention to adopt a document stating clearly that it believes VAM has no use as an evaluation tool for teachers.
The American Statistical Association has stated clearly that test-based measures are a poor tool for measuring teacher effectiveness.
A peer-reviewed study published by the American Education Research Association and funded by the Gates Foundation determined that “Value-Added Performance Measures Do Not Reflect the Content or Quality of Teachers’ Instruction.”
You can scan the posts of the blog Vamboozled, the best one-stop shop for VAM debunking on the internet, for other material. Or you can simply ask how a college education department can possibly be held accountable for the test scores of K-12 students.
But write something. It's not very often that we get to speak our minds to the Department of Education, and we can't accuse them of ignoring us if we never speak in the first place.
Monday, January 5, 2015
Monday, December 15, 2014
Duncan in Denial
There are many portions of Arne Duncan's educational policies that are... what's the word? Counter-intuitive? Not aligned with reality as experienced by most sentient beings? Baloney? There are days when I imagine that the energy Duncan expends just holding cognitive dissonance at bay must be enough to power a small country (like, say, Estonia).
But nowhere are Duncan's powers of denial more obvious than in his deep and abiding love for Value Added Measures. Arne loves him some VAM sauce, and it is a love that simply refuses to die. "You just don't know her the way I do," he cries, as the rest of us just shake our heads.
At this point, VAM is no spring chicken, and perhaps when it was fresh and young some affection for it could be justified. After all, lots of folks, including non-reformy folks, like the idea of recognizing and rewarding teachers for being excellent. But how would we identify these pillars of excellence? That was the puzzler for ages until VAM jumped up to say, "We can do it! With Science!!" We'll give some tests and then use super-sciency math to filter out every influence that's Not a Teacher and we'll know exactly how much learnin' that teacher poured into that kid.
The plan is simple and elegant. All it requires is two simple tools:
1) A standardized test that reliably and validly measures how much students know
2) A super-sciency math algorithm that will reliably and validly strip out all influences except that of the teacher.
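For the curious, the core idea behind that second tool can be sketched in a few lines. This is a deliberately crude toy, not any state's or vendor's actual model; every teacher name and score in it is invented for illustration. It predicts each student's test score from the prior year's score, then credits each teacher with the average leftover (the residual), on the theory that whatever the prediction misses must be the teacher.

```python
# Toy value-added sketch: regress current scores on prior scores,
# then assign each teacher the mean residual of their students.
# All names and numbers below are invented for illustration.

def value_added(records):
    """records: list of (teacher, prior_score, current_score) tuples.
    Returns {teacher: mean residual} under a one-variable linear model."""
    n = len(records)
    xs = [r[1] for r in records]
    ys = [r[2] for r in records]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Ordinary least squares slope and intercept for y = a + b*x
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    totals, counts = {}, {}
    for teacher, prior, current in records:
        residual = current - (intercept + slope * prior)
        totals[teacher] = totals.get(teacher, 0.0) + residual
        counts[teacher] = counts.get(teacher, 0) + 1
    return {t: totals[t] / counts[t] for t in totals}

data = [
    ("Smith", 70, 78), ("Smith", 60, 66),
    ("Jones", 70, 72), ("Jones", 60, 62),
]
print(value_added(data))  # Smith's students beat the prediction; Jones's trail it
```

Notice what the sketch gives away: everything the prediction fails to capture, like poverty, attendance, class composition, or plain old test noise, lands in that residual and gets labeled "the teacher." That is precisely the objection.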
Unfortunately, we don't have either.
We know we don't have either. We are particularly clear on the degree to which we do not have the second. Scan the list of reformster programs, and while you can find plenty of principled disagreement on most points, there is no part of the reformster education platform that has been so thoroughly, widely debunked as VAM-for-teacher-evaluation. The National Association of Secondary School Principals has taken a stand, and if you read their resolution, you'll find not just a philosophical argument, but a list of striking debunkers. The American Statistical Association has made its own statement in opposition. A peer-reviewed study paid for by the Gates Foundation itself, the grand-daddy of all reformster backers, declared in no uncertain terms that VAM tells us nothing about teacher quality. The blog Vamboozled (by Audrey Amrein-Beardsley) provides unplumbable depths of VAM-busting research and essays.
At this point, even the Flat Earth Society would be reluctant to endorse VAM as a measure of teacher effectiveness.
NO portion of his policy has been so thoroughly disproven, and yet no portion of his policy has earned more of Duncan's loyalty. He stopped saying "Common Core" out loud. He at least pretends to be cooling off on testing. Even he has to admit that some charters have issues. And data collection has become the love that dare not speak its name. But VAM still owns a place close to Arne's heart.
Witness the most recent doubling down on VAM, in which Duncan not only pledges his allegiance to the flagging monster, but announces his intention to extend its reach, taking the already invalid VAM ratings of individual teachers and taking a giant leap backwards to use them to evaluate the college that trained that teacher. Is there anybody else who can present this idea with a straight face? Read Anthony Cody here as he takes this proposal down, then note that you have over a month to register your disagreement with the feds, and do it.
Why would someone who professes such love for data and critical thinking stay so attached to a policy that is supported by neither? Why does Duncan insist on such a mountain of denial?
Well, I can't pretend to see into his brain. But I can see that if Duncan were to admit that his beloved VAM is a useless tool, a snub-nosed screwdriver with a briar-encrusted handle, then all his other favorite programs would collapse as well.
Everywhere we turn in reformsterland, we keep coming back to teacher effectiveness. Every one of the policies and programs either begins or ends with measuring teacher effectiveness. Why do we give the Big Test? To measure teacher effectiveness. How do we rank and evaluate our schools? By looking at teacher effectiveness. How do we find the teachers that we are going to move around so that every classroom has a great teacher? With teacher effectiveness ratings. How do we institute merit pay and a career ladder? By looking at teacher effectiveness. How do we evaluate every single program instituted in any school? By checking to see how it affects teacher effectiveness. How do we prove that centralized planning (such as Common Core) is working? By looking at teacher effectiveness. How do we prove that corporate involvement at every stage is a Good Thing? By looking at teacher effectiveness. And by "teacher effectiveness," we always mean VAM (because we don't know any other way, at all).
If our measure of teacher effectiveness, our magic VAM sauce, is a sham and a delusion and a big bowl of nothing, then a critical piece of the entire reformy puzzle is missing. We have no proof that we need reform, and we have no method of proving that reform is working (we already have means of measuring reform's effects, but we don't like those because the answers are not the ones we want).
Duncan has to hold onto his belief in VAM because without it, the whole ugly sweater of reform starts to unravel even faster than it already is.
VAM is the compass by which reform steers. To admit that it is random and useless would be to admit that our political leaders have been piloting the ship of education blindly, cluelessly, haplessly, that they are steering us onto the rocks and that they have no idea how to get us anywhere else. Either that, or they would have to admit that they've known all along exactly where they were taking us, and the VAM compass has just been a big fat lie to keep the passengers quiet and calm. Either way, admitting VAM is a fraud would be inviting (further) mutiny, and Duncan can't do that any time soon.
Friday, November 21, 2014
Principals vs. VAM
The National Association of Secondary School Principals issued a statement on November 7 that it intended to adopt a policy statement regarding the use of Value-Added measures in teacher evaluation. The policy statement is currently in its 60-day comment period, with final deliberation on the policy at the February meeting.
You can read the whole thing here, and you should. But let me run through the SparkNotes version for you.
The Challenge
States are adopting new VAM measures that count for up to 50% of teacher evaluation scores in some states. At the same time, states have been adopting "more rigorous college- and career-ready standards. These standards are intended to raise the bar from having every student earn a high school diploma to the much more ambitious goal of having every student be on-target for success in post-secondary education and training."
Do you detect a whiff of feistiness in the NASSP language? It's subtle, but I think I can scent it on the breeze.
For instance, the statement notes that the new standards require a departure from the "old, much less expensive" tests. "Not surprisingly," raising the bar and adding new assessments result in far fewer "proficient" students.
Herein lies the challenge for principals and school leaders. New teacher evaluation systems demand the inclusion of student data at a time when scores on new assessments are dropping. The fears accompanying any new evaluation system have been magnified by the inclusion of data that will get worse before it gets better. Principals are concerned that the new evaluation systems are eroding trust and are detrimental to building a culture of collaboration and continuous improvement necessary to successfully raise student performance to college and career-ready levels.
And then there's VAM.
The Trouble With VAM
Given what VAM claims it can do, "at first glance, it would appear reasonable to use VAMs to gauge teacher effectiveness." But the statement continues-- "Unfortunately, policy makers have acted on that impression over the consistent objections of researchers" who have said it's a bad idea. And then they start ticking off the VAM objections.
They cite the 2014 American Statistical Association report urging schools not to use VAM to make personnel decisions. They offer some strong quotage from the ASA report.
They cite the "peer-reviewed study" funded by Gates and published by AERA which stated emphatically that "Value-added performance measures do not reflect the content or quality of teachers' instruction." This study went on to note that VAM doesn't seem to correspond to anything that anybody considers a feature of good teaching.
They cite the objections of researchers Bruce Baker and Edward Haertel. They move on to Linda Darling-Hammond. They include plenty of well-researched, clear but not inflammatory language that hammers away at how VAM simply can't be used to evaluate teachers in any real or meaningful way. It's very direct, very clear, and kind of awesome.
Their Recommendations
I'll compress here.
NASSP recommends that teacher evaluation include multiple measures, and that Peer Assistance and Review programs are the way to go. Teacher-constructed portfolios of student learning are also cool.
VAMs should be used to fine-tune programs and instructional methods as well as professional development on a building level, but they should not be "used to make key personnel decisions about individual teachers." Principals should be trained in how to properly interpret and use VAMmy data.
And they have footnotes.
If you are looking for a clear-headed professional take-down of the idea that VAM should be used for personnel decisions, written by the people who have to help make those decisions, here it is. As many reformsters on the TNTP-Fordham-Bellwether axis of reformdom bemoan the fact that school leaders don't use data to inform their personnel decisions, here is an actual national association of actual school leaders saying why they prefer not to use VAM data to make personnel decisions. Now if only reformsters and policy makers would actually pay attention to the school leaders on the front lines.