The Center for Education Reform is a charter promotion group, perhaps one of the most cynical and self-serving of the reformster groups. Search their website for information or ideas about education-- the actual pedagogy and instruction in a classroom-- and you will find nothing, because the Center has no actual interest in education.
Check out their board of directors-- you will find a combination of money managers and charter school operators. That is where the Center's interest lies-- in getting more money into more charters.
And what stands in the way of these corporate interests making a better, bigger buck? Well, those damn unions, of course. The Center may not have any section devoted to actually educating children, but they have a whole tab devoted to those damn unions, and here's What They Believe:
We believe that the special interests that draw funds from the tax dollars funding public education, and that have become an intransient [sic-- pretty sure they mean "intransigent," though "intransient" as in "won't move away to some other place" might suit them as well] force in political and policy circles, have outlived the usefulness of the associations they once had and have become obstacles to programs and activities that can best and most judiciously serve children. Such groups—from teachers unions, to the associations of administrators, principals, school boards and hybrids of all (e.g., “The Blob”)—should be free to organize but without access to the dollars that are spent to fund schools and should be free to recruit but not mandate members, but they should not have a public stream of money that permits the dues of members to subsidize their defense of the status quo.
The Center is currently excited with itself because it placed a quote in a Wall Street Journal article. The piece (behind a paywall) discusses the desire of some charter teachers to unionize. Or, as the Center headlined it in their regular email, "Teachers at Successful Los Angeles Charter School Organization Being Manipulated by Union Leaders."
The charter in question is the Alliance charter, a chain run by rich folks like a former mayor of LA and the owner of the Atlanta Hawks. Alliance is a big gun in the LA charter scene, and seventy of its 500-person teacher workforce started pushing for a union last spring.
"We believe that when teachers have a respected voice in policymaking it leads to school sustainability and teacher retention," said Elana Goldbaum, who teaches history at Gertz-Ressler High School, a member of the Alliance group. "We have a lot of talent and we want to see that stay. We want to see our teachers be a part of the decision-making and we want to advocate for our students and ourselves."
The union movement has sparked controversy, with the LA union claiming interference on the part of charter management and Alliance saying the teachers feel harassed by the union. The struggle escalated at the end of October when the California Public Employment Relations Board sued Alliance for engaging in anti-union activity.
All of this, somehow, is the evil union pulling the wool over the eyes of the poor, hapless teachers.
Look, the big unions are no angels, and the big-city unions are probably the least angelic of all. But you know that teachers need some kind of union when the charters are letting loose with baloney like this, the quote from the WSJ of which the Center is so proud:
“It’s not surprising that teachers that work at charter schools would not want to join a union,” said Alison Zgainer, executive vice president of the Center for Education Reform, a pro-charter organization in Washington, D.C. “They want more autonomy in the classroom, and being part of a union you lose that autonomy.”
I guess Zgainer is referring to "autonomy" as defined by charter operators-- the autonomy to be told you must work long hours over a long week. The autonomy to have instruction strictly dictated. The autonomy to be paid as little as the charter wants to pay you. The autonomy to be fired any time the charter feels like it. The autonomy to be trained in "no excuse" techniques that are just as prescriptive of teacher behavior as they are of student behavior. That autonomy.
The autonomy that business-driven charters care about is the autonomy of management. Their dream is the same dream as that of the 19th century robber barons who fought unions tooth and nail. It's a dream where a CEO sits in his office and runs his company with complete freedom to hire and fire, raise and lower salaries, and change the work hours (or any other terms of employment) at will. It's a dream of a business where the CEO is a visionary free to seek his vision (and profit from it) without having anyone ever say "no" to him.
That's the autonomy that folks like the Center for Education Reform are interested in.
In the CEO-centered vision of school, unions are bad. Unions are evil obstacles that dare to make rules by which the CEO must abide (they are often aided by Big Government, which also dares to interfere with the CEO). I think these folks believe in the myth of the Hero Teacher because it echoes the myth of the Hero CEO-- a bold genius who makes the world a better place by pushing aside all obstacles, including the people who don't recognize his genius, until he arrives at the mountain top, loved and praised by all the Little People who are grateful that he saved them. Compromise and collaboration are for the weak, and unions are just weaklings who want to drag down the Hero CEO because they are jealous of his awesomeness and afraid that their undeserved power will be stripped from them by his deserving might.
In this topsy-turvy world, unions must be crushed not just because they set up rules to thwart the Hero CEO, but because they are holding captive all the teachers who really want to give themselves body and soul to the Hero CEO's genius vision, but the union won't let them. Damn you, evil unions.
This does not explain all charter supporters (it does not, for instance, reflect the motivations of the social justice warrior school of charter support). But it sure does explain some, even as it is oddly reminiscent of "We'll be greeted as liberators" and the tantrums of any three-year-old. But I hope that the Center for Education Reform has to live impotently with the threat of evil unions for years to come.
Wednesday, November 18, 2015
Tuesday, November 17, 2015
Accelerated Reader's Ridiculous Research
If you are not familiar with Renaissance Learning and their flagship product, Accelerated Reader, count yourself lucky.
Accelerated Reader bills itself as a reading program, but it would be more accurate to call it a huge library of reading quizzes, with a reading level assessment component thrown in. That's it. It doesn't teach children how to read; it just puts them in a computerized Skinner box that feeds them points instead of pellets for performing some simple tasks repeatedly.
Pick a book (but only one on the approved AR list). Read it. As soon as you've read it, you can take the computer quiz and earn points. AR is a great demonstration of the Law of Unintended Consequences as well as Campbell's Law, because it ends up teaching students all sorts of unproductive attitudes about reading while twisting the very reading process itself. Only read books on the approved list. Don't read long books-- it will take you too long to get to your next quiz to earn points. If you're lagging in points, pick short books that are easy for you. Because the AR quizzes are largely recall questions, learn what superficial features of the book to read for and skip everything else. And while AR doesn't explicitly encourage it, this is a program that lends itself easily to other layers of abuse, like classroom prizes for hitting certain point goals. Remember kids-- there is no intrinsic reward or joy in reading. You read only so that somebody will give you a prize.
While AR has been adopted in a huge number of classrooms, it's not hard to find folks who do not love it. Look at some articles like "3 Reasons I Loathe Accelerated Reader" or "Accelerated Reader: Undermining Literacy While Undermining Library Budgets" or "Accelerated Reader Is Not a Reading Program" or "The 18 Reasons Not To Use Accelerated Reader." Or read Alfie Kohn's "A Closer Look at Reading Incentive Programs." So, a wide consensus that the Accelerated Reading program gets some very basic things wrong about reading.
But while AR sells itself to parents and schools as a reading program, it also does a huge amount of work as a data mining operation. Annually the Renaissance people scrape together the data that they have mined through AR and they issue a report. You can get at this year's report by way of this website.
The eyebrow-raising headline from this year's report is that a mere 4.7 minutes of reading per day separate the reading stars from the reading goats. Or, as US News headlined it, "Just a Few More Minutes Daily May Help Struggling Readers Catch Up." Why, that's incredible. So incredible that one might conclude that such a finding is actually bunk.
Now, we can first put some blame on the media's regular issues with reporting sciency stories. US News simply ran a story from the Hechinger Report, and when Hechinger originally ran it, they accompanied it with the much more restrained headline "Mining online data on struggling readers who catch up: A tiny difference in daily reading habits is associated with giant improvements." But what does the report actually say?
I think it's possible that the main finding of this study is that Renaissance is a very silly business. I'm no research scientist, but here are several reasons that I'm pretty sure that this "research" doesn't have anything useful to tell us.
1) Renaissance thinks reading is word encounter.
The first chunk of the report is devoted to "an analysis of reading practice." I have made fun of the Common Core approach of treating reading as a set of contextless skills, free-floating abilities that are unrelated to the content. But Renaissance doesn't see any skills involved in reading at all. Here's their breakdown of reading practice:
* the more time you practice reading, the more vocabulary words you encounter
* students who spend more time on our test-preppy program do better on SBA and PARCC tests
* students get out of the bottom quartile by encountering more words
* setting goals to read more leads to reading more
They repeatedly interpret stats in terms of "number of words," as if simply battering a student with a dictionary would automatically improve reading.
2) Renaissance thinks PARCC and SBA are benchmarks of college and career readiness
There is no evidence to support this. Also, while this assumption pops up in the report, there's a vagueness surrounding the idea of "success." Are they also using success at their own program as proof of growing student reading swellness? Because that would be lazy and unsupportable, an argument that the more students do AR activities, the better they get at AR activities.
No, if you want to prove that AR stuff makes students better at reading, you'll need a separate independent measure. And there's no reason to think that the SBA or PARCC constitute valid, reliable measures of reading abilities.
Bottom line: when Renaissance says that students "experienced higher reading achievement," there's no reason to believe that the phrase means anything.
3) About the time spent.
Much ado is made in the report about the amount of time a student spends on independent reading, but I cannot find anything to indicate how they are arriving at these numbers. How exactly do they know that Chris read fifteen minutes every day but Pat read thirty? There are only a few possible answers, and they all raise huge questions.
In Jill Barshay's Hechinger piece, the phrase "an average of 19 minutes a day on the software" crops up. But surely the independent reading time isn't based on time on the computer-- not when so much independent reading occurs elsewhere.
The student's minutes reading could be self-reported, or parent-reported. But how can we possibly trust those numbers? How many parents or children would accurately report, "Chris hasn't read a single minute all week"?
Or those numbers could be based on independent reading time as scheduled by the teacher in the classroom, in which case we're really talking about how a student reads (or doesn't) in a very specific environment that is neither chosen nor controlled by the student. Can we really assume that Chris reading in his comfy chair at home is the same as Chris reading in an uncomfortable school chair next to the window?
Nor is there any way that any of these techniques would consider the quality of reading-- intensely engaged with the text versus staring in the general direction of the page versus skimming quickly for basic facts likely to be on a multiple choice quiz about the text.
The only other possibility I can think of is some sort of implanted electrodes that monitor Chris's brain-level reading activity, and I'm pretty sure we're not there yet. Which means that anybody who wants to tell me that Chris spent nineteen minutes reading (not twenty, and not eighteen) is being ridiculous.
(Update: The AR Twitter account directed me to a clarification of sorts on this point. The truth is actually worse than any of my guesses.)
4) Correlation and causation
Barshay quotes University of Michigan professor Nell Duke, who points out what should not need to be pointed out-- correlation is not causation and "we cannot tell from this study whether the extra five minutes a day is causing kids to make dramatic improvements." It may simply be that stronger readers spend more time reading. We don't know if extra reading practice causes growth, or if students naturally want to read a few minutes more a day after they become better readers. “It is possible that some other factor, such as increased parental involvement, caused both,” the reading growth and the desire to read more, she wrote.
But "discovering" that students who like to read tend to read more often and are better at it-- well, that's not exactly big headline material.
5) Non-random subjects
In her coverage of last year's report, Barshay noted a caveat. The AR program is not distributed uniformly across the country, and in fact seems to skew rural. So while some demographic characteristics do at least superficially match the national student demographics, it is not a perfect match, and so not a random, representative sampling.
So what can we conclude?
Some students, who may or may not be representative of all students, and who read for some amount of time that we can't really substantiate, tend to read at some level of achievement that we can't really verify.
A few things we can learn
The data mining that goes into this report does generate some interesting lists of reading materials. John Green is the king of high school readers, and all the YA dystopic novels are still huge, mixed in with classics like Frankenstein, Macbeth, The Crucible, and Huck Finn. Scanning the lists also gives you an idea of how well Renaissance's proprietary reading level software ATOS works. For instance, The Crucible scores a lowly 4.9-- lower than The Fault in Our Stars (5.5) or Frankenstein (12.4) but still higher than Of Mice and Men (4.5). Most of the Diary of a Wimpy Kid books come in in the mid-5s. So if the Wimpy Kid books are too tough for your students, hit them with Lord of the Flies, which is a mere 5.0 even.
Also, while Renaissance shares the David Coleman-infused Common Core love of non-fiction ("The majority of texts students encounter as they progress through college or move into the workforce are nonfiction"), the AR non-fiction collection is strictly articles. So I guess there are no book-length non-fiction texts to be read in the Accelerated Reader 360 world.
Is the reading tough enough?
Renaissance is concerned about its discovery that high school students are reading work that doesn't rank highly enough on the ATOS scale. By which they mean "not up to the level of college and career texts." It is possible this is true. It is also possible that the ATOS scale, the scale that thinks The Catcher in the Rye is a 4.7, is messed up. Just saying.
The final big question
Does the Accelerated Reader program do any good?
Findings from prior research have detected a tipping point around a comprehension level of about 85% (i.e., students averaging 85% or higher on Accelerated Reader 360 quizzes taken after reading a book or article). Students who maintain this level of success over a quarter, semester, or school year are likely to experience above-average achievement growth.
Remember that "student achievement" means "standardized test score." So what we have is proof that students who do well on the AR battery of multiple choice questions also do well on the battery of PARCC and SBA standardized test questions. So at least we have another correlation, and at most we have proof that AR is effective test prep.
Oddly enough, there is nothing in the report about how AR influences joy, independence, excitement, or lifelong enthusiasm for reading. Nor does it address the use of reading to learn things. Granted, that would all be hard to prove conclusively with research, but then, this report is 64 pages of unsupported, hard-to-prove assertions, so why not throw in one more? The fact that the folks at Renaissance Learning found some results important enough to fake but other results not even worth mentioning-- that tells us as much about their priorities and their program as all their pages of bogus research.
Monday, November 16, 2015
USED Goes Open Source, Stabs Pearson in the Back for a Change
The United States Department of Education announced at the end of last month its new #GoOpen campaign, a program in support of using "openly licensed" aka open source materials for schools. Word of this is only slowly leaking into the media, which is odd, because unless I'm missing something here, this is kind of huge. Open sourced material does not have traditional copyright restrictions and so can be shared by anybody and modified by anybody (to really drive that point home, I'll link to Wikipedia).
Is the USED just dropping hints that we are potentially reading too much into? I don't think so. Here's the second paragraph from the USED's own press release:
“In order to ensure that all students – no matter their zip code – have access to high-quality learning resources, we are encouraging districts and states to move away from traditional textbooks and toward freely accessible, openly-licensed materials,” U.S. Education Secretary Arne Duncan said. “Districts across the country are transforming learning by using materials that can be constantly updated and adjusted to meet students’ needs.”
Yeah, that message is pretty unambiguous-- stop buying your textbooks from Pearson and grab a nice online open-source free text instead.
And if that still seems ambiguous, here's something that isn't-- a proposed rules change for competitive grants.
In plain English, the proposed rule "would require intellectual property created with Department of Education grant funding to be openly licensed to the public. This includes both software and instructional materials." The policy parallels similar policies in other government departments.
This represents such a change of direction for the department that I still suspect there's something about this I'm either not seeing or not understanding. We've operated so long under the theory that the way government gets things done is to hand a stack of money to a private company, allowing them both to profit and to maintain their corporate independence. You get federal funds to help you develop a cool new idea, then you turn around and market that cool idea to make yourself rich. That was old school. That was "unleashing the power of the free market."
But imagine if this new policy had been the rule for the last fifteen years. If any grant money had touched the development of Common Core, the standards would have been open source, free and editable for anyone in the country. If any grant money had touched the development of the SBA and PARCC tests, they would be open and editable for every school in America. And if USED money were tracked as it trickled down through the states-- the mind reels. If, for instance, any federal grant money found its way to a charter school, all of that school's instructional ideas and educational materials would have become the property of all US citizens.
As a classroom teacher, I find the idea of having the federal government confiscate all my work because federal grant money somehow touched my classroom-- well, that's kind of appalling. But I confess-- the image of Eva Moskowitz having to not only open her books but hand over all her proprietary materials to the feds is a little delicious.
Corporations no doubt know how to build firewalls that allow them to glom up federal money while protecting intellectual property. And those that don't may just stop taking federal money to fuel their innovation-- after all, what else is a Gates or a Walton foundation for?
And realistically speaking, this will not have a super-broad impact because it refers only to competitive grants, which account for about $3 billion of the $67 billion that the department throws around.
So who knows if anything will actually come of this. Still, the prospect of the feds standing in front of a big rack of textbooks and software published by Pearson et al and declaring, "Stop! Don't waste your money on this stuff!" Well, that's just special.
And in case you're wondering if this will survive the transition coming up in a month, the USED also quotes the hilariously-titled John King:
“By requiring an open license, we will ensure that high-quality resources created through our public funds are shared with the public, thereby ensuring equal access for all teachers and students regardless of their location or background,” said John King, senior advisor delegated the duty of the Deputy Secretary of Education. “We are excited to join other federal agencies leading on this work to ensure that we are part of the solution to helping classrooms transition to next generation materials.”
The proposed change will be open for thirty days of comment as soon as it's published at the regulations site. In the meantime, we can ponder what curious conditions lead to fans of the free market declaring their love for just plain free. But hey-- we know they're serious because they wrote a hashtag for it.
Is the USED just dropping hints that we are potentially reading too much into? I don't think so. Here's the second paragraph from the USED's own press release:
“In order to ensure that all students – no matter their zip code – have access to high-quality learning resources, we are encouraging districts and states to move away from traditional textbooks and toward freely accessible, openly-licensed materials,” U.S. Education Secretary Arne Duncan said. “Districts across the country are transforming learning by using materials that can be constantly updated and adjusted to meet students’ needs.”
Yeah, that message is pretty unambiguous-- stop buying your textbooks from Pearson and grab a nice online open-source free text instead.
And if that still seems ambiguous, here's something that isn't-- a proposed rules change for competitive grants.
In plain English, the proposed rule "would require intellectual property created with Department of Education grant funding to be openly licensed to the public. This includes both software and instructional materials." The policy parallels similar policies in other government departments.
This represents such a change of direction for the department that I still suspect there's something about this I'm either not seeing or not understanding. We've operated so long under the theory that the way government gets things done is to hand a stack of money to a private company, allowing them both to profit and to maintain their corporate independence. You get federal funds to help you develop a cool new idea, then you turn around and market that cool idea to make yourself rich. That was old school. That was "unleashing the power of the free market."
But imagine if this new policy had been the rule for the last fifteen years. If any grant money had touched the development of Common Core, the standards would have been open source, free and editable for anyone in the country. If any grant money touched the development of the SBAC and PARCC tests, they would be open and editable for every school in America. And if USED money were tracked as it trickled down through the states-- the mind reels. If, for instance, any federal grant money found its way to a charter school, all of that school's instructional ideas and educational materials would have become the property of all US citizens.
As a classroom teacher, I find the idea of having the federal government confiscate all my work because federal grant money somehow touched my classroom-- well, that's kind of appalling. But I confess-- the image of Eva Moskowitz having to not only open her books but hand over all her proprietary materials to the feds is a little delicious.
Corporations no doubt know how to build firewalls that allow them to glom up federal money while protecting intellectual property. And those that don't may just stop taking federal money to fuel their innovation-- after all, what else is a Gates or a Walton foundation for?
And realistically speaking, this will not have a super-broad impact because it refers only to competitive grants, which account for about $3 billion of the $67 billion that the department throws around.
So who knows if anything will actually come of this. Still, the prospect of the feds standing in front of a big rack of textbooks and software published by Pearson et al and declaring, "Stop! Don't waste your money on this stuff!" Well, that's just special.
And in case you're wondering if this will survive the transition coming up in a month, the USED also quotes the hilariously-titled John King:
“By requiring an open license, we will ensure that high-quality resources created through our public funds are shared with the public, thereby ensuring equal access for all teachers and students regardless of their location or background,” said John King, senior advisor delegated the duty of the Deputy Secretary of Education. “We are excited to join other federal agencies leading on this work to ensure that we are part of the solution to helping classrooms transition to next generation materials.”
The proposed change will be open for thirty days of comment as soon as it's published at the regulations site. In the meantime, we can ponder what curious conditions lead to fans of the free market declaring their love for just plain free. But hey-- we know they're serious because they wrote a hashtag for it.
Sunday, November 15, 2015
KY: Big Data in Action
If you've been following the discussions of Competency Based Education and personalized education and huge new data mining, and you've been wondering what it would all look like on the ground--well, let's go to Kentucky!
The US Department of Education is mighty proud of Kentucky and their embrace of a one-stop shop for data about students and teachers. That stop is called the Continuous Instructional Improvement Technology System, and yes, there are so many naming and branding problems with the system that it is almost endearing in its clunkiness. I would not be surprised for a moment if I learned that Kentucky teachers are in-serviced by watching a filmstrip accompanied by a cassette that includes droning narration and a beep every time the filmstrip is supposed to be advanced. The sort-of-logo is a misshapen star that is clearly racing across something, carrying the words "Unbridled learning" on its...um... back. I presume that's some sort of Kentucky horsey reference. On top of that, nobody seems to know what to do with the name, which I have now seen rendered as "CIITS" or "CiiTS" in a variety of fonts and, well, it comes across anywhere between awkward and grossly inappropriate. And how is it pronounced? Apparently "sits," which is kind of awesome, because now when a Kentucky teacher gets a lousy rating through the system, colleagues can say the teacher took a real sitz bath.
All I'm saying is that somebody did not perform due diligence on the naming of this thing.
So what is this thing actually?
It gives teachers ready access to student data, customizable lessons and assessments, and a growing selection of professional development resources, such as training videos and goal-setting tools.
Folks praise it with the same sort of language usually used to laud CBE efforts-- "before I'd have to use a one-size-fits-all assessment, but now the computer administers one and gives me results for each student so I can design exactly what they need" and if you're thinking that sounds like regular teacher stuff, just with a computer, I'm right there with you.
But as we dig into CiiTS, we find an awful lot of plain old teacher stuff is now supposed to be done with a computer.
For instance, here's a video showing how to load student assignments into The System. You will notice that The System is particularly well-suited to loading multiple-choice-based materials, so if I were teaching in Kentucky, I'm sure I'd want to cut back on all those subjective writing thinky type assignments and stick with stuff that doesn't give The System gas. So here's our seventy-gazillionth example of how designing education systems backwards warps the function of the system. In other words, a teacher ought to be asking, "What's the best way to check for understanding? How can I best check for the most high-order, critical thinking understanding and skills?" A teacher should not be asking, "What kind of assessment can I whip up that will fit the computer's data collection software?"
Oh, but CiiTS has more to offer than just recording every single grade for each student. Let's give that some context by feeding the computer all the lesson plans, linked to all the materials.
"Well, gee," you may ask. "If CiiTS is so loaded with data, it seems like I could keep an eye on everything." And indeed you could. Here's a PowerPoint presentation from the beginning of 2015 that looks at, among other things, getting people aligned to their correct job category so that CiiTS data can be properly deployed. So we have the capability of holding teachers accountable not just for one Big Standardized Test, but all those assignments the students did while they were still trying to learn the concepts. So remember, teachers-- when you design those materials, don't just consider the needs of the computer first; also remember that the assignment results will be part of your own personal record.
The presentation also reminds us that newer browsers are experiencing some conflicts with CiiTS, which is not surprising since CiiTS was rolled out in 2011.
The presentation also shares some of the state's usage numbers for the program, which include 47,524 unique teacher and leader logins. Kentucky has "over 40,000" teachers, so it looks like CiiTS is in wide use, with those 47,524 logins signing in almost 28 million times in 2014.
The slide show also indicates that teachers can load personal growth goals into the system, and so can students (who can record the self-reflection). So here's a system that can log in and assess every single assignment for every single student and track it against the standards, all stored up by individual.
USED thinks this all sounds swell. They say things like "more complete picture of student learning" and "more targeted support." Students can move from district to district and have their complete record follow them. Anywhere. And there are banks of videos, materials, assessments, and other swell things that are already pre-keyed to the system. True, there have been technical glitches along the way, but the IT guys are always improving. Meanwhile, the teacher evaluation portion (KY is the only state to go full Orwell on teacher evals so far) may soon be upgraded to include student surveys. And of course all of that is carefully stored as well. I wonder if any Kentucky teacher will ever have to fill out a job application ever again.
Just saying that if you've been worried that Big Data will get the tools in place to suck up every piece of personal data from your child in school, and that we have to really worry about Big Data getting their hands on too much data some day, I am sorry to tell you that apparently some day arrived in Kentucky four years ago.
It sounds kind of like hell, but if any Kentucky teachers want to enlighten me further, I'd love to hear more. Because, yeah, it sounds pretty much like hell.
Guest Post: No Excuse, Deceptive Metrics and School Success
Emily Kaplan is an elementary school teacher in the Boston area. She's currently teaching in a public school, but her previous experience is with one of the region's high-achieving charter chains. She has written here about both her experience and some lessons from it, and I'm pleased to publish this here with her permission.
NO EXCUSE: AN ARGUMENT AGAINST DECEPTIVE METRICS OF SCHOOL SUCCESS
Sixteen seven- and eight-year olds sit in a circle on the floor. On the wall to their left— the first thing they see upon entering and exiting the classroom, always done in complete silence— is a list of individual “Assessment Goals.” (This “no excuses” charter network creates its own high-stress tests, which all students take at least five times per month, beginning in kindergarten.) One student's math goal reads, “I only use strategies that I know.” All are written in the teacher’s handwriting. Others include, “I read my work over so I don't make careless mistakes.” “I begin each sentence with a capital letter.” “I draw base-ten blocks to show my work.”
On the wall to their right is a list of the class averages from the last six network assessments (taken by all second graders across the charter network's three campuses), all of which are in the 50s and 60s. Even though these two-hour tests are designed by network leaders to be exceptionally challenging— a class average of 80% is the holy grail of teachers, who use their students' scores to compete for status and salary increases— this class's scores are the lowest in the school, and the students know it.
The teacher speaks to them in a slow, measured tone. “When I left school here yesterday, after working hard all day to give you a good education so you can go to college, I felt disappointed. I felt sad.”
Shoulders drop. Children put their faces in their hands.
“And do you know why?” The teacher looks around the circle; children avert their eyes.
One child raises her hand tentatively. “We didn't do good on our tests?”
The teacher nods. “Yes, you didn't do well on your assessments. Our class average was very low. And so I felt sad. I went home and I felt very sad for the rest of the day.”
The children nod resignedly. They've heard this many times before.
Suddenly, one child, an eight-year-old who has been suspended for a total of sixteen days for repeatedly failing to comply with school rules, raises his hand. The teacher looks at him. “I am noticing that there is a question.”
The child tilts his head. “What does average mean?” Several children nod; it seems that they, too, have been wondering this, but have been too afraid to ask.
The teacher sighs. “It's a way to tell if everyone in this room is showing self-determination. And what I saw yesterday is that we are not. Scholars in Connecticut College” —at the school, children are “scholars,” and classrooms are named after four-year colleges— “are not less smart than scholars in UMass. But the scholars in UMass got a 78% average.”
One girl pipes up. “And we only got a 65%!”
The teacher moves the child's clothespin a rung down on the “choice stick” for speaking out of turn. “And the scholars in Lesley got a 79%. The scholars in UMass and the scholars in Lesley are not smarter than you are. They do not know how to read better than you.” She looks around. “They do not know how to write better than you.” Suddenly, her voice rises in volume. “Scholars, what can we do to show UMass and Lesley that we are just as smart as they are?”
The children look to the list of “assessment goals” posted on the wall. They raise their hands, one by one.
“I will read my work over so I don't make mistakes.”
The teacher nods.
“I will begin every sentence with a capital letter.”
“I will do my best work so you don't get sad anymore.”
The teacher smiles. “Good.”
This teacher— with whom I co-taught a second grade class— is now a high-level administrator and “instructional coach” at the school. It is her job to ensure that the school’s instructors (almost all of whom are white) “teach” using these dehumanizing, teacher-focused tactics with their students (almost all of whom are children of color from low-income families). The school is one of several Boston-area “no excuses” charters that receive major accolades (and many hundreds of thousands of dollars in grants and prizes) for their high scores on state standardized tests. Supporters and leaders of these schools claim that the high scores extracted using these methods prove that the schools are “closing the achievement gap.” Look, they say, pointing to the score reports: poor black kids in Boston are outperforming rich white kids in Newton and Brookline and Wellesley.
And, indeed, this data is compelling. Its very existence teaches a powerful lesson that this country needs to hear: children of color from low-income homes can outperform wealthy white children on standardized tests, which are the metrics that we as a society have decided mean…well, something.
The problem is that standardized test scores mean very little. On the only tests that do mean a tremendous amount for these students— the SSATs— students at the school where I taught perform abysmally. Consequently, these same middle schoolers who often dramatically outperform their wealthy white peers on state tests are not accepted in large numbers to the most selective high schools (and most of those who do struggle socially and emotionally when thrust into student bodies that aren’t upwards of 98% students of color); struggle to succeed academically in high school (81% earn high school grade-point averages below 3.0 in the first semester); and certainly do not thrive after high school, graduating from college at very low rates and, among those who don’t go to college, failing in large numbers to secure full-time employment.
Correlation is not causation, after all; the fact that those wealthy white students who do well on state standardized tests go on to enjoy tremendous opportunities, in education and in life, does not mean that these scores cause these outcomes. This fallacy, however, constitutes the fuel of the no-excuses runaway train, and leads to the dehumanization of children of color at schools like the one at which I taught. At this school, children are deprived of a comprehensive, developmentally appropriate, and humane education; instead, they are subjected to militaristic discipline, excessive amounts of testing (well beyond that which is already mandated by the state), a criminally deficient amount of playtime (in a nine-hour school day, kindergartners have twenty minutes of recess), and lack of access to social-emotional curricula— all so that the people who run their schools can make a political point.
If we are to improve the educational prospects of this country’s most at-risk students, we need to examine our educational practices and institutions using metrics that matter. Standardized test scores are easier to obtain and compare than data which are nuanced, holistic, and, to the extent possible, representative of aspects of K-12 education which enable and predict access to higher education and opportunities in life. (The fact that we have not yet found the perfect embodiment of the latter by no means excuses the continued use of the former.) Our obsession with meaningless, deceptive standardized test scores creates schools, like the “no excuses” charter at which I taught, which seem to excel— but fail in the ways that truly matter. There is simply no excuse.
ICYMI: Sunday Reading from the Interwebs
Some reading for your Sunday afternoon leisure (if you have such a thing)
The Investment
Jose Vilson went to New Jersey to talk to teachers there. This is a piece of what he had to say.
EngageNY Math, Now Eureka, a Common Core Dropping
One feisty teacher's journey into the land of pre-packaged, not-so-great math curriculum.
Plutocrats in Plunderland
Many of us took a swipe at the TeachStrong rollout this week. This piece gives us a good look at some of the connections being worked behind the curtain.
I also recommend this take on TeachStrong from Daniel Katz.
The Strange, True Story of How a Chairman at McKinsey Made Millions of Dollars off His Maid
This piece from The Nation is not directly related to education. But it is a well-researched story about corruption in New York and how the folks in the 1% just kind of roll over the rest of us. If you've been following the reformster world, you know the name McKinsey, the consulting group responsible for growing so much of the reformster careers. Here's a good hard look at just what sort of people we're talking about.
Dear Mark
Emily Talmage is a Maine blogger with an interesting story. As an Amherst grad she fell into the arms of Teach for America, and then decided that she'd like to be a real teacher. But before Amherst, she prepped at Phillips Exeter, where her time overlapped with that of Mark Zuckerberg. Here she is, writing a letter to her old classmate about his sudden interest in "personalized" learning.
Saturday, November 14, 2015
NCTQ New Report on Teacher Evaluation
It's a big report, over a hundred pages, and I've read it so you don't have to. But that doesn't mean you don't have your work cut out for you here on this blog. Let's get going.
Who are these people?
The National Council on Teacher Quality's continued presence in the education world is one of the great mysteries of the reformster era (or maybe just one of the great con jobs). This "national council" includes a staff composed almost exclusively of former TFA folks and professional bureaucrats and a board of directors that contains no teachers.
Let me say that again-- this group that has declared itself the arbiter of teacher quality for the country has no career teachers in positions of authority. None.
They have been an excellent tool for reformsters, which may be why their funders list is a who's who of reformy money (Gates, Broad, Walton, Joyce, and even Anonymous). Like other heavy-hitters (or at least heavy cash-checkers) of the "non-partisan research and policy organization" world, they specialize in putting a glossy figleaf of research study paper over the big ugly naked truth of reformster advocacy.
Their particular brand is about assaulting the teaching profession with a concern trolling spin. From their mission statement:
We recognize that it is not teachers who bear responsibility for their profession's many challenges, but the institutions with the greatest authority and influence over teachers. To that end we work to achieve fundamental changes in the policy and practices of teacher preparation programs, school districts, state governments, and teachers unions.
In other words, teachers suck, but it's not their fault, poor dears, because they are helpless, powerless tools of Important Forces. Oddly enough, I have never come across anything from NCTQ suggesting that empowering teachers might be a useful solution.
Let me be up front about NCTQ
There are people and organizations in the reformster world that can, I believe, be taken seriously. I may disagree with almost everything they conclude, but they are sincere, thoughtful, and at least to some degree intellectually honest. They raise questions that are worth wrestling with, and they challenge those of us who support public schools in ways that are good for us. I have a whole list of people with whom I disagree, but whom I'm happy to read or talk to because they are serious people who deserve to be taken seriously.
NCTQ is not on that list.
NCTQ once issued a big report declaring that college teacher education programs were much easier than other programs. Their research-- and I swear I am not making this up-- was to look through a bunch of college commencement programs and course syllabi.
This may actually be better than their signature report ranking the quality of various teacher education programs, a program infamous in my neck of the woods for rating a college on a program that didn't actually exist. This list is published in US News (motto: "Listicles make better click bait than news stories"), so it makes some noise, leading to critiques of NCTQ's crappy methodology here and here and here, to link to just a few. NCTQ's method here again focuses on syllabi and course listings, which, as one college critic noted, "is like a restaurant reviewer deciding on the quality of a restaurant based on its menu alone, without ever tasting the food." That college should count its blessings; NCTQ has been known to "rate" colleges without any direct contact at all.
The indispensable Mercedes Schneider has torn NCTQ apart at great length; if you really want to know more, you can start here. Or check out Diane Ravitch's NCTQ history lesson. That will, among other things, remind you that She Who Must Not Be Named, the failed DC chancellor and quite possibly the least serious person to ever screw around with education policy, was also a part of NCTQ.
Bottom line. Everything I know about NCTQ makes me inclined to expect that any report they put out is intellectually dishonest crap designed to further an agenda of braking down teaching as a profession.
So we are ever going to look at this new thing?
Yes, sure. I just wanted to make sure your expectations were set low enough.
State of the States 2015: Evaluating Teaching, Leading and Learning
That's the report, and here's your first spoiler alert: the report isn't really going to look at evaluating learning at all.
In fact, it will help to understand the report if you do not jump to the mistaken conclusion that NCTQ is asking, "Have we found effective ways to do these things?" Because the question NCTQ is really asking is, "How many of our preferred policies have we gotten people to implement?" At no point will they ever, ever ask, "Hey, are any of our preferred policies actually any good?"
If you understand the questions we're really asking (and not asking), the report makes a lot more sense.
Key Findings about Teacher Evaluation
NCTQ is happy to report that more states are falling in line. Almost all include student results in teacher evals, and some include those results extra hard. This is super-not-surprising, as such linkage was mandated by Race To The Top and the waivers that states pretty much had to try for. And we're super-happy that twenty-three states now require use ofstudent test scores evidence of teacher results to decide teacher tenure.
Oh, but there is sad news, too. A "troubling" pattern.
The critique of old evaluation systems was that the performance of 99 percent of teachers was rated satisfactory, regardless of student achievement. Some policymakers and reformers have naively assumed that because states and districts have adopted new evaluations, evaluation results will inevitably look much different. But that assumption continues to be proven incorrect. We think there are several factors contributing to the lack of differentiation of performance:
Dammit!! The new evaluation systems were supposed to root out the terrible teachers in schools ("look much different" means "look more faily"), because if ten percent of students fail the Big Standardized Test, that must mean that ten percent of the teachers stink. It's common sense. Like if a football team loses ten percent of its games, ten percent of its players must be bad. Or if ten percent of the patients in a hospital die, ten percent of the doctors must be terrible. Come on, people-- it's just common sense.
So what do they think screwed things up? Well, lots of states only do one observation a year. Okay-- so is there a correlation between number of observations and number of "ineffective" ratings? Cause that seems like an easy thing to check, unless you were the laziest research group on the planet. Don't have that data? Okay then.
The other possible culprits are SLOs, which NCTQ suggests might be a disorderly vague mess. Well, I can't really argue with that conclusion, though its effect on evaluations is unclear, other than I'd bet lots of principals are reluctant to give lousy teacher ratings based on a technique less reliable than throwing dice through the entrails of a brown snake under a full moon.
Also, NCTQ knows that implementing both new "college and career standards" and new test-based teacher evaluation systems created an "unfortunate collision." Yeah, implementing new amateur hour standards along with crappy tests to be used in junk science evaluation schemes, and doing it all at once-- that's a thing that just kind of happened and wasn't at all the result of deliberate poorly-thought out plans of the educational amateurs running the reformy show. Honest to goodness, it will be a truly amazing day if I ever find a reformster policymaker actually say, "Yeah, we did that wrong. We screwed up. We made a bad choice and we should have listened to the ten gazillion education professionals telling us to choose better." But today is not that day.
NCTQ does think that student surveys might improve the whole evaluation thing, and boy, nobody can imagine downsides to that approach. But they are thinking basically anything that makes observations less of a piece of the evaluation, because they're pretty sure it's those damn principals messing up the system and making teachers look better than they are.
Any way, states should be "sensitive," but should not "indulge critics." And if you're looking for the part of the report that considers whether or not any of these teacher evaluation policies is valid, reliable, useful or indicative of actual teacher effectiveness-- well, that's just not going to happen.
Meanwhile, that bad old opt out movement has been all about protecting teachers from evaluations, and evaluations are much better now, so knock it off.
Key Findings about Principal Evaluation
Folks have figured out that we have to hold principals' feet to the fire, but states have found a wide variety of ways to do that, some of which are so sketchy that nobody even knows whose responsibility the principal eval is.
But in big bold letters, comes the pull quote: "There is insufficient focus on meaningful consequences for ineffective school leaders." So whatever system we come up with for evaluating principals, it really needs to punish people harder.
Connecting the Dots
What NCTQ would like to see more than anything else in the whole wide world is a teacher evaluation system driven by test scores that in turn drives everything else. Hiring, firing, promotions, tenure, revoking tenure, pay level-- they would like to see all of those tied to the teacher evaluation.
NCTQ credits Delaware, Florida and Louisiana with "connecting the dots" best of all. The language used for this baloney is itself baloney-- it's like the baloney you make out of the leftover scraps of baloney. But it's worth seeing, because it's language that keeps reappearing, including in places like, say, TeachStrong.
While there has been some good progress on connecting the dots in the states, unless pay scales change, evaluation is only going to be a feedback tool when it could be so much more. Too few states are willing to take on the issue of teacher pay and lift the teaching profession by rewarding excellence.
Sigh. Yes, teachers are currently holding back their most excellent selves, but if we paid them more, they'd be motivated. Because teaching really attracts people motivated by money. Of course, that's not really the idea behind various forms of merit pay. The real idea is a form of demerit pay cuts-- let's only give good pay to only the people we've decided deserve it.
Lessons for the Future
NCTQ has a whole decade of policy-tracking under its belt, so they've reached some conclusions.
States should not go too far with teacher effectiveness policy. NCTQ actually calls out North Carolina for screwing up the teacher evaluation system and trashing pay and offering ridiculous bonus pay and trying to kill tenure and just generally being a giant jerk to all teachers. While I applaud them for noticing that North Carolina has done nobody any favors by trying to become the most inhospitable teaching environment in the country, I feel it's only fair to point out that North Carolina hasn't done anything that directly contradicts NCTQ's policy recommendations. They've just done it in an unsubtle and poorly PRed manner.
Principal and teacher evals need to be lined up.
It's important to focus on the positive and not let teachers see the evaluation process as "an ominous enterprise aimed at punishing teacher." So I guess back a few pages when NCTQ was saying it was such a huge disappointment that teacher eval systems were still finding mostly good teachers, or a few pages after that when they were saying how all employment decisions should be tied to evaluations-- those were somehow NOT talking about how evaluation should be used to punish teachers? Definite mixed message problem here.
Don't forget what this is all about. The children. We're doing all this for the children. Not that we've done a lick of study to see if our favorite policies actually help the children in any meaningful way.
Finally, "incentives" are better than "force." Bribes are superior to beatings. Sigh. Okay, let's link to Daniel Pink's "Drive" one more time.
Finally
We get page after page of state by state summary chart showing how well each state is doing at linking teacher evaluation to every aspect of teacher professional existence. You'll have to look your own page up. Look, I can't do everything for you.
There are also some appendices of other fun things that I'm also not going to summarize for you.
What's missing?
The report includes not a word about how we might know that any of the recommended policies actually works. We are clear that the be-all and end-all is to raise student test scores. Any proof that higher test scores are indicative of anything other than scoring higher? And as we move to teacher evaluation systems, is there any proof that, say, linking tenure to test scores improves test scores or anything that are actually related to a good education?
No. So the report is left with a basic stance of, "Here are some things everybody should be doing because we think they are good ideas, though none of us have ever been public school teachers, and none of us have any real experience in public education. But you should do these things, and if you do, education in your state will be better in ways that we can't really support or specify." And it took over 100 pages to say that. But this is NCTQ, so some bunch of media dopes are going to report on this as if it is real research from reputable experts who know what the hell they're talking about. What a world.
Who are these people?
The National Council on Teacher Quality's continued presence in the education world is one of the great mysteries of the reformster era (or maybe just one of the great con jobs). This "national council" includes a staff composed almost exclusively of former TFA folks and professional bureaucrats and a board of directors that contains no teachers.
Let me say that again-- this group that has declared itself the arbiter of teacher quality for the country has no career teachers in positions of authority. None.
They have been an excellent tool for reformsters, which may be why their funders list is a who's who of reformy money (Gates, Broad, Walton, Joyce, and even Anonymous). Like other heavy-hitters (or at least heavy cash-checkers) of the "non-partisan research and policy organization" world, they specialize in putting a glossy figleaf of research study paper over the big ugly naked truth of reformster advocacy.
Their particular brand is about assaulting the teaching profession with a concern trolling spin. From their mission statement:
We recognize that it is not teachers who bear responsibility for their profession's many challenges, but the institutions with the greatest authority and influence over teachers. To that end we work to achieve fundamental changes in the policy and practices of teacher preparation programs, school districts, state governments, and teachers unions.
In other words, teachers suck, but it's not their fault, poor dears, because they are helpless, powerless tools of Important Forces. Oddly enough, I have never come across anything from NCTQ suggesting that empowering teachers might be a useful solution.
Let me be up front about NCTQ
There are people and organizations in the reformster world that can, I believe, be taken seriously. I may disagree with almost everything they conclude, but they are sincere, thoughtful, and at least to some degree intellectually honest. They raise questions that are worth wrestling with, and they challenge those of us who support public schools in ways that are good for us. I have a whole list of people with whom I disagree, but whom I'm happy to read or talk to because they are serious people who deserve to be taken seriously.
NCTQ is not on that list.
NCTQ once issued a big report declaring that college teacher education programs were much easier than other programs. Their research-- and I swear I am not making this up-- was to look through a bunch of college commencement programs and course syllabi.
This may actually be better than their signature report ranking the quality of various teacher education programs, a report infamous in my neck of the woods for rating a college on a program that didn't actually exist. This list is published in US News (motto: "Listicles make better click bait than news stories"), so it makes some noise, leading to critiques of NCTQ's crappy methodology here and here and here, to link to just a few. NCTQ's method here again focuses on syllabi and course listings, which, as one college critic noted, "is like a restaurant reviewer deciding on the quality of a restaurant based on its menu alone, without ever tasting the food." That college should count its blessings; NCTQ has been known to "rate" colleges without any direct contact at all.
The indispensable Mercedes Schneider has torn NCTQ apart at great length; if you really want to know more, you can start here. Or check out Diane Ravitch's NCTQ history lesson. That will, among other things, remind you that She Who Must Not Be Named, the failed DC chancellor and quite possibly the least serious person to ever screw around with education policy, was also a part of NCTQ.
Bottom line: everything I know about NCTQ makes me inclined to expect that any report they put out is intellectually dishonest crap designed to further an agenda of breaking down teaching as a profession.
So are we ever going to look at this new thing?
Yes, sure. I just wanted to make sure your expectations were set low enough.
State of the States 2015: Evaluating Teaching, Leading and Learning
That's the report, and here's your first spoiler alert: the report isn't really going to look at evaluating learning at all.
In fact, it will help to understand the report if you do not jump to the mistaken conclusion that NCTQ is asking, "Have we found effective ways to do these things?" Because the question NCTQ is really asking is, "How many of our preferred policies have we gotten people to implement?" At no point will they ever, ever ask, "Hey, are any of our preferred policies actually any good?"
If you understand the questions we're really asking (and not asking), the report makes a lot more sense.
Key Findings about Teacher Evaluation
NCTQ is happy to report that more states are falling in line. Almost all include student results in teacher evals, and some include those results extra hard. This is super-not-surprising, as such linkage was mandated by Race To The Top and the waivers that states pretty much had to try for. And we're super-happy that twenty-three states now require the use of student test scores as evidence of teacher results in deciding teacher tenure.
Oh, but there is sad news, too. A "troubling" pattern.
The critique of old evaluation systems was that the performance of 99 percent of teachers was rated satisfactory, regardless of student achievement. Some policymakers and reformers have naively assumed that because states and districts have adopted new evaluations, evaluation results will inevitably look much different. But that assumption continues to be proven incorrect. We think there are several factors contributing to the lack of differentiation of performance:
Dammit!! The new evaluation systems were supposed to root out the terrible teachers in schools ("look much different" means "look more faily"), because if ten percent of students fail the Big Standardized Test, that must mean that ten percent of the teachers stink. It's common sense. Like if a football team loses ten percent of its games, ten percent of its players must be bad. Or if ten percent of the patients in a hospital die, ten percent of the doctors must be terrible. Come on, people-- it's just common sense.
So what do they think screwed things up? Well, lots of states only do one observation a year. Okay-- so is there a correlation between the number of observations and the number of "ineffective" ratings? 'Cause that seems like an easy thing to check, unless you were the laziest research group on the planet. Don't have that data? Okay then.
The other possible culprits are SLOs, which NCTQ suggests might be a disorderly vague mess. Well, I can't really argue with that conclusion, though its effect on evaluations is unclear, other than I'd bet lots of principals are reluctant to give lousy teacher ratings based on a technique less reliable than throwing dice through the entrails of a brown snake under a full moon.
Also, NCTQ knows that implementing both new "college and career standards" and new test-based teacher evaluation systems created an "unfortunate collision." Yeah, implementing new amateur hour standards along with crappy tests to be used in junk science evaluation schemes, and doing it all at once-- that's a thing that just kind of happened and wasn't at all the result of the deliberate, poorly thought-out plans of the educational amateurs running the reformy show. Honest to goodness, it will be a truly amazing day if I ever hear a reformster policymaker actually say, "Yeah, we did that wrong. We screwed up. We made a bad choice and we should have listened to the ten gazillion education professionals telling us to choose better." But today is not that day.
NCTQ does think that student surveys might improve the whole evaluation thing, and boy, nobody can imagine downsides to that approach. But they are thinking basically anything that makes observations less of a piece of the evaluation, because they're pretty sure it's those damn principals messing up the system and making teachers look better than they are.
Anyway, states should be "sensitive," but should not "indulge critics." And if you're looking for the part of the report that considers whether or not any of these teacher evaluation policies is valid, reliable, useful or indicative of actual teacher effectiveness-- well, that's just not going to happen.
Meanwhile, that bad old opt out movement has been all about protecting teachers from evaluations, and evaluations are much better now, so knock it off.
Key Findings about Principal Evaluation
Folks have figured out that we have to hold principals' feet to the fire, but states have found a wide variety of ways to do that, some of which are so sketchy that nobody even knows whose responsibility the principal eval is.
But in big bold letters, comes the pull quote: "There is insufficient focus on meaningful consequences for ineffective school leaders." So whatever system we come up with for evaluating principals, it really needs to punish people harder.
Connecting the Dots
What NCTQ would like to see more than anything else in the whole wide world is a teacher evaluation system driven by test scores that in turn drives everything else. Hiring, firing, promotions, tenure, revoking tenure, pay level-- they would like to see all of those tied to the teacher evaluation.
NCTQ credits Delaware, Florida and Louisiana with "connecting the dots" best of all. The language used for this baloney is itself baloney-- it's like the baloney you make out of the leftover scraps of baloney. But it's worth seeing, because it's language that keeps reappearing, including in places like, say, TeachStrong.
While there has been some good progress on connecting the dots in the states, unless pay scales change, evaluation is only going to be a feedback tool when it could be so much more. Too few states are willing to take on the issue of teacher pay and lift the teaching profession by rewarding excellence.
Sigh. Yes, teachers are currently holding back their most excellent selves, but if we paid them more, they'd be motivated. Because teaching really attracts people motivated by money. Of course, that's not really the idea behind various forms of merit pay. The real idea is a form of demerit pay-- give good pay only to the people we've decided deserve it.
Lessons for the Future
NCTQ has a whole decade of policy-tracking under its belt, so they've reached some conclusions.
States should not go too far with teacher effectiveness policy. NCTQ actually calls out North Carolina for screwing up the teacher evaluation system and trashing pay and offering ridiculous bonus pay and trying to kill tenure and just generally being a giant jerk to all teachers. While I applaud them for noticing that North Carolina has done nobody any favors by trying to become the most inhospitable teaching environment in the country, I feel it's only fair to point out that North Carolina hasn't done anything that directly contradicts NCTQ's policy recommendations. They've just done it in an unsubtle and poorly PRed manner.
Principal and teacher evals need to be lined up.
It's important to focus on the positive and not let teachers see the evaluation process as "an ominous enterprise aimed at punishing teachers." So I guess back a few pages when NCTQ was saying it was such a huge disappointment that teacher eval systems were still finding mostly good teachers, or a few pages after that when they were saying how all employment decisions should be tied to evaluations-- those were somehow NOT talking about how evaluation should be used to punish teachers? Definite mixed message problem here.
Don't forget what this is all about. The children. We're doing all this for the children. Not that we've done a lick of study to see if our favorite policies actually help the children in any meaningful way.
Finally, "incentives" are better than "force." Bribes are superior to beatings. Sigh. Okay, let's link to Daniel Pink's "Drive" one more time.
Finally
We get page after page of state-by-state summary charts showing how well each state is doing at linking teacher evaluation to every aspect of teacher professional existence. You'll have to look your own page up. Look, I can't do everything for you.
There are also some appendices of other fun things that I'm also not going to summarize for you.
What's missing?
The report includes not a word about how we might know that any of the recommended policies actually works. We are clear that the be-all and end-all is to raise student test scores. Any proof that higher test scores are indicative of anything other than scoring higher? And as we move to teacher evaluation systems, is there any proof that, say, linking tenure to test scores improves test scores or anything that is actually related to a good education?
No. So the report is left with a basic stance of, "Here are some things everybody should be doing because we think they are good ideas, though none of us have ever been public school teachers, and none of us have any real experience in public education. But you should do these things, and if you do, education in your state will be better in ways that we can't really support or specify." And it took over 100 pages to say that. But this is NCTQ, so some bunch of media dopes are going to report on this as if it is real research from reputable experts who know what the hell they're talking about. What a world.