Friday, February 12, 2016
There are plenty of people talking repeatedly and forcefully about resisting the infection of public education by the many tentacle-like limbs of Big Data. We know that reformsters have been talking for decades about the prospect of a cradle-to-career pipeline in which all manner of data can be collected and crunched and used to determine how society can best use the human beings that the data represents.
But if you want to see a real-world, already-happening demonstration of what this kind of data looks like, check out this article from the Washington Post that ran last month. "The new way police are surveilling you: Calculating your threat score."
Think Minority Report. Except instead of a crime-prediction system based on three psychics floating in a small pool, it's a giant pool of all the data from everywhere.
The article centers on Fresno's Real Time Crime Center, a high-tech hub that allows police to access a gazillion data points available in the online world-- including feeds from 800 school and traffic cameras. There's also a library of license plate scans. The city is networked with microphones that can pinpoint the location of gunshots. And of course there's a program to monitor social media.
But the scariest of all is a program called Beware. By using special proprietary (and therefore secret) algorithms, Beware can create a color-coded threat level for any person and for any address.
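Beware's algorithms are a trade secret, so nobody outside the company knows what actually goes into a score. But here's a minimal, purely hypothetical sketch of the general shape of such a system-- every feature name, weight, and threshold below is invented for illustration, not taken from any real product:

# A purely hypothetical threat-score sketch. Beware's real algorithm is secret;
# every feature, weight, and threshold here is invented for illustration.
RISK_WEIGHTS = {
    "prior_arrests": 10,
    "flagged_social_posts": 5,
    "school_behavior_reports": 3,   # imagine childhood records feeding in
    "address_incident_history": 2,
}

def threat_score(person):
    """Collapse a pile of data points into one color code."""
    score = sum(RISK_WEIGHTS[key] * person.get(key, 0) for key in RISK_WEIGHTS)
    if score >= 30:
        return "RED"
    if score >= 10:
        return "YELLOW"
    return "GREEN"

# Two stray data points and the officer at your door sees YELLOW.
print(threat_score({"school_behavior_reports": 2, "flagged_social_posts": 1}))

The arithmetic is trivial; the problem is that with a proprietary system, nobody outside the company can audit which data points are in that dictionary or how heavily each one counts.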
It's not like this is a senseless program with no point. Police repeatedly walk into situations without a clue whether they're facing a relatively harmless citizen or a dangerous menace. Being able to access someone's record in real time, knowing what their most likely response will be-- that was the advantage that small-town cops had because they already knew everybody. And this is not just an advantage to police-- if police walk into a non-volatile situation with their own tension dialed back, perhaps a few fewer innocent citizens get shot.
But at the same time, the level of access is creepy. And when we attach that kind of data access to Everything a Student Ever Did in School, including databases that attempt to assess students' social and emotional characteristics-- well, it's not hard to imagine police approaching someone with guns drawn and ready to fire at a danger-tagged suspect because that suspect had some behavior problems in second grade and some computer software thinks his teenage video gaming habits showed violent tendencies.
Pop culture has numbed us to much of this abuse. The noble heroes of shows like NCIS and Bones regularly violate data privacy, but hey-- they're good guys who are just trying to stop bad guys. But what if the data is being accessed not by Jethro Gibbs, but by J. Edgar Hoover?
Big Data would like to get these data collections up and running for every citizen, and they'd like to get started on children, even infants, before anyone has a chance to object. These are complex and difficult issues, and they deserve a long and careful conversation in our country, but the conversation has barely begun to begin, and the construction of the Surveillance State is already well under way.
Thursday, February 11, 2016
Fordham Provides More Core Testing PR
The Fordham Institute is back with another "study" of circular reasoning and unexamined assumptions that concludes that reformster policy is awesome.
The Thomas B. Fordham Institute is a right-tilted thinky tank that has been one of the most faithful and diligent promoters of the reformster agenda, from charters (they run some in Ohio) to the Common Core to the business of Big Standardized Testing.
In 2009, Fordham got an almost-a-million-dollar grant from the Gates Foundation to "study" the Common Core Standards, the same standards that Gates was working hard to promote. They concluded that the Core was swell. Since those days, Fordham's support team has traveled across the country, swooping into various state legislatures to explain the wisdom of reformster ideas.
This newest report fits right into that tradition.
Evaluating the Content and Quality of Next Generation Assessments is a big, 122-page monster of a report. But I'm not sure we need to dig down into the details, because once we understand that it's built on a cardboard foundation, we can realize that the details don't really matter.
The report is authored by Nancy Doorey and Morgan Polikoff. Doorey is the founder of her own consulting firm, and her reformy pedigree is excellent. She works as a study lead for Fordham, and she has worked with the head of the Educational Testing Service to develop new testing goodies. She also wrote a nice report for SBA about how good the SBA tests were. Polikoff is a testing expert and professor at USC's Rossier School of Education. He earned his PhD from UPenn in 2010 (BA at Urbana in 2006), and immediately raised his profile by working as a lead consultant on the Gates Measures of Effective Teaching project. He is in high demand as an expert on how to test and implement the Common Core, and he has written a ton about it.
So they have some history with the materials being studied.
So what did the study set out to study? They picked the PARCC, SBA, ACT Aspire, and Massachusetts MCAS. Polikoff sums it up in his Brookings piece about the report.
A key hope of these new tests is that they will overcome the weaknesses of the previous generation of state tests. Among these weaknesses were poor alignment with the standards they were designed to represent and low overall levels of cognitive demand (i.e., most items requiring simple recall or procedures, rather than deeper skills such as demonstrating understanding). There was widespread belief that these features of NCLB-era state tests sent teachers conflicting messages about what to teach, undermining the standards and leading to undesired instructional responses.
Or consider this blurb from the Fordham website:
Evaluating the Content and Quality of Next Generation Assessments examines previously unreleased items from three multi-state tests (ACT Aspire, PARCC, and Smarter Balanced) and one best-in-class state assessment, Massachusetts’ state exam (MCAS), to answer policymakers’ most pressing questions: Do these tests reflect strong content? Are they rigorous? What are their strengths and areas for improvement? No one has ever gotten under the hood of these tests and published an objective third-party review of their content, quality, and rigor. Until now.
So, two main questions-- are the new tests well-aligned to the Core, and do they serve as a clear "unambiguous" driver of curriculum and instruction?
We start from the very beginning with a host of unexamined assumptions. The notion that Polikoff, Doorey, or the Fordham Institute are in any way objective third parties seems absurd, but beyond that, it's not possible to consider the questions objectively at all, because doing so requires us to accept the premises that national or higher standards have anything to do with educational achievement, that the Core standards are in any way connected to college and career success, that a standardized test can measure any of the important parts of an education, and that having a Big Standardized Test drive instruction and curriculum is a good idea for any reason at all. These assumptions are at best highly debatable topics and at worst unsupportable baloney, but they are all accepted as givens before this study even begins.
And on top of them, another layer of assumption-- that having instruction and curriculum driven by a standardized test is somehow a good thing. That teaching to the test is really the way to go.
But what does the report actually say? You can look at the executive summary or the full report. I am only going to hit the highlights here.
The study was built around three questions:
Do the assessments place strong emphasis on the most important content for college and career readiness (CCR), as called for by the Common Core State Standards and other CCR standards? (Content)
Do they require all students to demonstrate the range of thinking skills, including higher-order skills, called for by those standards? (Depth)
What are the overall strengths and weaknesses of each assessment relative to the examined criteria for ELA/Literacy and mathematics? (Overall Strengths and Weaknesses)
The first question assumes that Common Core (and its generic replacements) actually includes anything that truly prepares students for college and career. The second question assumes that such standards include calls for higher-order thinking skills. And the third assumes that the examined criteria are legitimate measures of how weak or strong literacy and math instruction might be.
So we're on shaky ground already. Do things get better?
Well, the methodology involves using the CCSSO “Criteria for Procuring and Evaluating High-Quality Assessments.” So, here's what we're doing. We've got a new ruler from the emperor, and we want to make sure that it really measures twelve inches, a foot. We need something to check it against, some reference. So the emperor says, "Here, check it against this." And he hands us a ruler.
So who was selected for this objective study of the tests, and how were they selected?
We began by soliciting reviewer recommendations from each participating testing program and other sources, including content and assessment experts, individuals with experience in prior alignment studies, and several national and state organizations.
That's right. They asked for reviewer recommendations from the test manufacturers. They picked up the phone and said, "Hey, do you know anybody who would be good to use on a study of whether or not your product is any good?"
So what were the findings?
Well, that's not really the question. The question is, what were they looking for? Once they broke down the definitions from CCSSO's measure of a high-quality test, what exactly were they looking for? Because here's the problem I have with a "study" like this. You can tell me that you are hunting for bear, but if you then tell me, "Yup, and we'll know we're seeing a bear when we spot its flowing white mane and its shiny horn growing in the middle of its forehead, galloping majestically on its noble hooves while pooping rainbows," then I know you aren't actually looking for a bear at all.
I'm not going to report on every single criterion here-- a few will give you the idea of whether the report shows us a big old bear or a majestic, non-existent unicorn.
Do the tests place strong emphasis on the most important content etc?
When we break this down it means--
Do the tests require students to read closely and use evidence from texts to obtain and defend responses?
The correct answer is no, because nothing resembling true close reading can be done on a short excerpt that is measured by close-ended responses that assume that all proper close readings of the text can only reach one "correct" conclusion. That is neither close reading (nor critical thinking). And before we have that conversation, we need to have the one where we discuss whether or not close reading is, in fact, a "most important" skill for college and career success.
Do the tests require students to write narrative, expository, and persuasive/argumentation essays (across each grade band, if not in each grade) in which they use evidence from sources to support their claims?
Again, the answer is no. None of the tests do this. No decent standardized test of writing exists, and the more test manufacturers try to develop one, the further into the weeds they wander, like the version of standardized writing assessment I've seen that involves taking an "evidence" paragraph and answering a prompt according to a method so precise that all "correct" answers will be essentially identical. If there is only one correct answer to your essay question, you are not assessing writing skills. Not to mention what bizarre sort of animal a narrative essay based on evidence must be.
Do the tests require students to demonstrate proficiency in the use of language, including academic vocabulary and language conventions, through tasks that mirror real-world activities?
None, again. Because nothing anywhere on a BS Test mirrors real-world activities. Not to mention how "demonstrate proficiency" ends up on a test (hint: it invariably looks like a multiple-choice Pick the Right Word question).
Do the tests require students to demonstrate research skills, including the ability to analyze, synthesize, organize, and use information from sources?
Nope. Nope, nope, nope. We are talking about the skills involved in creating a real piece of research. We could be talking about the project my honors juniors complete in which they research a part of local history and we publish the results. Or you could be talking about a think tank putting together some experts in a field to do research and collecting it into a shiny 122-page report. But you are definitely not talking about something that can be squeezed into a twenty-minute standardized test section with all students trying to address the same "research" problem with nothing but the source material they're handed by the test. Little to no research skill is tested there.
How far in the weeds does this study get?
I look at the specific criteria for the "content" portion of our ELA measure, and I see nothing that a BS Test can actually provide, including the PARCC test for which I examined the sample version. But Fordham's study gives the PARCC a big fat E-for-excellent in this category.
The study "measures" other things, too.
Depth and complexity are supposed to be a thing. This turns out to be a call for higher-order thinking, as well as high-quality texts on the test. We will, for the one-gazillionth time, skip over any discussion of whether you can be talking about truly high-quality, deeply complex texts when none of them are ever longer than a page. How exactly do we argue that tests will cover fully complex texts without ever including an entire short story or an entire novel?
But that's what we get when testing drives the bus-- we're not asking "What would be the best assortment of complex, rich, important texts to assess students on?" We are asking "What excerpts short enough to fit in the time frame of a standardized test will be good enough to get by?"
Higher-order responses. Well, we have to have "at least one" question where the student generates rather than selects an answer. At least one?! And we do not discuss the equally important question of how that open response will be scored and evaluated (because if it's by putting a narrow rubric in the hands of a minimum-wage temp, then the test has failed yet again).
There's also math.
But I am not a math teacher, nor do I play one on television.
Oddly enough
When you get down to the specific comparisons of details of the four tests, you may find useful info, like how often a test has "broken" items, or how often questions allow for more than one correct answer. I'm just not sure these incidentals are worth digging past all the rest. They are signs, however, that the researchers really did spend time actually looking at things, which shouldn't seem like a big deal, but in a world where NCTQ can "study" teacher prep programs by looking at commencement fliers, it's actually kind of commendable that the researchers here really looked at what they were allegedly looking at.
What else?
There are recommendations and commendations and areas of improvement (everybody sucks-- surprise-- at assessing speaking and listening skills), but it doesn't really matter. The premises of this entire study are flawed, based on assumptions that are either unproven or disproven. Fordham has insisted they are loaded for bear, when they have, in fact, gone unicorn hunting.
The premises and assumptions of the study are false, hollow, wrong, take your pick. Once again, the people who are heavily invested in selling the material of reform have gotten together and concluded once again that they are correct, as proven by them, using their own measuring sticks and their own definitions. An awful lot of time and effort appears to have gone into this report, but I'm not sure what good it does anybody except the folks who live, eat, and breathe Common Core PR and Big Standardized Testing promotion.
These are not stupid people, and this is not the kind of lazy, bogus "research" promulgated by groups like TNTP or NCTQ. But it assumes conclusions not in evidence and leaps to other conclusions that cannot be supported-- and all of these conclusions are suspiciously close to the same ideas that Fordham has been promoting all along. This is yet another study that is probably going to be passed around and will pick up some press-- PARCC and SBA in particular will likely cling to it like the last life preserver on the Titanic. I just don't think it proves what it wants to prove.
Risk and Rules
In his excellent look at the value of teacher coaches, Peter DeWitt drops this line with an important embedded assumption:
In order for coaching to work properly, the school has to have a climate conducive to learning, which means that there needs to be a balance between risk-taking and rule following.
A climate conducive to learning has to have a balance between risk-taking and rule-following. That notion really resonates with me, because I see teaching as an ongoing balancing act. And some of that balance is between risk and rules.
I spend a lot of time railing against rules and restrictions and oppressive demands for one-size-fits-all conformity, but my first published education rant was a letter to the NCTE (National Council of Teachers of English) journal complaining about the loose foolishness of whole language approaches. I'm a lot less tightly wound than when I was younger, but I really don't have much trouble understanding the point of view of conservative commenters on education.
Larry Cuban captures the age-old tension in a recent post.
Two traditions of teaching have competed with one another for millennia. Each has had a grab-bag of names over the centuries: conservative vs. liberal, hard vs. soft pedagogy, subject-centered vs. child-centered, traditional vs. progressive, teacher-centered vs. student-centered, mimetic vs. transformational.
Each tradition has its own goals (transmit knowledge to next generation vs. helping children grow into full human beings); practices (teacher-centered vs. student-centered); and desired outcomes (knowledgeable and skilled adults ready to enter the labor market and society versus an outcome of moral and civic engaged adults who use their knowledge and skills to help themselves and their communities). No evidence, then or now, has confirmed advocates’ claims for either tradition. These are choices anchored in beliefs.
Cuban goes on to suggest that the best teachers use a blend of both, but while I don't disagree with that notion, I don't see an equivalency between the two schools. The mimetic tradition is a very useful tool in achieving transformational teaching, but ultimately they are no more equivalent than a hammer and a house built with it.
The mimetic tradition is all about rules, about content set in concrete. By itself it is lifeless and inert. This is the sort of thing that Emerson railed against in "Self-Reliance." The ultimate traditionalist mimetic subject is Latin, a language dead and fossilized. I occasionally talk to someone who studied it and found the experience wonderful-- and invariably they are people who took the dead, dry stuff and made it transformational through their own use of it.
And yet there can be no transformation of a student into someone more fully human and completely themselves based simply on air, any more than you can build a blazing fire without anything to burn. Without the glass, the glass is neither half empty nor half full-- it's just a puddle. Luke shuts down his targeting computer, but not his X-wing fighter. When I'm playing a jazz solo, I can play whatever I feel or want, but it lives or dies against the background of the chord structure.
Rules are the foundation on which everything else stands, but they are not the be-all and the end-all. They are not the purpose. Most importantly, our students are not there to serve the rules-- the rules are there to serve them.
Balancing rules and risk remains a challenge. The reformster program is all about rules-- rules piled on rules governed by rules enforced by more rules, based on a belief that we can just rulify education into a state of perfection. But perfection, like balance, is not a state-- it's a process.
Education is a balancing act performed by a teacher on a unicycle juggling twenty bowling balls with her hands while holding a long balance pole across her knees while a pack of squirrels chase each other back and forth across the pole while a flock of geese keeps flying through them all. Reformsters and other rules fans think the way to fix this system is to weld the parts of the unicycle together, put the teacher in a straitjacket, and crazy-glue the pole to her knees. And if it doesn't seem to work, they think they just haven't done the welding and gluing in the right position-- but their premise is wrong.
The dynamic tension between rules and risk cannot be "settled," any more than we can devise a one-size-fits-all formula for transforming children into more fully realized grown humans. It's an ongoing process, an endless act of balance best managed by the person there on the high wire and not the clowns down on the ground.
Wednesday, February 10, 2016
Citizens United vs. Friedrich
Mark Joseph Stern, writing for Slate about Hillary Clinton's NH concession speech, notes in passing a looming contradiction between the Friedrichs v. California Teachers Association case and the terrible Citizens United decision.
Folks tend to remember the bizarre reasoning that corporations are people, money is free speech, and there is no appearance of corruption when a corporation hands an elected official a giant suitcase full of money. But one of the arguments that the Supremes rejected in Citizens United was this one:
This problem arises because of the structure of corporations: the owners of the corporations, the shareholders, do not control how the assets of the corporation are used; the managers do. This separation of ownership and control is known as the agency problem in corporate law. The agency problem presents the potential for the shareholders’ agents, corporate management, to use the shareholders’ property, the assets of the corporation, for management’s own purposes. One argument made in favor of limiting corporate expenditures is that management can use the assets of the corporations to support political causes shareholders do not agree with, thereby violating the shareholders’ rights of association. The potential violation of this right gives the government a compelling interest justifying speech limitations.
In other words, shareholders could find their corporate assets being used to support a political cause they do not support.
The Supremes were unimpressed, and rejected that argument when they decided Citizens United.
Yet it is, of course, the exact argument of the plaintiffs in Friedrichs, who don't want anybody to ever be required to pay fees to a union that spends money on political purposes.
Well, actually, it's not quite the same argument-- the Citizens United version was stronger, since it involved shareholders' actual property, while Friedrichs involves taking up a separate collection for political purposes. Friedrichs doesn't want the union to be able to ask you to kick in for cab fare to drive me to a rally for a politician you hate; Citizens United says I can take a car we jointly own and drive it through your garden.
There is no reason to expect that this inconsistency will carry the day. But if Friedrichs wins against unions, as seems likely-ish, it will be one more sign that today's court believes that all corporations are people, and some people are more equal than others.
College Board's Real Business
Here's the morning's promoted tweet from the College Board:
College Board Search's PSAT database increased 4.1%. Reach these students today! https://t.co/sOSYdVqbbY -- The College Board (@CollegeBoard), January 12, 2016
That link takes you to the College Board page tagged with "Transformed Services for Smart Recruiting." Here you can find all sorts of useful headings like
Student Search Service (registered trademark)
Connect with students and meet recruitment goals using precise, deep data from the largest and richest database of college-bound students in the nation.
Enrollment Planning Service (trademark)
Achieve your enrollment goals with powerful data analysis tools that efficiently facilitate exploration of the student population and inform a smarter recruitment plan.
Segment Analysis Service (trademark)
Leverage sophisticated geographic, attitudinal and behavioral information to focus your enrollment efforts and achieve better yields from admission through graduation.
That last one, with its ability to leverage attitudinal and behavioral data-- how the heck do they do that? Exactly what is in the big fat College Board database?
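We can only guess, since the College Board doesn't publish its schema. But the mechanics of a product like this are not mysterious. Here's a hypothetical sketch-- every field name below is invented-- of what "segment analysis" boils down to in practice:

# Hypothetical sketch of a recruiting "segment" query. The College Board's
# actual database fields are not public; all column names here are invented.
import pandas as pd

students = pd.DataFrame([
    {"student": "A", "psat_math": 640, "zip": "19104", "ap_courses": 3,
     "intended_major": "biology", "income_band": "high"},
    {"student": "B", "psat_math": 480, "zip": "16301", "ap_courses": 0,
     "intended_major": "music", "income_band": "low"},
])

# "Leverage geographic, attitudinal and behavioral information" rendered in
# plain code: filter the database down to the prospects a college wants to buy.
segment = students[
    (students["psat_math"] >= 600)
    & (students["ap_courses"] >= 2)
    & (students["income_band"] == "high")
]
print(segment["student"].tolist())  # the mailing list, ready for sale

However the real thing is implemented, the business model is the same: the students supply the rows, and the colleges pay for the filter.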
There's a phone number for customers to call, and of course, "customers" does not mean "students and their families." It means all the nice people who keep the College Board in business by paying for the data that they've mined from their testing products. Those folks can click over to the College Board Search Support page to learn that every high school student who ever took a College Board test product (PSAT, SAT, AP exam, or any of the many new SAT products) is in the database.
I don't know that the data miners at the College Board are any more nefarious than those at Facebook or a television network. Though those at least give the data-mined subjects a free "product" to play with-- the College Board manages to mine students for data and get them to pay for the privilege.
But so many people think of the College Board and its test products as some sort of public service or educational necessity. It would be useful if we could all remember who they really are, what they really do, and how they make their money.
NY: Those Peoples' Kids
It doesn't get much plainer than this. The headline of the New York Chalkbeat piece is "Hoping to attract gentrifiers, a troubled school gets a makeover and new admissions policy."
The story is about Satellite West Middle School, a school redesigned to focus on science and art, renamed the Dock Street School for STEAM Studies. And it addresses this question posed by Patrick Wall in the article: How can middle-class families in gentrifying areas be convinced to send their children to local schools with less-than-stellar reputations?
Because District 13 runs on choice (parents can apply to any middle school), Dock Street must find a way to appeal to the now-increasingly-upscale parents in its community. And that means being more careful about who, exactly, they let in. Not just improving the quality of the education offered by the school, but by screening admissions. By making sure that only the Right Children get in.
It is an understandable dilemma for parents, and I'm never willing to say to a parent, "Look, you should put political and philosophical concerns ahead of your own child's concerns."
But the new development underlines two big lies about the value and benefits of charters to a city's education system.
First, it shows, once again, the one real trick that charter operators know and which some public systems have learned to adopt-- to get a better school, you need to swap out your old students for "better" ones. When a charter or turnaround specialist or state takeover district manages to improve a school with exactly the same student population that was there when the school was deemed "failing" in the first place, that will be noteworthy. But mostly they do what Dock Street is doing-- bar the door and only let in those students who will improve the school. That's exactly what Chris Barbic learned just before he gave up on Tennessee's state takeover district.
That's great for the school, and good for the newly acquired batch of students, but it still leaves a whole bunch of students in the wind, without a school intent on educating them.
Second, it shows that the power of charters and choice to "free" students from their zip code is an illusion. Charter fans will argue that wealthier parents exercise choice by sorting themselves into better neighborhoods, that housing choice is a version of school choice. So, the theory goes, we mix that up by allowing people to school outside their neighborhood. School choice can overcome the effects of real estate choice.
But there are two things going on at Dock Street. One is that school choice is struggling to keep up with real estate choice-- that affluent parents are moving into a gentrifying neighborhood and they want nicer schools to match. School choice as it emerges in District 13 is not about escaping real estate choice, but about keeping pace with it, reinforcing it.
Given the choice, parents want to make school choices that match their real estate choice, not override it.
While Dock Street plans to strive for greater diversity, [redesign team member Cynthia] McKnight said, many parents also made clear that they would not consider the school if it continued to admit any student who applied.
“A lot of parents wouldn’t send their children here if they didn’t have a screen,” she said.
More affluent people don't want to live next door to Those People, and they don't want to send their kids to school with Those Peoples' Kids. Uncoupling choice of school from choice of neighborhood just requires parents to make those two choices separately, but the notion that charter-choice systems somehow erase the class and race segregation effects of real estate-- well, that just doesn't seem to be how it works.
In fact, those non-gentry who still live in the neighborhood, who haven't been pushed out yet, now get to see their children pushed out of their neighborhood school because they just aren't the Right Sort of People.
Meanwhile, Those People and their children are pushed out of another neighborhood, and those that stick around are pushed out of their neighborhood school. And another choice system ends up pushing Those Peoples' Kids around like so many low-income hot potatoes. This is no way to run a public school system.
Tuesday, February 9, 2016
Maryland University President's Loyalty Purge
Remember the story about the university president in Maryland who directed his faculty to "drown the bunnies" in order to improve their retention and graduation numbers? Well, according to Inside Higher Ed, he has gone after and fired faculty members that he considers disloyal-- including the adviser of the school paper that outed his bunny comment.
President Simon Newman was hired as the head of the small Roman Catholic university a year ago, with not an iota of experience in higher education. Instead, Newman was plucked from the business world, where he specialized in private equity and starting businesses.
Newman fired a tenured professor, Thane M. Naberhaus of the philosophy department, with a letter that included this rationale:
As an employee of Mount St. Mary's University, you owe a duty of loyalty to this university and to act in a manner consistent with that duty. However, your recent actions, in my opinion and that of others, have violated that duty and clearly justify your termination.
Newman seems to believe that loyalty to the university means never questioning the decisions of Newman himself. Newman blamed Naberhaus for "considerable damage" to the university, threatened him with a lawsuit, and banned him from the campus. His page has been wiped from the university website.
David Rehm, the provost who told Newman to hold off on his freshman flushing plan, was removed from his post as provost.
Newman also fired Edward Egan, a professor of law, alumnus, son of an alumnus, and former trustee of the university. Egan was the advisor of the Mountain Echo, the school newspaper that broke the story of Newman's bunny drowning instructions. A quick look at the Echo front page shows that the controversy has not died down in the last few weeks, with letters coming in from many alumni:
When I arrived on campus as a freshman in 1988, Mount St. Mary’s was featured in The Chronicle of Higher Learning for its innovative Freshman Core program. Today my mother wouldn’t enroll a dog there. It is sad to see my alma mater go downhill in this manner.
-- Laura R. Zeugner
After reading The Mountain Echo’s article, the Washington Post article, and the Board of Trustees letter to the Mountain Echo regarding the recent issues with attempts at “boosting” student retention rates, I am very disturbed not only by the initial approach, but the college’s response to the issue.
-- Ken Buckler, Editor, WashCo Chronicle
The onus is not on the newspaper to explain or defend. The paper does need to be accurate, offer all sides a chance to comment, and relate its facts in clear language, and you have done that. Yes, the result is sometimes messy and people get upset that words they thought private are now public. That is the price to pay for authority and power in a country with a free press.
-- John W. Miller, Staff Reporter, Wall Street Journal
As an academic deeply invested in Catholic higher education, I wish the Mount well in every way. I thus write to assure Mr. Coyne that the Echo’s excellent reporting about student retention efforts will not in fact “render incalculable damage to the reputation of this University and its institutional integrity” (“Letter to the Editor,” 1/19/16). Quite the contrary, the fine work of the student reporters and editors is a testament to the Mount’s educational success. What would damage the institution’s reputation among other universities, both Catholic and secular, is the perception that its leaders are attempting to intimidate less powerful members of the community and stifle discussion about important matters. As every teacher knows, silencing students is incompatible with educating them.
-- Karen Stohr, Ph.D. Associate Professor, Philosophy, Senior Research Scholar, Kennedy Institute of Ethics, Georgetown University
The Mountain Echo ran one letter of support, from a data analyst and alumnus in Newman's office, who said that of course Newman never meant to push out low-ability students who were hard workers. So maybe the intent was only to drown the lazy bunnies?
Accounts of fallout from the article in the Mountain Echo paint a picture of a president and board chairman (John E. Coyne III) trying to browbeat the paper into silence. And now, under new advisers, the newest piece in the paper is a fun story about studying in Florence; the paper carries nothing about the firings.
And one other tidbit of info from the IHE report-- a dozen faculty members created a campus chapter of the American Association of University Professors less than a week before the purging of the three professors. The firings removed two of the twelve.
Meanwhile, Coyne has been trying to do damage control by offering explanations and justifications for Newman's plan, like this e-mail to staff:
“We found that the retention program, as conceived, is indeed meant to retain students by identifying and helping at-risk students much earlier in their first semester — the first six weeks — than we have ever done before. It takes an innovative approach that includes gathering and analyzing information from a range of sources, including our faculty whom we have trained on how to have rich, supportive conversations with students. We also noted that the design of a (if necessary) thoughtful, eventual conversation about the student’s own discernment process and the refund of tuition was also intended to be in keeping with our Catholic identity.”
Nice try. The attempt to fire and silence dissenters shows just how co-operative and collegial university leaders are, particularly when faced with anything that doesn't go just as they want it to. The university has characterized Newman's "drown the bunnies" rhetoric as a poor choice of metaphor, but it seems revealing of the university's current operational philosophy.
And as far as discernment goes-- when you think that the cause of your bad PR is people who won't keep your secrets for you, rather than your own stupid, ill-considered ideas, then you are in need of some serious discernment yourself. Mount St. Mary's may once have been a wonderful university, but right now it's an ugly, ugly mess, and the blame for that rests squarely on Newman and Coyne. No matter how many people they fire for being "disloyal," Mount St. Mary's will be in trouble.