Showing posts with label Common Core. Show all posts

Thursday, February 11, 2016

Fordham Provides More Core Testing PR

The Fordham Institute is back with another "study" of circular reasoning and unexamined assumptions that concludes that reformster policy is awesome. 

The Thomas B. Fordham Institute is a right-tilted thinky tank that has been one of the most faithful and diligent promoters of the reformster agenda, from charters (they run some in Ohio) to the Common Core to the business of Big Standardized Testing.

In 2009, Fordham got an almost-a-million dollar grant from the Gates Foundation to "study" the Common Core Standards, the same standards that Gates was working hard to promote. They concluded that the Core was swell. Since those days, Fordham's support team has traveled across the country, swooping into various state legislatures to explain the wisdom of reformster ideas.

This newest report fits right into that tradition.

Evaluating the Content and Quality of Next Generation Assessments is a big, 122-page monster of a report. But I'm not sure we need to dig down into the details, because once we understand that it's built on a cardboard foundation, we can realize that the details don't really matter.

The report is authored by Nancy Doorey and Morgan Polikoff. Doorey is the founder of her own consulting firm, and her reformy pedigree is excellent. She works as a study lead for Fordham, and she has worked with the head of the Educational Testing Service to develop new testing goodies. She also wrote a nice report for SBA about how good the SBA tests were. Polikoff is a testing expert and professor at USC's Rossier School of Education. He earned his PhD from UPenn in 2010 (BA at Urbana in 2006), and immediately raised his profile by working as a lead consultant on the Gates Measures of Effective Teaching project. He is in high demand as an expert on how to test and implement the Common Core, and he has written a ton about it.

So they have some history with the materials being studied.

So what did the study set out to study? They picked the PARCC, SBA, ACT Aspire, and Massachusetts MCAS. Polikoff sums it up in his Brookings piece about the report.

A key hope of these new tests is that they will overcome the weaknesses of the previous generation of state tests. Among these weaknesses were poor alignment with the standards they were designed to represent and low overall levels of cognitive demand (i.e., most items requiring simple recall or procedures, rather than deeper skills such as demonstrating understanding). There was widespread belief that these features of NCLB-era state tests sent teachers conflicting messages about what to teach, undermining the standards and leading to undesired instructional responses.

Or consider this blurb from the Fordham website:

Evaluating the Content and Quality of Next Generation Assessments examines previously unreleased items from three multi-state tests (ACT Aspire, PARCC, and Smarter Balanced) and one best-in-class state assessment, Massachusetts’ state exam (MCAS), to answer policymakers’ most pressing questions: Do these tests reflect strong content? Are they rigorous? What are their strengths and areas for improvement? No one has ever gotten under the hood of these tests and published an objective third-party review of their content, quality, and rigor. Until now.

So, two main questions-- are the new tests well-aligned to the Core, and do they serve as a clear, "unambiguous" driver of curriculum and instruction?

We start from the very beginning with a host of unexamined assumptions. The notion that Polikoff and Doorey or the Fordham Institute are in any way objective third parties seems absurd, but objectivity isn't even possible here, because the study requires us to accept a stack of premises on faith: that national or higher standards have anything to do with educational achievement, that the Core standards are in any way connected to college and career success, that a standardized test can measure any of the important parts of an education, and that having a Big Standardized Test drive instruction and curriculum is a good idea for any reason at all. These assumptions are at best highly debatable and at worst unsupportable baloney, but they are all accepted as givens before this study even begins.

And on top of them, another layer of assumption-- that having instruction and curriculum driven by a standardized test is somehow a good thing. That teaching to the test is really the way to go.

But what does the report actually say? You can look at the executive summary or the full report. I am only going to hit the highlights here.

The study was built around three questions:

Do the assessments place strong emphasis on the most important content for college and career readiness (CCR), as called for by the Common Core State Standards and other CCR standards? (Content)

Do they require all students to demonstrate the range of thinking skills, including higher-order skills, called for by those standards? (Depth)

What are the overall strengths and weaknesses of each assessment relative to the examined criteria for ELA/Literacy and mathematics? (Overall Strengths and Weaknesses)

The first question assumes that Common Core (and its generic replacements) actually includes anything that truly prepares students for college and career. The second question assumes that such standards include calls for higher-order thinking skills. And the third assumes that the examined criteria are legitimate measures of how weak or strong literacy and math instruction might be.

So we're on shaky ground already. Do things get better?

Well, the methodology involves using the CCSSO “Criteria for Procuring and Evaluating High-Quality Assessments.” So, here's what we're doing. We've got a new ruler from the emperor, and we want to make sure that it really measures twelve inches, a foot. We need something to check it against, some reference. So the emperor says, "Here, check it against this." And he hands us a ruler.

So who was selected for this objective study of the tests, and how were they selected?

We began by soliciting reviewer recommendations from each participating testing program and other sources, including content and assessment experts, individuals with experience in prior alignment studies, and several national and state organizations. 

That's right. They asked for reviewer recommendations from the test manufacturers. They picked up the phone and said, "Hey, do you know anybody who would be good to use on a study of whether or not your product is any good?"

So what were the findings?

Well, that's not really the question. The question is, what were they looking for? Once they broke down the definitions from CCSSO's measure of a high-quality test, what exactly were they looking for? Because here's the problem I have with a "study" like this. You can tell me that you are hunting for bear, but if you then tell me, "Yup, and we'll know we're seeing a bear when we spot its flowing white mane and its shiny horn growing in the middle of its forehead, galloping majestically on its noble hooves while pooping rainbows," then I know you aren't really looking for a bear at all.

I'm not going to report on every single criterion here-- a few will give you the idea of whether the report shows us a big old bear or a majestic, non-existent unicorn.

Do the tests place strong emphasis on the most important content etc?

When we break this down it means--

Do the tests require students to read closely and use evidence from texts to obtain and defend responses? 

The correct answer is no, because nothing resembling true close reading can be done on a short excerpt that is measured by close-ended responses that assume that all proper close readings of the text can only reach one "correct" conclusion. That is neither close reading nor critical thinking. And before we have that conversation, we need to have the one where we discuss whether or not close reading is, in fact, a "most important" skill for college and career success.

Do the tests require students to write narrative, expository, and persuasive/argumentation essays (across each grade band, if not in each grade) in which they use evidence from sources to support their claims?

Again, the answer is no. None of the tests do this. No decent standardized test of writing exists, and the more test manufacturers try to develop one, the further into the weeds they wander, like the version of a standardized writing test I've seen that involves taking an "evidence" paragraph and answering a prompt according to a method so precise that all "correct" answers will be essentially identical. If there is only one correct answer to your essay question, you are not assessing writing skills. Not to mention what bizarre sort of animal a narrative essay based on evidence must be.

Do the tests require students to demonstrate proficiency in the use of language, including academic vocabulary and language conventions, through tasks that mirror real-world activities?

None, again. Because nothing anywhere on a BS Test mirrors real-world activities. Not to mention how "demonstrate proficiency" ends up on a test (hint: it invariably looks like a multiple choice Pick the Right Word question).

Do the tests require students to demonstrate research skills, including the ability to analyze, synthesize, organize, and use information from sources?

Nope. Nope, nope, nope. We are talking about the skills involved in creating a real piece of research. We could be talking about the project my honors juniors complete in which they research a part of local history and we publish the results. Or you could be talking about a think tank putting together some experts in a field to do research and collecting it into a shiny 122-page report. But you are definitely not talking about something that can be squeezed into a twenty-minute standardized test section with all students trying to address the same "research" problem with nothing but the source material they're handed by the test. There is little to no research skill tested there.

How far in the weeds does this study get?

I look at the specific criteria for the "content" portion of our ELA measure, and I see nothing that a BS Test can actually provide, including the PARCC test for which I examined the sample version. But Fordham's study gives the PARCC a big fat E-for-excellent in this category.

The study "measures" other things, too.

Depth and complexity are supposed to be a thing. This turns out to be a call for higher-order thinking, as well as high quality texts on the test. We will, for the gazillionth time, skip over any discussion of whether you can be talking about true high quality, deeply complex texts when none of them are ever longer than a page. How exactly do we argue that tests will cover fully complex texts without ever including an entire short story or an entire novel?

But that's what we get when testing drives the bus-- we're not asking "What would be the best assortment of complex, rich, important texts to assess students on?" We are asking "What excerpts short enough to fit in the time frame of a standardized test will be good enough to get by?"

Higher-order responses. Well, we have to have "at least one" question where the student generates rather than selects an answer. At least one?! And we do not discuss the equally important question of how that open response will be scored and evaluated (because if it's by putting a narrow rubric in the hands of a minimum-wage temp, then the test has failed yet again).

There's also math. 

But I am not a math teacher, nor do I play one on television.

Oddly enough, when you get down to the specific comparisons of details of the four tests, you may find useful info, like how often a test has "broken" items, or how often questions allow for more than one correct answer. I'm just not sure these incidentals are worth digging past all the rest to find. They are signs, however, that the researchers really did spend time actually looking at things. That shouldn't seem like a big deal, but in a world where NCTQ can "study" teacher prep programs by looking at commencement fliers, it's actually kind of commendable that the researchers here really looked at what they were allegedly looking at.

What else?

There are recommendations and commendations and areas of improvement (everybody sucks-- surprise-- at assessing speaking and listening skills), but it doesn't really matter. The premises of this entire study are flawed, based on assumptions that are either unproven or disproven. Fordham has insisted they are loaded for bear, when they have, in fact, gone unicorn hunting.

The premises and assumptions of the study are false, hollow, wrong, take your pick. Once again, the people who are heavily invested in selling the material of reform have gotten together and concluded that they are correct, as proven by them, using their own measuring sticks and their own definitions. An awful lot of time and effort appears to have gone into this report, but I'm not sure what good it does anybody except the folks who live, eat, and breathe Common Core PR and Big Standardized Testing promotion.

These are not stupid people, and this is not the kind of lazy, bogus "research" promulgated by groups like TNTP or NCTQ. But it assumes conclusions not in evidence and leaps to other conclusions that cannot be supported-- and all of these conclusions are suspiciously close to the same ideas that Fordham has been promoting all along. This is yet another study that is probably going to be passed around and will pick up some press-- PARCC and SBA in particular will likely cling to it like the last life preserver on the Titanic. I just don't think it proves what it wants to prove.

Monday, February 1, 2016

CCSS Flunks Complexity Test

The Winter 2016 issue of the AASA Journal of Scholarship and Practice includes an important piece of research by Dario Sforza, Eunyoung Kim, and Christopher Tienken, showing that when it comes to demanding complex thinking, the Common Core Standards are neither all that nor the bag of chips.

You may recognize Tienken's name-- the Seton Hall professor previously produced research showing that demographic data was sufficient to predict results on the Big Standardized Test. He's also featured in this video from 2014 that does a pretty good job of debunking the whole magical testing biz.

The researchers here set out to test the oft-repeated claim that the Core replaces old lower-order flat-brained standards with new requirements for lots of higher-order thinking. They did this by doing a content analysis of the standards themselves and doing the same analysis of New Jersey's pre-Core standards. They focused on 9-12 standards because they're more closely associated with the end result of education; I reckon it also allowed them to sidestep questions about developmental appropriateness.

The researchers used Webb's Depth of Knowledge framework to analyze standards, and to be honest and open here, I've met the Depth of Knowledge thing (twice, actually) and remain relatively unimpressed. But the DOK measures are widely loved and accepted by Common Coresters (I had my first DOK training from a Marzano-experienced pro from the Common Core Institute), so using DOK makes more sense than using some other measure that would allow Core fans to come back with, "Well, you just didn't use the right thing to measure stuff."

DOK divides everything up into four levels of complexity, and while there's a temptation to equate complexity and difficulty, they don't necessarily go together. ("Compare and contrast the Cat in the Hat and the Grinch" is complex but not difficult, while "Find all the references to sex in Joyce's Ulysses" is difficult but not complex.) The DOK levels, as I learned them, are

Level 1: Recall
Level 2: Use a skill
Level 3: Build an argument. Strategic thinking. Give evidence.
Level 4: Connect multiple dots to create a bigger picture.

Frankly, my experience is that the harder you look at DOK, the fuzzier it gets. But generally 3 and 4 are your higher order thinking levels.

The article is for a scholarly research journal, so there is a section about How We Got Here (the mainstream started clamoring for students graduating with higher order smarterness skills so that we would not be conquered by Estonia). There's also a highly detailed explanation of methodology; all I'm going to say about that is that it looks solid to me. If you don't want to take my word for it, here's the link again-- go knock yourself out.

But the bottom line?

In the ELA standards, the complexity level is low. 72% of the ELA standards were rated as Level 1 or 2. That would include such classic low-level standards as "By the end of Grade 9, read and comprehend literature, including stories, dramas, and poems, in the grades 9–10 text complexity band proficiently, with scaffolding as needed at the high end of the range." Which is pretty clearly a call for straight-up comprehension and nothing else.

Level 3 was 26% of the standards. Level 4 was a whopping 2%, and examples of that include CCSS's notoriously vague research project standard:

Conduct short as well as more sustained research projects to answer a question (including a self-generated question) or solve a problem; narrow or broaden the inquiry when appropriate; synthesize multiple sources on the subject, demonstrating understanding of the subject under investigation

Also known as "one of those standards describing practices already followed by every competent English teacher in the country."

Math was even worse, with Level 1 and 2 accounting for a whopping 90% of the standards.

So if you want to argue that the standards are chock full of higher order thinkiness, you appear to have no legs upon which to perform your standardized happy dance.

But, hey. Maybe the pre-Core NJ standards were even worse, and CCSS, no matter how lame, are still a step up.

Sorry, no. Still no legs for you.

NJ's ELA standards worked out to 66% Level 1 and 2, 33% Level 3, and a big 5% Level 4.

NJ math standards? Level 1 and 2 are 62% (and only 8% of that was Level 1). Level 3 was 28%, and Level 4 was 10%.

The researchers have arranged their data into a variety of charts and graphs, but no matter how you slice it, the Common Core pie has less high order filling than NJ's old standards. The bottom line here is that when Core fans talk about all the higher order thinking the Core has ushered into the classroom, they are wrong.
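As a quick sanity check on the tallies above, here is a minimal sketch (taking the reported percentages at face value; note that the NJ ELA figures as reported sum to slightly more than 100, and the CCSS math higher-order share is derived as the remainder of the reported 90%):

```python
# Higher-order share = DOK Level 3 + Level 4, using the percentages
# reported above. These are the article's figures, not independent data.
higher_order = {
    "CCSS ELA":  26 + 2,    # Level 3 (26%) + Level 4 (2%)
    "NJ ELA":    33 + 5,    # Level 3 (33%) + Level 4 (5%), as reported
    "CCSS Math": 100 - 90,  # only "Level 1 and 2 = 90%" is reported
    "NJ Math":   28 + 10,   # Level 3 (28%) + Level 4 (10%)
}

for name, pct in higher_order.items():
    print(f"{name}: {pct}% higher-order")

# NJ's old standards beat the Core on higher-order share in both subjects.
assert higher_order["NJ ELA"] > higher_order["CCSS ELA"]
assert higher_order["NJ Math"] > higher_order["CCSS Math"]
```

However you add them up, the Core comes out behind New Jersey's pre-Core standards on both counts.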

Friday, December 11, 2015

NY: Cuomo's Common Core Nothing Sundae

They said it couldn't be done, but today NY Governor Andrew Cuomo's Common Core Task Force delivered a big old report in less time than it takes my students to complete their major research project. And it's a big ole Nothing Sundae with a few scoops of Fluff on the side and a cherry on top.

The announcement came with the same stock photo student we've seen before, and I want with all my heart to believe that his expression of, "Heh. Yeah, this is some ridiculous baloney" is the blow struck by whatever intern had to cobble this together. But the nothing in this announcement announces its nothingness right off the bat. Here's the head of the Task Force, Richard Parsons, Senior Advisor, Providence Equity Partners, LLC and former Chairman of Citigroup (because when you want to look at education policy, you call a banker):

While adoption of the Common Core was extremely well intentioned, its implementation has caused confusion and upheaval in classrooms across New York State. We believe that these recommendations, once acted on, provide a means to put things back on the right track and ensure high quality standards that meet the needs of New York’s kids. The recommendations will provide the foundation to restore public trust in the education system in New York and build on the long history of excellence that preceded this period.

So there you have it-- the purpose of the report is "to restore public trust." Which is a little different than "meet educational needs."

But that's the PR. How does the report look? Let's just see.

Getting Started 

The report kicks off with a short summary (for those who want to skip straight to the highlights) and a recap of what the Common Core is (spoiler alert: it's exactly what all the PR from Common Core says it is, apparently, so you already know this part). From there we move to the bulk of the report, which is the findings and twenty-one recommendations. Let's see what the task force came up with, shall we? The recommendations are grouped by specific issues.

Issue One: Establish New High Quality New York Standards

Well, if you had any doubts about how deeply the task force was going to dig and how carefully they were going to probe to reach heretofore undiscovered frontiers of understanding, just look at this sentence:

The Task Force has learned that New York educators had limited input into the Common Core before their formal adoption in New York.

Stay tuned for the moment in which the Task Force learns that the sun does not revolve around the earth.

The Task Force accepted input for a whole month and heard from 10,500 respondents. But they single out the Council for a Strong America, the New York State Business Council, and an unnamed NY higher education administrator. Many people are unhappy, yet somehow there is widespread agreement that the goals of CCSS are all swell.

Recommendation One: Adopt some high quality NY standards with all stakeholders in a transparent process.

Mind you, they need to be high standards that promote college and career readiness. And they shouldn't just be a name change, and they should be New Yorky.

But-- the changes should include all the "key instructional shifts set forth in the Common Core Standards." So they should be totally different from the Core, but they should do exactly what the Core does. Got that? NY will rewrite the standards without questioning any of the foundation or goals of the standards. So, more than a name change-- there will also be wording changes. Probably font changes, too. Just no changes to the actual goals and substance of the standards, which will make it hard to change anything but the name.

Recommendation Two: Fix the early grade standards.

Well, not fix exactly. The Task Force doesn't want to lower the standards, but recognizing that children develop at different rates, they recommend "banding" to give teachers a wider time range in which to drag slower students across the finish line. They're talking Pre-K through 2. Up through grade 2, everyone can move more or less at their own pace, but by grade 3 the little slackers should be on point and meeting those one-size-fits-all standards. So we'd just like to take those special moments where live humans meet incorrectly written standards and sort of move them to a later point in the students' lives.

Recommendation Three: Some kind of flexibility for special populations.

Basically, let's make sure that students with disabilities and ELLs have more than just the option of vocational certificates instead of a Regents diploma. But every student should be prepared to succeed after high school. Convene some experts and figure something out.

Recommendation Four: Ensure standards do not lead to the narrowing of the curriculum or diminish the love of reading and joy of learning.

The Task Force hasn't the foggiest notion how to actually do this, but they recognize it's an issue to many people. So they recommend that the new standards just kind of do this, somehow. It does not occur to them, for instance, that focusing all measurement of schools, teachers, and students on the results of a couple of standardized tests might have the effect of narrowing the curriculum. Nope. Like Arne Duncan, they have no idea how this happened, but they recommend that it stop happening, right now.

Recommendation Five: Establish transparent review and revision process for standards.

It's a mark of just how far the Common Core has driven us down the Crazyland Turnpike that this idea-- that there should be a way to review the standards and change what needs to be changed-- qualifies as a new recommendation. No, David Coleman saw his Creation, and he saw that it was Good, and he decreed that nobody could or should ever change it. The Task Force is not wrong, but the state of New York and a whole lot of other folks are dopes for having waited until the end of 2015 to come up with this.

Issue Two: Develop Better Curriculum Guidance and Resources

Bzzzzt!! Wrong "issue." The issue is not, "How can the state do a better job of micromanaging classroom teachers?" The issue is, "How can the state back off and let teachers do their jobs?" But the closest the Task Force can come is acknowledging that "teachers develop and select elements of curriculum within the context of student learning goals and objectives established by state and local authorities." So while in their straitjackets, teachers are free to wiggle their noses and roll their eyes.

The Task Force also notes that EngageNY is being used as mandated curriculum in many districts, even though NYSED swears up and down it told people not to do that. Also, many people think the EngageNY modules and website suck.

Also, the Task Force is one more group that is fuzzy on the difference between standards and curriculum. For all these reasons, the following recommendations pretty much miss the point.

Recommendation Six: Educators and local districts should be free to develop and tailor curriculum to the standards.

And you can get a Model T in any color, as long as it's black. The TF actually notes that high-performing schools give teachers autonomy. And yet, somehow the recommendation "Give teachers autonomy" does not make it onto the list.

Recommendation Seven: Release New! Improved! curriculum resources.

Make a new, more better EngageNY. Oh, and occasionally collect feedback on it, just in case it's not more betterer enough.

Recommendation Eight: Set up a digital platform for teacher sharing.

Another moment of candor breaks out. "Teachers and students are not one-size-fit-all. So why are our modules?" Yeah! So let's see if teachers want to fill in the huge gaps in our materials offerings, for free. Let's see if teachers and schools want to give away materials that might help other teachers and schools beat them in the stack rankings. Using the interwebs!

Recommendation Nine: More better Professional Development 

Responding to the complaint that the Core were implemented without enough explanation of How To Do It, the TF suggests that lots of super-duper PD be deployed so that people will totally know how to do it the next time. Because implementation is always the explanation. Hey, question. Do you think anybody out there is researching better ways to spread cholera? Or could it be that some things can't be implemented well because they are inherently flawed and un-implementable?

Issue Three: Significantly Reducing Testing Time and Blah Blah Blahdy Blah

Tests are inevitable and universal, we say. People apparently have complained about Common Core testing. A lot. Who knew? (Oh, wait-- everybody who's read that at least 250,000 students in NY refused to take the test). The Task Force is aware that Pearson has been replaced and that the education chief has launched an initiative to get test compliance back up, complete with a hilariously handy propaganda kit. The Task Force is aware that nobody thinks they're getting useful information from the tests. Of course, the Task Force also accepts NYSED's estimate of how much time any of this testification sucks up, and they think that the President's Test Action Plan actually said something useful and meaningful.

Of course, the way to significantly reduce testing time, a goal everyone allegedly supports, would have been for the ESSA to NOT require the same amount of standardized testing as previously mandated. But under ESSA, states that really wanted to do something about the testing juggernaut could push the boundaries of what the tests are and what they are used for (because test prep would be less prevalent if everybody's future weren't riding on test results). But (spoiler alert) the Task Force is not going to recommend any of these obvious means of achieving their alleged goal. They are like a spouse who, caught cheating with somebody they picked up in a bar, promises not to go to that particular bar on Wednesdays.

Recommendation Ten: Involve all sorts of stakeholders in reviewing the state standardized tests.

Interesting. Does this mean that teachers and other stakeholders will actually be allowed to see test questions? I don't think this recommendation will make it past the test manufacturing lobby.

Recommendation Eleven: Gather student feedback on tests.

Good idea. I suggest checking twitter starting roughly five minutes after the test is handed out.

Recommendation Twelve: Provide ongoing transparency.

They call for releasing test items (good luck with that), the weighting of the standards, and more detail in student scores. I'd suggest adding to the list how the tests are scored, how the test items were validated (if at all), and how the cut scores are set.

Recommendation Thirteen: Reduce number of days and duration for standardized tests

Sure. Good idea. Next, reduce punitive uses of test results so that nobody feels compelled to spend half the year doing test prep.

Recommendation Fourteen: Provide teacher flexibility to use authentic formative assessment.

What?! Trust teachers to do their jobs??!! That's crazy talk, Task Force. Unless.... Uh-oh.

The State and local school districts must support the use of standards-based formative assessments and authentic assessments woven into the routine curriculum along with periodic diagnostic and benchmark testing. The goal of these assessments is to monitor student learning to provide ongoing feedback throughout the school year that teachers can use to improve instruction and students can use to improve learning.

Okay. That could mean "let teachers teach" or it could mean "bring on the highly profitable Competency Based Education."

Recommendation Fifteen: Check out an untimed approach

Another surprising finding. When you give students a high stakes test with a time limit, they get anxious.

Recommendation Sixteen: Provide flexibility for students with disabilities.

Recommendation Seventeen: Protect and enforce accommodations for students with disabilities.

Recommendation Eighteen: Explore alternative options to assess the most severely disabled students.

These are aimed directly at the feds, who, as part of their ongoing program to make all disabilities vanish by just expecting real hard, denied New York's request to make testing accommodations for students with disabilities. It's hard to predict how hard Acting Pretend Secretary of Education John King (whose previous job, you may recall, was making a hash of education policy in New York) may push back on this, and the real battle will come down to the future Secretary of Ed.

Recommendation Nineteen: Prevent students from being stuck in academic intervention based on one test.

Once again, we are mystified by how anybody ever put sooooo much emphasis on one standardized test. How did such a thing happen? It's a puzzlement. But a student definitely shouldn't be automatically put in remediation just because she did poorly on the test used to rate schools and teachers. A more holistic approach is called for, with parents and teachers working together to determine what is in the best interests of the child. And nobody should ever tell a student that the student is unsatisfactory based on just one test (unless it's the state making that determination based on one test, in which case it's totes okee dokee).

Recommendation Twenty: Eliminate double testing for ELL students

New York has an exam for English Language Learners to take. The feds only give ELL students a one-year exemption, which often leaves them taking double tests during the years they have not yet shown English proficiency. The Task Force thinks this is dumb. They are correct.

Issue Four and Recommendation Twenty-One

"The implementation of the Common Core in New York was rushed and flawed," says the task force, which does not go on to say, "because the Common Core were the rushed, flawed work of amateurs, and you can't do a good job of implementing a bad policy." So they have this half right.

But they recommend that "until the new system is fully phased in" (which will be determined how, exactly?) test results should only be advisory and not used for any teacher or student evaluation. They are assuming it will take till 2019-2020 to get everything up to speed, which is pretty awesome, because that gives many governors, many legislatures, and many various policymakers and lobbyists ample time to do God knows what in the meantime. Might as well pick any old year, since nobody knows how long such an undertaking should, would or has taken ever.

Bottom line?

So the Task Force has basically hit three areas. They have lots of ideas to clean up the administration of testing, but nothing that addresses the fundamental problems with the testing. They have several ideas for trying to clean up the curriculum and pedagogy tied to the standards, but nothing that addresses the incorrect assumptions and ideas underlying the state's approach. And they have an idea about rewriting new standards, but nothing that would address any of the foundational problems and incorrect assumptions underlying the Common Core.

So, change without change. We'll keep the same twisted frame and try to drape it with pretty new cloth. It's a big bowl of nothing, and it's not even a new bowl.





Friday, November 27, 2015

Can Competency Based Education Be Stopped?

Over at StopCommonCoreNYS, you can find the most up-to-date cataloging of the analysis of, reaction to, and outcry over Competency Based Education.

Critics are correct in saying that CBE has been coming down the pike for a while. Pearson released an 88-page opus about the Assessment Renaissance almost a year ago (you can read much about it starting here). Critics noted way back in March of 2014 (okay, I'm the one who noted it) that Common Core standards could be better understood as data tags. And Knewton, Pearson's data-collecting wing, was explaining how it would all work back in 2012.

Every single thing a student does would be recorded, cataloged, tagged, bagged, and tossed into the bowels of the data mine, where computers will crunch data and spit out a "personalized" version of their pre-built educational program.

Right now seems like the opportune moment for selling this program, because it can be marketed as an alternative to the Big Standardized Tests which have been crushed near to death under the wheel of public opinion. "We'll stop giving your children these stupid tests," the reformsters declare. "Just let us monitor every single thing they do every day of the year."

It's not that I don't think CBE is a terrible idea-- I do. And it's not that I don't have a healthy respect for and fear of this next wave of reformy nonsense. But I can't shake the feeling that while reformsters think they have come up with the next generation iPhone, they're actually trying to sell us a quadrophonic laser disc player.

From a sales perspective, CBE has several huge problems.

Been There, Done That

Teaching machines first cropped up in the twenties, running multiple choice questions and BF Skinner-flavored drill. Ever since, the teaching machine concept has kept popping up with regularity, using whatever technology was at hand to enact the notion that students can be programmed to be educated just like a rat can be programmed to run a maze.

Remember when teaching machines caught on and swept the nation because they provided educational results that parents and students loved? Yeah, neither does anybody else, because it never happened. The teaching machine concept has been tried, each time accompanied with a chorus of technocrats saying, "Well, the last time we couldn't collect and act on enough data, but now we've solved that problem."

Well, that was never the problem. The problem is that students aren't lab rats and education isn't about learning to run a maze. The most recent iteration of this sad, cramped view of humans and education was the Rocketship Academy chain, a school system built on strapping students to screens that would collect data and deliver personalized learning. They were going to change the whole educational world. And then they didn't.

Point is, we've been trying variations on this scheme for almost 100 years, and it has never caught on. It has never won broad support. It has never been a hit.

Uncle Sam's Big Fat Brotherly Hands

Remember how inBloom had to throw up its hands in defeat because the parents of New York State would not stand for the extensive, unsecured, and uncontrolled data mining of their children? inBloom tried to swear that the kind of data mining, privacy violation, and unmonitored data sharing that parents feared just wouldn't happen on its watch. But the CBE sales pitch doesn't just decline to protect students from extensive, widely shared data mining-- it claims the data grubbing is not only not a danger, but actually a valued feature of the program.

The people who thought inBloom was a violation of privacy and the people who thought Common Core was a gross federal overreach-- those people haven't suddenly disappeared. Not only that, but when those earlier assaults on education happened, people were uneducated and unorganized-- they didn't yet fully grasp what was actually happening, and they didn't have any organizations or other aggrieved folks to reach out to. Now all the networks and homework are already done and in place.

I don't envision folks watching CBE's big data-grabbing minions coming to town and greeting them as liberators. CBE is more of what many many many people already oppose.

No Successes To Speak Of

This has always been a problem for reformsters. "Give me that straw," they say, "and I will spin it into gold." They've had a chance to prove themselves with every combination of programs they could ask for, and they have no successes to point to. Remember all those cool things Common Core would accomplish? Or the magic of national standardized testing? The only people who have done a respectable job of touting success are the charteristas-- and that's not because they've actually been successful, but because they've mustered enough anecdotes and data points to cobble together effective marketing. It's lies, but it's effective.


Everything else? Bupkus. This will be no different. CBE will be piloted somewhere, and it will fail. It will fail because its foundation combines ignorance of what education is, how education works, and how human beings work.

Anchored to What?

A CBE system needs to be linked to some sort of national standards, but only those who have been very well paid or have a deep commitment to them are still even speaking the name of Common Core. To bag and tag a nation's worth of data, you must have common tags. But we've already allowed states to drift off into their own definitions of success, their own tests, their own benchmarks. Saying, "Hey, let's all get on the same page" is not quite as compelling as it once was, because we've tried it and it sucked. As the probable successor to ESEA makes clear, centralized standardization of education is not a winning stance these days. So to what will the CBE be anchored?

Expensive As Hell

Remember how expensive it was to buy all new books and get enough computers so that every kid could take a BS Test? You can bet that taxpayers do. Those would be the same taxpayers who saw programs and teachers cut from their schools even as there was money, somehow, for expensive but unnecessary new texts and computers (which in some cases could be used only for testing).

When policy makers announce, "Yeah, here's all the stuff you need to buy in order to get with the CBE program," taxpayers are going to have words to say, and they won't be happy, sweet words.

If every single worksheet, test, daily assessment, check for understanding, etc is going to go through the computer, that means tons of data entry OR tons of materials on the computers, through the network, etc etc etc. The kind of IT system required by a CBE system would be daunting to many network IT guys in the private sector (all of whom are getting paid way more than a school district's IT department). It will be time-consuming, buggy, and consequently costly.

Who wants to be the superintendent who has to say, "We're cutting more music and language programs because we need the money to make sure that every piece of work your child does is recorded in a central database"? Not I.

Program Fatigue

For the first time, the general taxpaying public may really get what teachers are feeling when they roll their eyes and say, "A NEW program? Even though we haven't really finished setting up the old one?!"  

Bottom Line

I think that CBE is bad education and it needs to be opposed at every turn. But I also think that reformsters are severely miscalculating just how hard a sell it's going to be. We can help make it difficult by educating the public.

There will be problems. In particular, CBE will be a windfall for the charter industry if they play their cards right. The new administration will play a role in marketing this and I see no reason to imagine that any of the candidates won't help market this if they win. (Well, Sanders might stand up to the corporate grabbiness of it, and Trump will just blow up all the schools.)

But there will be huge challenges for the folks who want to sell us this Grade C War Surplus Baloney. It's more of a product that nobody wanted in the first place. We just have to keep reminding them why they didn't like it.

Wednesday, November 25, 2015

Has CCSS Affected Instruction?

Brookings, an outfit that is usually a reliable provider of pro-reform clue-free baloney, offers an interesting question from non-resident senior fellow Tom Loveless: Has Common Core influenced instruction?

It's a worthy question. We've talked a lot about how CCSS has affected policy and evaluation and assessment, but has it actually affected what teachers do in the classroom?

The proponents of the Core never developed a way to answer that question because their assertion has always been that we would see the effects on instruction in the flowering of a million awesome test scores. But the 2015 NAEP scores turned out to be a big bowl of proofless pudding, and so now we're left to ask whether the Common Core tree fell in the classroom forest without making a sound, or if it never fell at all.

William J. Bushaw blamed "curricular uncertainty," while Arne Duncan went with the theory of an "implementation dip." Loveless's brief piece includes this masterpiece of understatement:

In the rush to argue whether CCSS has positively or negatively affected American education, these speculations are vague as to how the standards boosted or depressed learning. 

In other words, Core fans are unable to get any more specific than their original thoughts that Common Core Standards would somehow magically infuse classrooms, leading to super-duper test scores. Of course, they also assumed that teachers were blithering incompetently in their classrooms and that adhering to awesome standards would mean a change. Loveless notes a 2011 survey in which 77% of teachers said they thought the new math standards were the same as their old math standards. So there's one vote for, "No, the standards changed nothing."

Then Loveless drops this wry observation:

For teachers, the novelty of CCSS should be dissipating. 

Yes, the "novelty" is surely fading away. I could jump to the conclusion that Loveless is one more deeply clueless Brookings guy, but he follows that up with these lines:

Common Core’s advocates placed great faith in professional development to implement the standards.  Well, there’s been a lot of it.  Over the past few years, millions of teacher-hours have been devoted to CCSS training.  Whether all that activity had a lasting impact is questionable.  

Loveless cites some research that tells us what we mostly know-- after a new change is shoved on us in professional development, there's a "pop" of implementation, and then it mostly fades away.

Loveless doesn't try to explain this, but I'll go ahead and give it a shot. Every teacher is a researcher and every classroom is a laboratory. And every instructional technique, whether it's in my textbook or pushed on me by edict or sold to me in PD or is the product of my own personal research and development efforts-- every one of those techniques is subject to the same rigorous testing and data-driven evaluation.

Does it work in my classroom?

I can find you numerous elementary teachers who took their newly purchased Common Core math textbooks, tried the recycled New Math instructional methods and pacing in the texts (because most of us will try anything once or twice) and then said, "Well, my students are confused and can't do the work. So I will now add a few days to the suggested pace of the book, and I will teach them how to do this The Old Way so that they can actually get a handle on it." The "fading novelty" looks a lot like "adapting or rejecting new ideas based on the real data of the classroom." And since Common Core's novelty is the product of well-connected amateurs and their personal ideas about how school should work, the novelty has indeed faded swiftly.

To the extent that we are allowed to (and that is the huge huge huge problem facing teachers in some districts-- they are no longer allowed to exercise their professional judgment), we do what meets the needs of our students. We do what works. We don't stick with something that doesn't work just because some textbook sales rep in a PD session or some faceless bureaucratic ed amateur in an office said we should stick with it.

Loveless suggests there are two plausible hypotheses. 1) As educators get better at using CCSS techniques, results will improve. 2) CCSS has already shown all the positive effects it ever will. I'm going to say that both are correct, as long as we understand that "get better at using CCSS" means "steadily edit, revise, change, and throw out pieces of the Core based on our own research and knowledge of best practices."

Loveless does highlight one measurable effect of the Common Core-- the increased emphasis on non-fiction and the concurrent de-emphasis on fiction. He has data to back this up. And he also knows what the shift really means:

Unfortunately, as Mark Bauerlein and Sandra Stotsky have pointed out, there is scant evidence that such a shift improves children’s reading.

He also notes that more non-fiction doesn't necessarily mean higher quality texts, noting that two CCSS supporting groups provide completely different ideas about curriculum.

Loveless notes that analysts tend to focus on formal channels of implementation and ignore the informal ones. It's a good catch. A top-down directive from a state department of education can carry much less power than teachers sharing the video clip of CCSS architect David Coleman explaining that "nobody gives a shit what you think or feel." And politics are involved in the Core (and always have been, since the Core was imposed by political means).

Finally, he notes that implementing top-down curriculum and instruction reforms always runs afoul of what transmitters think, and boy, do I agree with him on this one. Every top-down reform is like a game of telephone, and each person who passes the program along reads into it what they personally think should happen.

As the feds tell state departments of ed and the department tells its functionaries and they tell their training division and they tell superintendents and superintendents tell principals and principals hire professional development ronin-- at each handoff of the baton, someone is free to see what they believe the program "must" require.

Loveless uses the example of non-fiction reading, postulating that an administrator who had always wanted to dump fiction for non-fiction would be given protective cover by CCSS. But that rests on a pretty explicit reading of what CCSS says about itself. This sort of top-down implementation also gives rise to policies that involve reading between the lines, such as an administrator who wants English teachers to teach less grammar and uses Common Core as justification. And of course the standards have been completely rewritten by test manufacturers, who interpret some standards and leave others out entirely.

In fact, some folks make a curriculum argument based on what the standards don't say at all. The rich content crowd insists that implementing Common Core must involve rich, complex texts from the canon of Important Stuff, and their argument basically is that Common Core must really mean to require rich content, because otherwise it would just be a stupid set of bad standards emphasizing "skills" while leaving a giant pedagogical hole in its heart where the richness of literature should be. They must mean for us to fill in the gaps with rich text, they argue. Because surely the standards couldn't be that stupid and empty. (Spoiler alert: yes, they are.)

And in an otherwise pretty thorough brief, Loveless misses another possibility-- the Common Core Standards are limited in their ability to influence instruction because they aren't very good. Can I influence the work of cabinet-makers by putting bananas in their tool boxes? Can I influence how surgery is performed by telling surgeons to wear fuzzy slippers into the operating room? The implementation problem remains unchanged-- it's impossible to have a good implementation of a bad program.

Tuesday, October 13, 2015

Politico: Wrong about Common Core

Politico scored a coup yesterday by declaring that the war is over, and Common Core won it. One can only assume that Kim Hefling's piece "How Common Core Quietly Won the War" bumped equally hard-hitting pieces such as "The Earth-- Actually Flat After All" or "The Presidential Wisdom of Harold Stassen."

Hefling's main point is that Common Core is now everywhere, so it won. But this would be tantamount to saying that Kleenex has cornered 100% of the facial tissue market because all citizens wipe their noses on something that they call "Kleenex."

Sure, there's something called Common Core almost everywhere in education. But which Common Core Ish thing would we like to talk about?

State standards? Many states have changed the name and little else, but many states have further fiddled with the everyone-forgets-they're-copyrighted standards, so that none particularly match any more.

Testing standards? A variety of Common Core based Big Standardized Tests are out there, and -- for now-- every state has to have one. But what those tests cover does not in any case correspond fully with the Common Core standards as originally written (for extreme instance, speaking and listening standards are not and likely never will be tested). And in many, if not most, school districts, curriculum and instruction are driven by the test, not the standards.

Curriculum standards? Most districts have "aligned" their curricula to the Common Core-- but that process looks a lot like taking what you already do anyway and assigning various standards to it until your paperwork looks good.

Textbook standards? One of the biggest effects of Common Core was the huge windfall for textbook publishers as schools rushed to get textbook programs with "Common Core ready" stamped on them somewhere. But every publisher has their own idea about what the standards look like when interpreted on the textbook level-- and absolutely nobody is in position to check their work, leading many analysts to conclude that many textbooks are not particularly "Common Core" at all.

Classroom standards? The final editor of all these programs is the teacher, who retains (in most districts) the ability to say, "While the Common Core Textbook/Curriculum/Script says to teach it this way, I'm looking at these kids and my professional judgment says we're doing something else, instead."

Add to these the consultants, college ed profs, and clueless politicians who all think they are talking about Common Core and you have a brand that has absolutely lost its identity. You remember the blind men touching the tail, leg and trunk of the elephant? Well, in Common Core land they're touching the leg of the elephant, a Victorian living room sofa, and a plastic grocery bag filled with steamed cockroaches.

Hefling tries to skirt the issue by not really addressing what the success of Common Core was supposed to look like. She refers to CCSS as "the math and English standards designed to develop critical thinking" which is A) baloney and B) unnecessary. Show me the CCSS standards that require critical thinking, and then explain to me why anybody needed CCSS to promote critical thinking in the first place.

She also references the idea that Common Core allows teachers to share ideas, as if that was somehow impossible before. She includes a testimonial from a Florida principal who provides the six zillionth iteration of the "Before we had the Common Core, we didn't know how the hell to do our jobs" narrative.

If the picture of success was supposed to be that everyone in the public education system (not the private schools! never the private schools!) had to deal with something that had the words "Common Core" attached to it, then yes, CCSS has won.

But if, as was actually the case, the goal was to have identical standards pursued and measured in every public classroom in the country, with teachers working in virtual lockstep to pursue exactly the same goals-- then, no-- the Common Core lost. It failed. It was a sledgehammer that was supposed to beat open the brick wall of US schooling, and instead shattered into a million different bits.

And Hefling doesn't even talk about the other promise of the Core-- that all students would be college and career ready. We supposedly have several years' worth of Common Core grads out there now-- how are they doing? Are colleges reporting an uptick in well-prepared freshmen? Are businesses reporting a drop in their training needs? Hefling and her Core-adoring sources don't address that at all. Can you guess why?


Sunday, September 13, 2015

David Coleman's Master Plan

David Coleman, the architect of the Common Core, current head of the College Board, and the guy who decided he was the man to single-handedly redefine what it means to be an educated American, has spoken many times about the long view of education reform. One frequently quoted speech was his keynote address at the Institute for Learning Senior Leadership Meeting in December of 2011.

The seventy-minute presentation is a lot to watch, but I recently stumbled over a transcript of the whole mess, hosted online by the nice folks at Truth in Education. This was Coleman in 2011 delivering a speech entitled "What Must Be Done in the Next Two Years" at a time before reformsters had learned to be more careful about concealing the details of what they had in mind. The transcript is twenty-six pages long, so we're just going to skip through highlights.

The Testing Smoking Gun

It was Lauren who propounded the great rule that I think is a statement of reality, though not a pretty one, which is teachers will teach towards the test. There is no force strong enough on this earth to prevent that. There is no amount of hand-waving, there's no amount of saying, “They teach to the standards, not the test; we don't do that here.” Whatever. The truth is and if I misrepresent you, you are welcome to take the mic back. But the truth is teachers do. Tests exert an enormous effect on instructional practice, direct and indirect, and it's hence our obligation to make tests that are worthy of that kind of attention. It is in my judgment the single most important work we have to do over the next two years to ensure that that is so, period. So when you ask me, “What do we have to do over the next years?” we gotta do that. If we do anything else over the next two years and don't do that, we are stupid and shall be betrayed again by shallow tests that demean the quality of classroom practice, period.

So, there was no question, no doubt that the standards were about creating tests that would drive instruction and write curriculum.

Coleman outlines some of the issues, joshing and shmoozing his audience. You've got your new standards and your old standards and it's going to be a mess. "My friends from Texas in the back are like, 'Can we leave now and go to a bar? 'Cause we didn't even adopt these stupid standards yet." Oh, the yucks.

But Coleman promises details and specifics and evidence and support. And he'll get to that in a moment, but first he wants to offer a plug for his group Student Achievement Partners.

The Unqualified Leaders

This is the moment where Coleman famously describes his crew as a group of "unqualified people," adding that their qualification was their "attention to and command of the evidence behind" the standards. Nothing made it into the standards without support and evidence. Totally not based just on what people in the room thought students should know. Given this evidence-based approach, one wonders why the CCSS don't come with extensive footnotes delineating the exact support for each and every standard.

Coleman next goes on to make a less commonly-repeated point-- the Core aren't just about what is added in, but what is taken out. Coleman wants to be clear that it's not just a matter of what the standards command teachers to start doing, but also a matter of what they are supposed to stop doing.

SAP is composed mostly of the people who wrote the Core, though Coleman wants to remind us that teachers' unions, teachers, parents, all sorts of folks "was involved in" writing the standards. They followed three principles while doing the work--

First, they would never take money from any publishers or test manufacturers. Second, they would not compete for any state RFPs. Third, they would not possess any intellectual property. Which raises the question of why the Core are copyrighted. Coleman wants the crowd to understand that any mistakes he makes are the result of stupidity, not avarice. So, no money was involved back then, though of course the companies involved seem to have made out okay in the financial windfall of the standards, and Coleman has landed a pretty sweet job. Perhaps he meant to say, "We all agreed that our big paydays would come later."

If you've had the feeling that the Core feel like a big wet blanket thrown over any sort of creative spark, here's a quote from Coleman as he starts to talk about how Eastern Asian countries are "beating the pants off us."

They're working harder than we are, their kids work harder, they may not be quite as creative but that's only gonna last for so long, and this country's best days-- we're gonna get overwhelmed by this kind of tidal wave of harder work. 

So there you have it. Creativity is all well and good, but only for a brief time.

The Math Piece

Coleman spends some time selling the math portion with a bunch of jargonesque talk about doing more with less and fluency and how key fluencies will make math whizzes, dragging learning French into the fray, which leads me to wonder if learning a language the Common Core math way would mean learning only a tiny bit of vocabulary really well, which doesn't sound like fluency to me. But I digress. Coleman does say these key fluencies are basically one or two things you must learn by rote every year, and mentions that "people on the left" don't like that and, well, yes, liberals are known for their dislike of memorizing the times table.

In the end, he wants application and understanding so that (his example) when you're negotiating a mortgage, it occurs to you to get out your calculator and figure out if you're getting shafted.

Coleman next explains why current tests are bad, though his Powers of Explaining Clearly have been seriously weakened at this point. His point seems to be that since the test covers so much stuff, a student can look like they're "passing" when they haven't shown mastery of the parts that are Really Important. All tests do this? Coleman knows this? And actual math teachers don't create their own tests or other measures to factor into whether students pass? This just seems like a huge statement, requiring Coleman to know both which math skills are the Really Important ones and what every math test in the world covers. But the picture is clear-- Coleman wanted math teachers to stop covering a bunch of extra stuff and letting students sneak by who don't know the Really Important stuff.

Coleman talks about what to do in the next two years about math, and he makes fun of the fact that publishers already claim to have aligned materials developed before the Core were even finished. So go through every grade with every teacher and make sure they know what the Really Important parts are. This should be easy because, Coleman says, and I'm not kidding, "It's like a couple of sentences long." Coleman also says that all PD should be focused only on the Really Important parts, period, full stop. And somehow we get back to how Hong Kong does better on the TIMSS.

There's now a break for audience participation, during which Coleman notes, "I find the softer I speak, the less people can argue with me."

He clarifies for an audience member that this approach doesn't really require teachers to be experts in year one, but in year two, that should start to happen.

Now for Literacy

First, Coleman talks about "literacy" through all of this, suggesting that speaking and listening weren't really on his radar.

He opens with references to "haunting data," which amount to saying that over forty years we've spent more and more money while eighth grade NAEP reading scores have stayed flat. This is "devastating" because if students don't get past eighth grade reading level "they're obviously doomed in terms of career and college readiness and all we hoped for them." Are they "obviously doomed"? As the scores have stayed flat for the last forty years, has US economic history been marked by an unrelenting downward spiral that can be traced to a nation of eighth grade readers? Coleman doesn't offer any data, but he has highlighted one of the ongoing unplugged holes of the reformy argument. If I've been eating a bagel for breakfast for forty years, and you want to tell me that I must change my diet because otherwise the bagel will give me a terrible disease, I'm going to need a little more proof than your panicky announcement because, so far, so good. That doesn't mean I should eat bagels forever just because it's what I've always done. But I have forty years of data on the effects of bagel breakfasts, and you have zero years. Which one of us is making a data-driven decision?

So I want you to look at the core standards for a moment as a battering ram, as an engine to take down that wall.

Nice simile. I can't imagine why so many teachers have viewed the Core as an assault on public education. But Coleman proceeds to lay out the shifts that must happen in the next two years.

First, K-5 have to read for knowledge. Coleman finds it "shocking" that only 7-15% of the reading they do is informational-- the rest is stories. And not for the first time, I am amazed that someone who studied literature at Oxford somehow remains ignorant of the role of story in human civilization and the individual psyche. I am less amazed that someone who has no educational experience doesn't seem to know anything about how small children are best engaged to learn about reading.

But Coleman says the data is overwhelming that the knowledge and vocabulary acquired in Pre-k through 5 is absolutely essential for reading more complex texts going forward. So he demands 50% informational texts, and he equates "informational texts" with "learning about the world," as if stories do not teach anybody anything about the world.

So focusing on testing, he points out that elementary testing was reading and math, and since the reading portion was all "literature," everybody dropped science and history to spend more time test-prepping reading. He absolutely has a point, but since he's wedded to the idea of using the Big Standardized Test to drive curriculum, he comes up with absolutely the wrong solution. The correct solution was to look at NCLB's test-and-punish regime and say, "Wow, this is really screwing up schools. We should stop with the test and punish." But Coleman takes a flier and lands on, "We should test and punish a different range of things." Which I'm going equate with an abuser having the epiphany, "I kept hitting my partners with a stick, and then they'd always leave me and call the cops. So going forward I'm going to hit them with my fists, instead. That'll fix the problem."

Coleman says these standards should be exciting for elementary teachers because "they re-inaugurate elementary school teachers' rightful role as guides to the world." In this, whether he understood it or not, Coleman is dead wrong-- the Core inaugurated teachers as Content Delivery Specialists chained to crappy curriculum materials designed to teach to a test.

Coleman on Reading Across the Curriculum

I am sick of people, to be rather frank with you, who tell me that art teachers don't want to teach this, 'cause our kids have to be able to do it, period, for their success. And what's interesting about the standards is rather than saying to social studies and history teachers that they should become reading teachers, which I think is a losing game, it says instead they must–they must–enable their students to evaluate and analyze primary and secondary sources. Science teachers must not become literary teachers. What they must become is teachers who enable their students to read primary sources of the sort of direct experimental results as well as reference documents to build their knowledge of science. But what is not allowed is a content teacher to think that if they just tell their students enough content and their students have no independent capacity to analyze and build that content knowledge, that they are a success.

I'm now going to say something shocking-- it's possible that Coleman has a point here. But it crashes directly into the wall of the Big Standardized Test, which insists that critical thinking is when you look at the evidence and reach exactly the conclusion that I think you should. Coleman's goals are not out of line, but the BS Tests cannot, and will not, test for this, so if he really wants to see this, he has to let go of his test and punish obsession. But we know he hasn't, because the new SAT that he has overseen has a writing element that enshrines this exact fallacy about what it means to examine evidence and draw conclusions.

Nor, as always, does Coleman have a clue what to do with low-ability students. And as always, Coleman seems to believe that nobody anywhere is already doing any of this, which is unvarnished baloney. Coleman remains that guy who thinks that because he just had a Big Thought, he must be the first and only person to ever have that thought.

Also, because of that overwhelming (but still to this day secret and unseen) data again, Coleman is sure that academic literacy in these areas must be achieved by ninth grade, or the child is doomed.

Evidence

Coleman's second literacy shift is to focus on evidence. This is one of his best-known hobby horses-- writing must be done within the four corners of the text, and while this is not the speech in which he said it, he comes close to his classic "no one gives a shit what you think" line about writing. He also gets in a shot at how Kids These Days are all up in the texting, which may seem inconsequential, but speaks to the thread running through reformsterism about how modern kids are just awful and need to be whipped into shape.

But here Coleman again assumes that education is only about preparing for the workplace or college (which is where you go to win access to a better workplace), and that we don't need that creativity shit there.

Text Complexity

Coleman's third big shift is toward text complexity, and back in 2011 he thinks that there are people who can actually measure this fuzzy and ill-defined quality of a text. He acknowledges that leveled reading is important for developing reading vocabulary and a love of reading, and see-- this is why we go back and look at these old documents, because sometimes we discover things that were lost in translation. Coleman seems to be saying that core instruction has to be "complex" in order for behinder students to catch up, level-wise, but it's important that other stuff meets the students wherever they are. Which seems different from the more recent policy of making students read above frustration level all the time.

Of course, Coleman's original idea is baloney as well. The plan is we'll find the slowest runners in the race, and we'll get them to run not only as fast as the race leaders, but actually faster so that they can catch up. This seems.... unlikely.

So What To Do Now for Literacy Education?

So what are these education leaders supposed to do over the next two years?

First, be all evidence-based, all the time, and by making students cite evidence for every answer, you'll also push teachers to only ask questions that can be answered with evidence. Because opinions are for dopes. Also, this is as good a time as any to note that after all these years, we are still waiting for any of the evidence and data that allegedly supports the Core, as well as any evidence that BS Testing improves education, as well as evidence that any of these reforms have done any good anywhere. Always remember, boys and girls-- if you're powerful and sure you're right, you don't have to provide evidence to your Lessers. Evidence is for the common people (kind of like Common Core).

Second, rip all those damn storybooks out of the kids' hands and "flood" your schools with informational texts. On the high school level Coleman has some specific ideas about how to "challenge" teachers on the literacy front, and it's in line with what we've heard before. I've responded to Coleman's essay "Cultivating Wonder" before, and if you want to see me rant about his ignorance of how to address reading, you can take a look at that.

Fixing Teachers

For the umpteenth time Coleman transitions by noting that he's saying controversial things and people don't like him because of it, ha ha, and he reminds me mostly of the guy who posts on Facebook, "Most of you aren't going to read this, but--" as a way to humblebrag about how he's so special that most people just don't get him, but it's a cross you have to bear when you're awesome.

Anyway, he pooh-poohs traditional teacher eval language like "use data to inform instruction" and "plan, engage, revise" and offers his own superior plan. Focus on these five areas:

1) Is a high-quality complex text under discussion?
2) Are high quality text-dependent questions being asked?
3) Is there evidence of students drawing from the text in their answers and writing?
4) How diverse a set of students are providing your evidence?
5) What is the quality of teacher feedback?

And then Coleman puts his own severe conceptual limitations on display, worth noting not as a way of picking on Coleman, but because these shortcomings are hardwired into the Core, the Core tests, and the evaluations based on the Core tests.

To me that is a much more exciting set of criteria to engage with a literacy teacher about than, "Did you have a plan? How were your objectives? Were your students engaged?" Who can determine these things? The things I just described to you are countable. That is, in the best meaning of accountable, they are literally things you can count. And so I'd ask you to think about literacy in this way. While literacy seems like the most mysterious and vague and kind of touchy-feely of our disciplines, I think it can be much improved by daring to count within literacy, and by daring to observe the accumulation of these kinds of facts.

To insist that the only things that matter are those which can be assigned a number is symptomatic of a tiny, tiny frame of reference, a deeply limited view of what it means to be human. But the answer to the question, "Who can determine these things?" is simple-- trained, experienced, professional educators. Coleman's real problem (and that of the reformster movement as a whole) is not that nobody can determine such things, but that it's hard to put them in a frame of reference that makes sense to somebody whose cramped and meager understanding of education and humanity can only grasp numerical values and concrete nouns.

Exemplars

Coleman figures after a year you'll have a collection of exemplars, such as the legendary Gettysburg Address lesson, in which it takes us three to five days to pick apart the rhetorical tricks of the speech without ever touching the historical background or the human implications of the war, Lincoln's choices, and our character as a nation.


There is actually some discussion at the close that gets back to that lesson and the questions of scaffolding, but Coleman doesn't really add anything useful. But then someone brings up

English Language Learners

Coleman says that they expect him to address adaptations, but instead he's going to call for an ELL Bill of Rights, which basically says that ELL students have the right to be faced with the exact same work that all the other students are doing. So back in 2011, we've already perfected the rhetorical trick of saying that we are doing students a favor by demanding they do work beyond their capabilities, a piece of educational malpractice still enshrined in federal policy. So it's pretty much the same policy as Sarah Palin's "if you come to America, speak American," only with a smile and some complimentary words attached.

Enough already

I agree. That brings us to the end of the transcript. Though Coleman provides some strokes for the event's organizers, as always, he leaves the audience with the impression that he pretty much whipped up the whole Common Core himself. And though he talks a lot about the evidence and support and data that undergird the Core, he doesn't actually mention any of it specifically.

And if you think you haven't suffered enough, here's the actual video of the event. But don't say you weren't warned.


Saturday, September 12, 2015

Implementationism and Barber

This week, the Education Delivery Institute is delighted to announce a new book/marketing initiative co-authored by Nick Rodriguez, Ellyn Artis, and Sir Michael Barber. Rodriguez and Artis may not be familiar to you, but Barber is best known as the head honcho of Pearson. So you know where this is headed.

This is Nick Rodriguez, a personal trainer in Houston. Not the same guy.

Their new book has the more-than-a-mouthful title Deliverology in Practice: How Education Leaders Are Improving Student Outcomes, and it sets out to answer the Big Question:

Why, with all the policy changes in education over the past five years, has progress in raising student achievement and reducing inequalities been so slow?

In other words-- since we've had full-on reformsterism running for five years, why can't they yet point to any clear successes? They said this stuff was going to make the world of education awesome. Why isn't it happening?

Now, you or I might think the answer to that question could be "Because the reformy ideas are actually bad ideas" or "The premises of the reforms are flawed" or "The people who said this stuff would work turn out to be just plain wrong." But no-- that's not where Barber et al are headed at all. Instead, they turn back to what has long been a popular excuse explanation for the authors of failed education reforms.

Implementation. 

"Well, my idea is genius. You're just doing it wrong!" is the cry of many a failed genius in many fields of human endeavor, and education reformsters have been no exception.

Just an implementation problem. As in, don't implement these into your digestive system.

I have trouble wrapping my head around the notion that implementation is somehow separate from conception. Cold fusion is a neat concept, but the fact that it cannot actually be implemented in the world we physically inhabit renders it kind of useless. Politics are particularly susceptible to the fallacy that My Idea Is Pure Genius and if people would just behave the way I want them to, it will all work out brilliantly.

"Millions starving? Don't worry-- it's just an implementation problem."

The implementation fallacy has created all sorts of complicated messes, but the fallacy itself is simply expressed:

There is no good way to implement a bad idea. 

Barber, described in this article as "a monkish former teacher,"  has been a champion of bad ideas. He has a fetish for data that is positively Newtonian. If we just learn all the data and plug it into the right equations, we will know everything, which makes Michael Barber a visionary for the nineteenth century. Unfortunately for Barber, in this century, we're well past the work of Einstein and the chaoticians and the folks who have poked around in quantum mechanics, and from those folks we learn things like what really is or isn't a solid immutable quality of the universe and how complex systems (like those involving humans) experience wide shifts based on small variables and how it's impossible to collect data without changing the activity from which the data is being collected.

Barber's beliefs in standardization and data collection are in direct conflict with the nature of human beings and the physical universe as we currently understand it. Other than that, they're just as great as they were 200 years ago. But Barber is a True Believer, which is how he can say things like this:

“Those who don’t want a given target will argue that it will have perverse or unintended consequences,” Sir Michael says, “most of which will never occur.” 

Yup. Barber fully understands how the world works, and if programs don't perform properly, it's because people are failing to implement correctly.

Nothing wrong with the suit. It's just an implementation problem.

Fortunately, Barber has a system for fixing the implementation issue.

Deliverology 

According to this piece in the Economist, Barber was early on inspired by a 1995 book by Mark Moore, Creating Public Value, a work also popular in the Clinton administration. Barber went on to develop his own version of How To Get Things Done, which supposedly was at first mockingly called Deliverology, a term that Barber embraced. Google it and you'll find it everywhere, generally accompanied by some version of these steps (here taken from a review of the new book):

  • Set clear goals for students, establish a Delivery Unit to help your system stay focused on them, and build the coalition that will back your reforms.
  • Analyze the data and evidence to get a sense of your current progress and the biggest barriers to achieving your goals.
  • Develop a plan that will guide your day-to-day work by explicitly defining what you are implementing, how it will reach the field at scale, and how it will achieve the desired impact on your goals.
  • Monitor progress against your plan, make course corrections, and build and sustain momentum to achieve your goals.
  • Identify and address the change management challenges that come with any reform and attend to them throughout your delivery effort.

You're going to what with a delivery unit??

There are so many things not to love about this approach. Personally, I'm very excited about working as part of a Delivery Unit, and look forward to adding Delivery Unit to my resume. And "a coalition that will back your reforms" sounds so much nicer than "posse of yes-persons." I don't really know what "reach the field at scale" is supposed to mean, but it sounds important! Nor has it escaped my notice that this whole procedure can be used whether you are teaching humans, training weasels, or manufacturing widgets. 

But the most startlingly terrible thing about deliverology is that it allows absolutely no place for reflection or evaluation of your program. Surround yourself with those who agree. Anything that gets in your way is an "obstacle." And at no point in the deliverology loop do I see a moment in which one stops to ask, "So, is our set of goals actually doing anybody any good? Let's take a moment to ask if what we're trying to do is what we should be trying to do."

This is another problem of implementationism-- the belief that implementing a program is completely separate from designing and creating it in the first place. Implementation should be an important feedback loop. If you start petting your dog with a rake and the dog starts crying and bleeding, the correct response is not, "We have an implementation problem. We'll need to hold down and silence the animal so that it doesn't provide a barrier to implementing our rake-petting program."

No, the proper response is, "Holy hell! Petting my dog with a rake is turning out to be a terrible idea! I should start over with some other idea entirely!"

My dates all end badly, but I'm sure it's not me. Must be an implementation problem.

Implementationism and Deliverology Misdirection

What these ill-fated approaches do is allow guys like Barber to focus attention everywhere except the place where the problem actually lies. 

If I develop a cool new unit for my classroom, and it bombs terribly, I can certainly look at how I implemented and presented the unit. But I would be a fool (and a terrible teacher) not to consider the possibility that the unit just needs to be heavily tweaked or just plain scrapped. As long as Barber and his acolytes insist that there's nothing wrong with Common Core or high-stakes testing or massive data collection to feed a system that will allow us to tell students what breakfast they should eat, they will face an endless collection of implementation problems.

Just a little implementation problem

If the Titanic had never hit an iceberg on her maiden voyage, she might have looked like she was having no implementation problems at all. But between her bad design and inadequate safety measures, some sort of disaster was going to happen sooner or later. 

Deeply flawed design yields deeply flawed results, and quality of implementation won't change that a bit. 

Five years (at least-- depending on how you count) of reformster programs have yielded no real success stories. This is not an implementation problem, and reformsters have to look at that and consider the possibility that their beloved reformy ideas have fundamental problems. To be fair, some have. But those who won't simply can't be taken seriously. Even if they write books.