Thursday, February 4, 2016

Breaking Down the Walls for CBE

In the discussions of Competency Based Learning (or Outcomes Based Education or Performance Based Stuff), one supporting argument that emerges from time to time is that CBE will "break down the walls between curriculum and assessment."

On the one hand, I see the appeal. In a perfect world, education shouldn't really have to stop cold for assessment, and the burden really should be on the teacher to discover what the student knows and can do, rather than putting the burden on the student to sing and dance her Proof of Achievement. Just keep learning, students, and the teacher will figure out what you know and what you can do by using the Power of Watching.

This, in fact, is what the best teachers do-- constant monitoring and collection of data, gathered by our eyes and ears, and stored and processed in our brains. That's a huge part of the job, and we've already been doing it for ages.

The unspoken issue here is that it's not enough for some folks that the teacher and the student know what's happening-- it has to be made visible to an assortment of third parties. Some of those third parties like, say, building administrators, are not a stretch. But having to make learning visible to third parties such as Pearson or a far-off government bureaucrat is more of a challenge, not unlike having to prove to a complete stranger that you have a good marriage. Actually doing the thing (teaching, learning, marriaging) is one challenge; giving outward and visible proof of the thing to other separate people is a whole other challenge.

In other words, breaking down the wall between curriculum and assessment for students, teachers, and maybe even building admins-- that's easy. Breaking it down in a way that still leaves a big fat data trail for off-site lookie-loos is more problematic.

Now the data and record of progress largely carried in my head and in my classroom records, folders, portfolios, etc. isn't good enough. I have to create some sort of digitized data collection, and that means one of two things has to happen:

1) Data via clerical work. Part of my job becomes data entry-- repeatedly, relentlessly, daily plugging in the data that I've collected via quiz and worksheet and exercise and observation, clickity-clacking away at my computer to get it all recorded in whatever format the provided software (because you know nobody is letting me pick that out myself-- it'll have to be compatible with all manner of systems) demands of me.

2) Direct data collection. All of the student learning activities are done on computer, so that all the data stirred up by whatever company-provided activities are involved will be automatically harvested while the student works. The student does all of her significant classwork on the computer.

There is a third option--

3) Worst of both worlds. In a nightmare scenario, my district gets a data harvesting system and I am required to digitize all of my teacher-created assignments, quizzes, tests, etc so I get the pleasure of hours and hours of mind-and-finger numbing clerical work, while my students still get to enjoy education-by-screen.

All of these options suck. Option one represents a huge increase in the work hours of a teacher, which means either blowing off your family or cutting back on actual instruction or, most likely, both. More data entry, less actual teaching. This is not a win for teachers or students. Option two has already been tried in various forms, most notably the Rocketship Academies that were going to change the education world by plunking students at computers all day. That was a fail. Creating a system in which all student educational activities must come via computer is expensive, frustrating, and counterproductive.

Both methods of data collection also pressure the process to create materials and activities that fit the limitations of the computers, which means, among other things, no real writing instruction and no critical thinking. Because the center of this system is a number-crunching computer-driven data-gobbling monster, it can't help but replicate all the shortcomings and failings of Big Standardized Tests on a large scale.

Advocates will claim that all this data collection will help teachers teach better. They are full of baloney. Any teacher who is any good at all already does all the data collection possible, and there is nothing that running it through the computer will help that teacher do. Conversely, teachers that are Not So Great will not be improved by giving them big data printouts to examine.

I don't mean to diss this kind of data collection entirely-- there are some very specific, very focused areas in which having the data-crunching assistance of a computer can be helpful for a teacher and her students. But as an approach to the Whole Educational System, it's baloney.

Breaking down the wall between curriculum and assessment is a very worthy goal. That's why teachers have been doing it since the invention of dirt, and all without the benefit of any highly-marketed highly-profitable software.

Wednesday, February 3, 2016

The Search for Great Teachers

Bellwether Education Partners is a right-leaning pro-reform outfit that often comes across as the Fordham Institute's little brother. Like most such outfits, they like to crank out the occasional "report," and their latest is an interesting read. "No Guarantees" by Chad Aldeman and Ashley LiBetti Mitchel is a look at the teacher creation pipeline that asks the subheading question, "Is it possible to ensure that teachers are ready on day one?"

The introduction sets the tone for the piece:

The single best predictor of who will be a great teacher next year is who was a great teacher this year.
The second best predictor is... Well, there really isn’t one that’s close. 

And that carries right through to the title of the first section-- "We Don't Know How to Train Good Teachers."

Let me be clear right up front. My own teacher training came from a not-so-traditional program, and my experience with student teachers over the decades does not make me inclined to give uncritical spirited defense of our current techniques for preparing teachers for the classroom. So I'm not unsympathetic to some of Bellwether's concerns. I just think they miss a few critical points. Okay, several. Let's take a look at what they have to say.

What We Don't Know

The authors note that teacher preparation has always focused on inputs, and those inputs include a lot of time and a buttload of money. But there's not much research basis to support those inputs. And they break down the various points at which we don't know things.

"We don't know which candidates to admit." Tightening admission requirements, checking SAT scores, tough admission tests-- these all seem like swell ideas to some folks, but there's no proof that tougher admissions policies lead to better teachers. This makes sense-- why would things like SAT scores, which are not highly predictive of much of anything, predict who will succeed in a classroom?

"We don't know what coursework to require-- if any." On the one hand, there are many teacher preparation programs that involve ridiculous, time-wasting courses. I'd bet that almost every teacher who ever worked with a student teacher has stories of playing that game where, during a supervisory visit from the college, the student and co-operating teacher pretend to be using some method endorsed by the university and implemented by approximately zero real live classroom teachers. On the other hand, if you think a teacher can be adequately prepared without any methods courses at all, or courses dealing with child development-- that any random assortment of courses is as good as any other assortment-- then you are just being silly.

"We don't know what the right certification requirements are." The authors don't have an actual point here other than, "Why shouldn't people who have been through a short-- say, five weekish-- training program be just as certifiable as people who studied teaching?" The reformster vision is deeply devoted to the idea that The Right People don't need any of that fancy-pants teacher training, and even when they are being relatively even-handed, they can't get past that bias.

"We don't know how to help teachers improve once they begin teaching."  This has been covered before, in the TNTP "report" The Mirage. The short answer is that the most effective professional development happens when control of it is in the hands of the teachers themselves. The disappointing or non-existent results are not so much related to Professional Development as they are related to Programmed Attempts To Get Teachers To Do What Policymakers Want Them To, Even If The Ideas Are Stupid or Bad Practice.

What We Really Don't Know

What Bellwether and other reformsters really don't know is how to tell whether any of these factors make a difference or not. What they really don't know is how to identify a great teacher. Every one of the items above is dismissed on the grounds of showing no discernible effect on "student achievement" or "teacher effectiveness" or other phrases that are euphemisms for "student scores on standardized tests."

This is a fair and useful measure only if you think the only purpose of a teacher, the only goal of teaching as a profession, is to get students to score higher on standardized tests. This is a view of teaching that virtually nobody at all agrees with (and I include in that "nobody" reformsters themselves, who do NOT go searching for private schools for their children based on standardized test scores).

Bellwether's metric and criticism is the equivalent of benching NBA players based on how well their wives do at macrame. The Bellwether criticism only seems more legit because it overlaps with some issues that deserve some thoughtful attention. The problem is that all the thoughtful attention in the world won't do any good if we are using a lousy metric to measure success. Student standardized test scores are a lousy metric for almost anything, but they are a spectacularly lousy metric for finding great teachers.

So Let's Talk About Outcomes

Next up, we contemplate the idea of measuring teacher preparation programs by looking at their "outcomes." This has taken a variety of forms, the most odious of which is measuring a college teaching program by looking at the standardized test results of the students in the classrooms of the graduates of the program, which (particularly if you throw some VAM junk science on top) makes a huge baloney sandwich that can't be seriously promoted as proof of anything at all. This is judging an NBA player based on the math skills of the clerk in the store that sells the wife-made macrame.

Another outcome to consider is employment rates, which is actually not as crazy as it seems; at the lowest ebb of one local college's program, my district stopped sending them notices of vacancies because their graduates were so uniformly unprepared for a classroom. But of course graduates' employment prospects can be affected by many factors far outside the university's control.

Aldeman and Mitchel provide a good survey of the research covering interest in outcomes, and they fairly note that efforts at outcome-based program evaluations have run aground on a variety of issues, not the least of which is that the various models don't really find any significant differences between teacher prep programs. Focusing on outcomes, they conclude, seems to be a good idea right up to the point you try to actually, practically do it.

What Might Actually Work

All of this means that policymakers are still looking for the right way to identify effective teacher preparation and predict who will be an effective teacher. Nothing tried so far guarantees effective teachers. Yet there are breadcrumbs that could lead to a better approach. 

Aldeman and Mitchel have several breadcrumbs that strike them as tasty. In particular, they note that teacher quality is fairly predictable from day one-- the point at which teachers are actually in a classroom with actual students. Which-- well, yes. That's the point of student teaching. But I agree-- among first year teachers I think you find a small percentage who are excellent from day one, a smaller percentage that will be dreadful (the percentage is smaller because student teaching, done right, will chase away the worst prospects), and a fair number who can learn to be good with proper mentoring and assistance.

But Bellwether has four recommendations. They make their case, and they note possible objections.

Make it easier to get in

Right now getting into teaching is high risk, high cost, and low reward. There's little chance for advancement. There is considerable real cost and opportunity cost for entering the profession, which one might suppose makes fewer people likely to do so.

Drop the certification requirements, knock off foolishness like EdTPA, punt the Praxis, and just let anybody who has a hankering into the profession. Local schools would hire whoever they felt inclined to hire. Teachers might still enroll in university programs in hopes that it will improve their chances-- "add value" as these folks like to put it. But the market would still be flooded with plenty of teacher wanna-bes. And I'm sure that if any of these were open to working for lower pay because it hadn't cost them that much to walk into the profession, plenty of charter and private and criminally underfunded public schools would be happy to hire these proto-teachers.

The authors note the objection to untrained teachers in the classroom, and generally lowering the regard for the profession by turning it into a job that literally anybody can claim to be qualified for. The "untrained teacher" objection is dismissed by repeating that there's no proof that "training" does any good. At least, no proof that matches their idea of proof.  As for the regard for the profession, the authors wax philosophical-- who really knows where regard for a profession comes from, anyway??

What did they miss here? Well, they continue to miss the value of good teacher preparation programs which do a good job of preparing teachers for the classroom. But even the worst programs screen for an important feature-- how badly do you want it? One of the most important qualities needed to be a good teacher is a burning, relentless desire to be a good teacher, to be in that classroom. Even if a program requires candidates to climb a mountain of cowpies to then fill out meaningless paperwork at the top, it would be marginally useful because it would answer the question, "Do you really, really want to be a teacher?"

The teaching profession has no room for people who are just trying it out, thought it might be interesting, figured they might give it a shot, want to try it for a while, or couldn't think of anything else to do. Lowering the barriers to the profession lets more of those people in, and we don't need any of them.

Make schools and districts responsible for licensing teachers

Again, this is an idea that would make life so much easier for the charters that Bellwether loves so much. It's still an interesting idea-- the authors are certainly correct to note that nobody sees the teacher being a teacher more clearly or closely than the school in which that teacher works. The authors suggest that proto-teachers start out in low-stakes environments like summer school or after-school tutoring, both of which are so far removed from an actual classroom experience as to be unhelpful for our purposes. On top of that, it would seriously limit the number of new teachers that a district could take on, while requiring them to somehow bring those proto-teachers on a few years before they were actually needed for a real classroom, requiring a special school administrator's crystal ball.

In other words, this idea is an interesting idea, but it will not successfully substitute for making sure that a candidate has real teacher training in the first place.

The other huge problem, which they sort of acknowledge in their objections list, is that this only works if the school or district is run by administrators who know what the hell they're doing and who aren't working some sort of other agenda. A lousy or vindictive or just plain messed up administrator could have a field day with this sort of power. Possible abuses range from "you'll work an extra eight hours a week for free in exchange for certification" to "you'll serve as the building janitor for free to earn your certification" to "come see if you can find your teaching certification in my pants."

Measure and Publicize Results 

Baloney. This is the notion of a market-driven new business model for teacher preparation, and it's baloney. We've already established that states can't collect meaningful data on teacher prep programs, and Bellwether wants to see the data collection expanded to all the various faux teacher programs. They've already said that nobody has managed to scarf up data in useful or reliable quantities; now they're saying, well, maybe someone will figure out how soon. Nope.

Unpack the Black Box of Good Teaching

This boils down to "More research is required. We should do some." But this is problematic. We can't agree on what a good teacher looks like, or even what they are supposed to be doing. Bellwether becomes the gazillionth voice to call for "new assessments that measures [sic] higher-order thinking," which is just unicorn farming. Those tests do not exist, and they will never exist. And their suggestion of using Teach for America research as a clue to great teaching is ludicrous as well. There is no evidence outside of TFA's own PR to suggest that TFA knows a single thing about teaching that is not already taught in teaching prep programs across the country-- and that several things they think they know are just not true.

Another huge problem with unpacking the black box is the assumption that the only thing inside that box is a teacher. But all teachers operate in a relationship with their students, their school setting, their community, and the material they teach. The continued assumption that a great teacher is always a great teacher no matter what, and so this fixed and constant quality can be measured and dissected-- that's all just wrong. It's like believing that a great husband would be a great husband no matter which spouse he was paired up with, that based on my performance as a husband to my wife, I could be an equally great partner for Hillary Clinton or Taylor Swift or Elton John or Ellen Degeneres. I'm a pretty good teacher of high school English, but I'm pretty sure I would be a lousy teacher of fifth grade science.

Great teaching is complex and multifaceted and, on top of everything else, a moving target. It deserves constant and thorough study because such research will help practitioners fit more tools into their toolbox, but there will never be enough research completed to reduce teaching to a simple recipe that allows any program to reliably cook up an endless supply of super-teachers suitable for any and all schools. And more to the point, the research seems unlikely to reveal that yes, anybody chosen randomly off the street can be a great teacher.

Operating at that busy and complicated intersection requires a variety of personal qualities, professional skills, and specialized knowledge.

Bottom Line

There are plenty of interesting questions and criticisms raised by this report, but the conclusions and recommendations are less interesting and less likely to be useful for anyone except charters and privatizers who want easier access to a pliable and renewable workforce. Dumping everything into the pool and just buying a bigger filter is not a solution. Tearing down the profession and pretending that no training really matters is silly. We do need to talk about teacher preparation in this country, but one of the things we need to talk about is how to keep from poisoning the well with the bad policies and unfounded assumptions of the reformster camp.

There are some good questions raised by this report, but we will still need to search for answers.



USED Supports Unicorn Testing (With an Irony Saddle)

Acting Pretend Secretary of Education John King has offered further guidance as a follow-up to last year's Testing Action Plan, and it provides a slightly clearer picture of the imaginary tests that the department wants to see.

Here are the characteristics of the Big Testing Unicorn that King wants to see:

Worth taking: By "worth taking," King means aligned to the actual classroom, and requiring "the same kind of complex work students do in an effective classroom and the real world, and provide timely, actionable feedback." There are several things to parse here, not the least of which is "timely, actionable feedback" for whom, and for what purpose? Is King's ideal test a formative assessment, and if so, is the implication that it shouldn't be used for actions such as grading at all?

"Worth taking" is one of those chummy phrases that sounds like it means something until you are pinned between the rubber and the road trying to figure out what it means exactly. In my own classroom, I certainly have standards for whether or not an assessment is worth giving, but that decision rests heavily on my particular students, the particular subject matter, and the particular place we are in our journey, all of which also connects to how heavily weighted the grade is and if, in fact, there will be a grade at all.

But King's vision of a test aligned to both classroom and the real world is a bit mysterious and not very helpful.

High quality: This means we hit the full range of standards and "elicits complex student demonstrations of knowledge" and is supposed to measure both achievement and growth. That is a huge challenge, since complex constellations of skills and knowledge are not always easily comparable to each other. Your basketball-playing child got better at foul shots and dribbling, but worse at passing and footwork. She scores more points but is worse at teamwork. Is she a better player or not?

Time-limited: "States and districts must determine how to best balance instructional time and the need for high-quality assessments by considering whether each assessment serves a unique, essential role in ensuring all students are learning."

So, wait. The purpose of an assessment is to ensure that all students learn? How exactly does a test ensure learning? It can measure it, somewhat. But ensure it?  Do you guys still not get that testing is not teaching?

This appears to say, "Don't let testing eat up too much instructional time." Sure. Of course, really good testing eats up almost no instructional time at all. On this point, the Competency Based Learning folks are correct.

Fair: The assessments are supposed to "provide fair measures of what all students, including students with disabilities and English learners, are learning." So this uber-test will accurately assess all levels of ability, from the very basement to the educational penthouse. King doesn't have any idea of how to do this, but he does throw the word "robust" in here.

Fully transparent to students and parents: King lists every form of transparency except the one that matters-- showing exact item-by-item results that include the question, the answers, and an explanation of why the test manufacturer believes their answer is the correct one. What King wants to make transparent is the testing PR-- reasons for the test, source of the mandate for the test, broad ungranulated reports of results, what parents can do even though we won't tell them exactly how their child's test went.

BS Tests currently provide almost no useful information, primarily because the testing system is organized around protecting the intellectual property rights of the test manufacturers. Until we address that, King's call for transparency is empty nonsense.

Just one of multiple measures: No single assessment should decide anything important. I look forward to the feds telling some states that they are not allowed to hold third graders back because of results on the BS reading test.

Tied to improved learning: "In a well-designed testing strategy, assessment outcomes should be used not only to identify what students know, but also to inform and guide additional teaching, supports, and interventions." No kidding. You know what my unattainable unicorn is? A world in which powerful amateurs don't make a big deal out of telling me what I already know as if they just discovered it themselves.

And your saddle of irony: Every working teacher reading this or the original letter has had exactly the same thought-- BS Tests like the PARCC and SBA and all the rest of them absolutely fail this list. The BS Tests don't measure the full range of standards, don't require complex, higher-order responses, suck up far too much time, cannot measure the full range of student ability, are supremely opaque, are given way too much weight as single measures, and are useless as tools for improving instruction. They are, in fact, not worth taking at all. Under this test action plan, they should be the first to go.

More swell ideas.

The letter comes with a five-page PS, ideas from the feds about how to improve your testing picture, or at least ways to score money from the department for that alleged purpose.

You could audit your state tests. You could come up with cool data-management systems, because bad, useless data is always magically transformed when you run it through computer systems. You might train teachers more in "assessment literacy," because we am dummies who need to learn how to squint at the ugly tests in order to see their beauty. You could increase transparency, but you won't. You could increase the reliability and validity of the tests-- or at least check and see if they have any at all to start with.

Or you could just take a whole bunch of testing materials and smack yourself over the head with them. Any of these seem like viable options for running your own personal state-level unicorn farm.

Tuesday, February 2, 2016

NC: TeachStrong Solves The Mystery of Teaching

Remember TeachStrong? It was launched by the folks at CAP to create some tasty PR about fixing teachers, complete with a not-very-impressive list of Ways To Make Teachers Swell. They rounded up most of the usual Faux-Lefty Reformster Suspects, including virulently anti-teacher and anti-teacher-union groups like DFER, and despite all this, the initiative also suckered NEA and AFT into joining, a decision so...um, let's say "counter-intuitive" that Randi Weingarten had to write a whole post explaining WTF she was thinking. (Plus, I stand by my theory that this group is about covering Hillary Clinton's education flank).

Well, TeachStrong is up to things. Specifically, they are going to host a moderated discussion in North Carolina on February 17th (roughly a month before the primary election) to discuss "the importance of modernizing and elevating the teaching profession." They will even follow it up with some local educators (including the 2014 Teacher of the Year, and an association president) who will wax poetic about "the impact that TeachStrong's principles would have on their career and the entire teaching profession." Moderators include a director from Project LIFT, a "public-private" turnaround biz, and CAP.

TeachStrong's message that we must work to modernize and elevate the teaching profession is especially relevant in North Carolina. The Charlotte area alone had nearly 1,000 teachers resign before the 2015 school year, and the state has experienced a 20 percent drop in enrollment in teacher preparation programs over the last 3 years. 

Yes, the exodus of teachers from North Carolina and the reluctance of new recruits to join up-- that is a real puzzler, that is. Regular readers of this space know that I have a few theories. North Carolina has been hammering away at its educational foundation with big heavy hammers. Let's see. They tried to do away with tenure and froze wages for years, then cleverly tried to throttle two birds with one heavy fist by trying to make teachers choose between a (possible) raise and job security. Eventually, they created a new insulting salary schedule. Meanwhile, the state's Lt. Governor required them to rewrite a report about their crappy charter schools so that it was instead about how wonderful their charter schools are. They have cut school budgets, fired aides by the thousands, and installed terrible punitive regulations such as Pass-This-Standardized-Test-or-Fail-Third-Grade rules.

In other words, while TeachStrong is concerned about bringing the teaching profession into the future, in North Carolina, it's going to take some work just to bring the teaching profession into the present.

Anything that would advance the cause of teaching and public education in North Carolina would be welcome, but I'm not so sure that TeachStrong is the outfit to do it. This discussion could theoretically involve a head-on hit at the huge bad moves that North Carolina has made in education, or it could end up being pretty words to use while tap-dancing around the landmines that North Carolina has strewn around the public school landscape. But I'm not encouraged that they discuss the drop in the teacher supply as if it's some sort of mysterious inexplicable random act of nature, rather than the fairly predictable outcome of years of anti-teacher, anti-student, anti-education policies in the state. There are plenty of good, caring, dedicated teachers in North Carolina (I know-- I talk to some of them), and they deserve far better than what the state has been dumping upon them. TeachStrong's panel discussion should start with that.




NPE: National Public Ed Report Card

Every reformy group in the country regularly issues "report cards" about how well states are pursuing one reformster policy or another. We have been long overdue for a report card for how well states are defending and supporting the public education system that is one of the pillars of democracy. Now that wait is over.

The Network for Public Education today releases its 50 State Report Card, providing a quick, clear, simple look at how the various states are doing when it comes to supporting public education.

NPE has developed the grades based on six criteria; the actual research and point breakdown were done with the assistance of Francesca Lopez, Ph.D., and a research team at the University of Arizona. And yes, NPE is aware of the irony of using letter grades, a rather odious tool of reformsters.

As a matter of principle, NPE does not believe in assigning a single letter grade for evaluation purposes. We are opposed to such simplistic methods when used, for example, to evaluate schools. In this case, our letter grades carry no stakes. No states will be rewarded or punished as a result of our judgment about their support or lack of support for public education.

States ended up with a GPA based on the six factors. The top state score was a 2.5 (Iowa, Nebraska, and Vermont) and the lowest was Mississippi with a 0.50. Let's look at the best and the dimmest in each category.

No High Stakes Testing

NPE looked for states that rejected the use of the Big Standardized Test as a graduation exam, a requirement for student promotion, or a factor in teacher evaluation.

Grade A: Alabama, Montana, Nebraska, New Hampshire, and Vermont

Flunkeroonies: Mississippi

Professionalization of Teaching

Here NPE looked at nine factors, including experienced teacher pool, average early and mid-career salaries, rejection of merit pay, teacher attrition and retention rates, tenured teachers, high requirements for certification, and proportion of teachers prepared in university programs. In other words, is teaching actually treated like a life-long profession for trained professionals, or a quick pass-through temp job for anybody off the street?

Grade A: Well, that's depressing. Nobody. Iowa and New York scored B's.

Bottom of the Barrel: Arizona, Colorado, Florida, Indiana, North Carolina, and Texas. No surprises here, particularly with North Carolina and Florida, which have gone way out of their way to trash teaching.

Resistance to Privatization

Of course, dismantling public education and selling off the parts to profiteers has been a signature feature of reformster policies. So NPE looked at resistance to choice in all its various porcine lipstickery formats, resistance to using public tax dollars to pay for private schools, controls on charter growth, and rejection of the parent trigger laws.

Grade A: Alabama, Kentucky, Montana, Nebraska, North Dakota, South Dakota, and West Virginia

The Pits: Arizona, California, Colorado, Florida, Georgia, Indiana, Louisiana, Minnesota, Mississippi, North Carolina, Ohio, Tennessee, and Texas. Ka-ching.

School Finance 

Equitable and adequate funding is the great white whale of education. Even when states put better funding formulas in place or are forced and fined by the courts to get their act together (looking at you, Washington), there's a whole lot of fail out there. NPE looked at per-pupil expenditures adjusted for poverty and district size, school funding as a part of state gross product, and how well the state addresses the need for extra resources for high-poverty areas.

Grade A: New Jersey. That's it.

Stingy McUnderfunding: Alabama, Arizona, Idaho, Nevada, North Dakota

Spending Taxpayer Resources Wisely 

This is where NPE sets its spending priorities (contrary to some critical opinion, public ed supporters do not simply believe that public ed should have All The Money). The priorities that NPE focused on were lower class size, less variation in class size by school type, more pre-K and full-day K, and fewer students in cyber schools.

Grade A: Well, nobody. Montana gets a B.

Centers of Foolishness: Idaho, Nevada, and Washington

Chance for Success

This category looks at societal factors that can have an impact on student success. NPE researchers focused on proportion of students not living in low-income households, proportion of students living in households with full-time employment that lands above the poverty line, and how extensively schools are integrated by race and ethnicity.

Grade A: None. But ten B's, so there's some hope here.

Failureville: Alabama, California, Georgia, Mississippi, Montana, and Texas


The report comes with an appendix that goes into more detail about specific methodologies. In fact, one of the general strengths of the report is that it's easy to take in the results at a quick and simple level or to drill down for more detail. The NPE website also has a handy interactive map that lets you take a quick look at each state's grade breakdown.



The report is handy for comparison, and for a depressingly clear picture of which states are beating up public education badly. It is transparent enough that you can discuss and debate some of the factors included in the findings. I can certainly see it as a tool for young teachers looking for a place to land.

Take some time to look through the report. It's not a pretty picture, but understanding where we are will help us develop more ideas about how to get where we need to be.

Monday, February 1, 2016

CCSS Flunks Complexity Test

The Winter 2016 issue of the AASA Journal of Scholarship and Practice includes an important piece of research by Dario Sforza, Eunyoung Kim, and Christopher Tienken, showing that when it comes to demanding complex thinking, the Common Core Standards are neither all that nor the bag of chips.

You may recognize Tienken's name-- the Seton Hall professor previously produced research showing that demographic data was sufficient to predict results on the Big Standardized Test. He's also featured in this video from 2014 that does a pretty good job of debunking the whole magical testing biz.

The researchers in this study set out to test the oft-repeated claim that The Core replaces old lower order flat-brained standards with new requirements for lots of higher-order thinking. They did this by conducting a content analysis of the standards themselves and applying the same analysis to New Jersey's pre-Core standards. They focused on 9-12 standards because they're more closely associated with the end result of education; I reckon it also allowed them to sidestep questions about developmental appropriateness.

The researchers used Webb's Depth of Knowledge framework to analyze standards, and to be honest and open here, I've met the Depth of Knowledge thing (twice, actually) and remain relatively unimpressed. But the DOK measures are widely loved and accepted by Common Coresters (I had my first DOK training from a Marzano-experienced pro from the Common Core Institute), so using DOK makes more sense than using some other measure that would allow Core fans to come back with, "Well, you just didn't use the right thing to measure stuff."

DOK divides everything up into four levels of complexity, and while there's a temptation to equate complexity and difficulty, they don't necessarily go together. ("Compare and contrast the Cat in the Hat and the Grinch" is complex but not difficult, while "Find all the references to sex in Joyce's Ulysses" is difficult but not complex.) The DOK levels, as I learned them, are

Level 1: Recall
Level 2: Use a skill
Level 3: Build an argument. Strategic thinking. Give evidence.
Level 4: Connect multiple dots to create a bigger picture.

Frankly, my experience is that the harder you look at DOK, the fuzzier it gets. But generally 3 and 4 are your higher order thinking levels.

The article is for a scholarly research journal, so there is a section about How We Got Here (mainstream started clamoring for students graduating with higher order smarterness skills so that we would not be conquered by Estonia). There's also a highly detailed explanation of methodology; all I'm going to say about that is that it looks solid to me. If you don't want to take my word for it, here's the link again-- go knock yourself out.

But the bottom line?

In the ELA standards, the complexity level is low. 72% of the ELA standards were rated as Level 1 or 2. That would include such classic low-level standards as "By the end of Grade 9, read and comprehend literature, including stories, dramas, and poems, in the grades 9–10 text complexity band proficiently, with scaffolding as needed at the high end of the range." Which is pretty clearly a call for straight-up comprehension and nothing else.

Level 3 was 26% of the standards. Level 4 was a whopping 2%, and examples of that include CCSS's notoriously vague research project standard:

Conduct short as well as more sustained research projects to answer a question (including a self-generated question) or solve a problem; narrow or broaden the inquiry when appropriate; synthesize multiple sources on the subject, demonstrating understanding of the subject under investigation

Also known as "one of those standards describing practices already followed by every competent English teacher in the country."

Math was even worse, with Level 1 and 2 accounting for a whopping 90% of the standards.

So if you want to argue that the standards are chock full of higher order thinkiness, you appear to have no legs upon which to perform your standardized happy dance.

But, hey. Maybe the pre-Core NJ standards were even worse, and CCSS, no matter how lame, are still a step up.

Sorry, no. Still no legs for you.

NJ's ELA standards worked out to 66% at Levels 1 and 2, 33% at Level 3, and a big 5% at Level 4.

NJ math standards? Level 1 and 2 are 62% (and only 8% of that was Level 1). Level 3 was 28%, and Level 4 was 10%.

The researchers have arranged their data into a variety of charts and graphs, but no matter how you slice it, the Common Core pie has less high order filling than NJ's old standards. The bottom line here is that when Core fans talk about all the higher order thinking the Core has ushered into the classroom, they are wrong.

How High Are the Standards?

Raise standards. High standards. Endless debate over whether the Core standards are higher or lower than the old standards, or the newer standards.

And nobody has any idea what any of it means.

I mean, I'm not an idiot. I understand what it means to say that I hold my students to a high standard, or that my classroom is based on having high standards, or that I hold the donuts I eat to a high standard. As a general principle, we all know what high standards are.

But as a matter of policy, "high standards" is really meaningless. In fact, it's worse than meaningless because it's a metaphor that obscures an important truth.

"High standards" suggests a two-dimensional model of education. It suggests a model in which all students are trying to climb exactly the same ladder in exactly the same direction. It's a single one-directional arrow, with all students progressing steadily, dutifully along the single path toward the single point.

It's a model that doesn't correspond to anything in human experience or behavior. Instead of the blind men and the elephant, we can tell the modern fable of the thousand blind administrators and the feds.

The blind administrators were called before the Department of Education. Looking down at them from his throne made of 95% excellent mahogany, the Secretary said, "Have you all led your schools to higher standards?"

"Yes," they all roared in reply. "We are all running schools where high standards rule."

"Excellent," said the Secretary. "You must each tell me, one at a time, and in greater detail, if your school has set high standards." And so the thousand blind administrators lined up to answer his question.

"Yes," said the first blind administrator. "We require our students to get the very highest scores on a standardized English test."

"Yes," said the second blind administrator. "Our students must get the very highest scores on a standardized math test."

"Yes," said the third blind administrator. "We insist that every one of our students leave our school with a positive, happy attitude about themselves."

"Yes," said the fourth blind administrator. "Every single one of our students must be physically fit."

"Yes," said the fifth blind administrator. "We demand that every student achieve competence on a musical instrument."

"Yes," said the sixth blind administrator. "Every one of our students must graduate with the tools to be an excellent scientist."

"Yes...."

Okay, it's a very long fable, because each one of the thousand administrators had set their school to a higher standard, and not one of them was like the other. Because when you try to fill the grand hollow platitude of "we must have high standards" with anything specific, you quickly realize that all the blather in the world can't fill that gaping cavern in a useful way.

Should we have high expectations for each of our students, demanding and encouraging that they become the best they can be? Absolutely-- that is fundamental to good classroom practice. But using "high standards" as a policy is useless, a thick slice of baloney that may make bureaucrats and politicians feel as if they're really Doing Something. You can't further a conversation with words that don't actually mean anything.