Saturday, March 22, 2014

Cloudy with a Chance of Data

There are so many reasons to be opposed to the business of mining and crunching data. We like to rail about how the data miners are oppressive and Big Brothery and overreaching. But there's another point worth making about our Data Overlords:

Data miners are not very good at their job.

My first wife and I divorced about twenty years ago. We have both since remarried and moved multiple times. And yet, I still get occasional pieces of mail for her here at my current home. The last time I looked at my credit report, it had me living at an address that she used after we split. I could try to get it changed but A) she is a responsible woman who I'm sure has excellent credit and B) have you ever tried to get info on your credit report changed?

As I work on this, several other browser windows are showing ads for K12. I cruised to some sites maybe two weeks ago doing research for some pieces about cyber charters, but now my browser and AdSense are sure I'm in the market for cyberschool. It is tempting to click the ads repeatedly in order to drain K12's ad budget of another wasted 25 cents, but I would have to live with the consequences.

My brother and I have an old game we sometimes play. When pollsters call us, we give answers opposite to our actual beliefs in order to feed the pollster false info. Because who says we can't or shouldn't?

Before anything of use can happen in the data cloud, two things must be true:

1) The data must be good.

The tools for collection must be accurate. Designing good data collection tools is hard. The Data Overlords are trying to convert all the tools of instruction and assessment into tools for data gathering, but that's not what they're generally designed to do. Most fundamentally, I collect data about a student to create a picture of that student, not to turn that student into one data point among millions.

But beyond the accuracy of the tool, there is the willingness of the data generators. I suspect this is a blind spot for Data Overlords-- they are so convinced of the importance of data collection that they don't necessarily understand that most of us feel no compelling reason to cooperate.

There is no moral imperative to help the Data Overlords gather accurate data.

2) The program for crunching it must be good.

In the late seventies I was studying the BASIC programming language, and our professor reminded us repeatedly that computers are stupid machines that happen to possess speed and long attention spans. If we tell them to do stupid things, they will-- but really, really fast! A computer is not one whit "smarter" than the person who programmed it.

If the person writing the software believes that knowing "2 + 2 = 5" means you're ready for calculus, the program will find many six-year-olds are prepared for math courses.
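Just to make that concrete, here's a toy sketch in Python-- the names and the "readiness" rule are entirely my inventions, not anybody's actual product-- of how a program faithfully encodes its author's assumptions, good or bad, and then applies them really, really fast:

```python
# A toy sketch (all names and the rule itself invented for illustration):
# the programmer's wrong theory of readiness, encoded as a rule the
# computer will now apply, at high speed, to millions of kids.

def ready_for_calculus(answers):
    # The author's belief that answering "5" to "2 + 2" signals calculus
    # readiness, faithfully and stupidly preserved.
    return answers.get("2 + 2") == "5"

students = {"Pat": {"2 + 2": "5"}, "Chris": {"2 + 2": "4"}}

for name, answers in students.items():
    print(name, "ready for calculus:", ready_for_calculus(answers))
# Pat gets flagged as calculus-ready; Chris, who can actually add, does
# not. The computer is not one whit smarter than the rule it was given.
```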

Put another way, a computer doesn't know how to predict anything that no human being knows how to predict, and it particularly doesn't know how to predict anything involving a series of complicated data points that the software writer failed to anticipate. So a human being could easily figure out that my ex-wife doesn't live here, but the software lacks the complexity to pull together the right data. And a human being could figure out that I used some of my brother's airline points to get a magazine subscription, but the software thinks he might live here, too.

The software can't figure out how to put every single person together with his/her perfect romantic match. It can't figure out exactly what movie you want to watch right this minute. And it doesn't know that I hope K12 dies a permanent death.

It's as simple as GIGO-- garbage in, garbage out. Bad data processed poorly yields no useful results. Waving your laser pointer and intoning, "Look! Compuuuuters! Data! Data made out of numbers!! It's magical!!" will not convince me to cheerfully welcome my New Data Overlords.

Who Puts the Scary in Pearson? Meet Knewton.

Behind the data generating-and-collecting behemoth that is Pearson is a company called Knewton. And here's a video from the November 2012 Education Datapallooza (a name that I did NOT make up-- it was officially bestowed on the event by the Dept of Education, because they are so hip. I believe they also listen to the rap music). In just under ten minutes, Jose Ferreira, Knewton CEO, delivers the clearest picture I've ever seen of the intentions of the Acolytes of Data. (H/T to Anne Patrick.)


He opens with the notion that in the next few decades, we will become a totally data-mined world. There are plenty of reasons to be concerned about that, but that's another post. He may well be right. He believes that has big implications for education, because while everybody else is just collecting data in dribs and drabs, education is the Great River O'Data.

Knewton is now (and remember-- "now" is 2012) collecting millions of data points per day per student. And they can do that because these are students who are plugged into Pearson, and Pearson has tagged every damn thing. And it was at this point that I had my first light bulb moment.

All that aligning we've been doing, all that work to mark our units, our assignments, and, in some places, every single worksheet so that we can show at a glance that these five sentences are tied to specific standards-- all those PD afternoons we spent marking Worksheet #3 as Standard LA.12.B.3.17-- that's not, as some of us have assumed, just the government's hamfisted way of making sure we've toed the line.

It's to generate data.

Worksheet #3 is tagged LA.12.B.3.17, so that when Pat does the sheet, his score goes into the Big Data Cloud as part of the data picture of Pat's work. (If you'd already figured this out, forgive me-- I was never the fastest kid in class.)
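If it helps to see the plumbing, here's a minimal sketch of what that looks like as data. Every field name here is my invention for illustration-- this is not Pearson's or Knewton's actual schema:

```python
# A minimal sketch of one tagged worksheet score becoming one row in the
# data cloud. Field names and structure are invented for illustration,
# not Pearson's or Knewton's actual schema.

worksheet_3 = {"item": "Worksheet #3", "standard": "LA.12.B.3.17"}

def record_attempt(student, item, score):
    # The standards tag is what lets Pat's score on one worksheet be
    # filed alongside millions of other scores under LA.12.B.3.17.
    return {
        "student": student,
        "item": item["item"],
        "standard": item["standard"],
        "score": score,
    }

data_cloud = []  # stand-in for the Big Data Cloud
data_cloud.append(record_attempt("Pat", worksheet_3, 0.8))
print(data_cloud)
```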

Knewton will generate this giant data picture. Ferreira presents this the same way you'd say, "Once we get milk and bread at the store," when I suspect it's really more on the order of "Once we cure cancer by using our anti-gravity skateboards," but never mind. Once the data maps are up and running, Knewton will start operating like a giant educational match.com, connecting Pat with a perfect educational match so that Pat's teacher in Iowa can use the technique that some other teacher used with some other kid in Minnesota. Because students are just data-generating widgets.
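For what it's worth, the matching half of that promise probably looks something like the sketch below-- my guess at the shape of the thing, with invented data, not Knewton's actual algorithm. Find the student somewhere whose per-standard scores look most like Pat's, then prescribe whatever worked on that student:

```python
# A back-of-the-envelope sketch of the "educational match.com" idea:
# find the student whose per-standard scores sit closest to Pat's and
# recommend whatever worked for that student. Invented data and a naive
# nearest-neighbor method -- a guess at the concept, not Knewton's code.
import math

profiles = {
    "Pat (Iowa)":  {"LA.12.B.3.17": 0.4, "LA.12.B.3.18": 0.9},
    "Lee (Minn.)": {"LA.12.B.3.17": 0.5, "LA.12.B.3.18": 0.8},
    "Sam (Texas)": {"LA.12.B.3.17": 0.9, "LA.12.B.3.18": 0.2},
}

def distance(a, b):
    # Euclidean distance between two students' standard-by-standard scores.
    keys = set(a) | set(b)
    return math.sqrt(sum((a.get(k, 0) - b.get(k, 0)) ** 2 for k in keys))

target = profiles["Pat (Iowa)"]
match = min((name for name in profiles if name != "Pat (Iowa)"),
            key=lambda name: distance(target, profiles[name]))
print("Closest data-twin for Pat:", match)  # -> Lee (Minn.)
```

Note what the sketch takes for granted: that two similar score vectors mean two similar kids. That assumption is the whole ballgame.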

Ferreira is also impressed that the data was able to tell him that some students in a class are slow and struggling, while another student could take the final on Day 14 and get an A, and for the five billionth time I want to ask this Purveyor of Educational Revolution, "Just how stupid do you think teachers are?? Do you think we are actually incapable of figuring those sorts of things out on our own?"

But don't be insulted-- it's not just teachers who are stupid, but the students themselves. Knewton imagines a day when they can tell students how they best learn and under what conditions. Will you do best watching videos or reading? "We should be able to tell you what you should have for breakfast [to do well on a test]."

Because human beings are simple linear systems, and if you measure all the inputs, you can predict all the outputs? That seems to be the assumption, and even I, a high school English teacher for crying out loud, know enough about chaos theory and the behavior of complex systems to know that that is a fool's game. (If you want to read more about it, and you should, I highly recommend Chaos by James Gleick.)
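Here's the classic worked example-- the logistic map, which turns up in Gleick's book. It is a fully deterministic system vastly simpler than any child, and yet two starting measurements that differ by one ten-millionth part company completely within a few dozen steps:

```python
# The logistic map, a standard toy example from chaos theory: a simple,
# fully deterministic quadratic rule whose outputs are hopelessly
# sensitive to tiny differences in the measured input.

def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.2, 0.2000001  # "perfectly measured" vs. almost-perfect input
for step in range(1, 41):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}")
# By step 30 or so the two trajectories bear no resemblance to each
# other, even though the starting difference was 0.0000001.
```

If one quadratic equation can defeat "measure the inputs, predict the outputs," good luck with a fourteen-year-old's breakfast.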

Beyond the privacy implications and the human-beings-as-widgets implications and the necessity to tag every damn sentence of every damn assignment so our data overlords may drink their fill-- beyond all that, there are implications for what an education means.

One aspect of becoming an educated person is getting to know yourself-- understanding your strengths and weaknesses, your abilities and deficits, defining your own character, and making choices about how to be in the world as your particular version of a human being.

How, I wonder, do we adjust to software that attempts to do most of that for you? How do you get to know who you are when you've got a software program looking over your shoulder and telling you all about who you are with implacable, inhuman, data-driven assurance? It's a huge question, and one that I'm unsure how to answer. I wish the guys at Knewton shared a little bit of my fear and uncertainty.

UPDATE: Twitter user Barmak Nassirian directed my attention to this article, which provides an even more complete view of exactly how Knewton thinks they can accomplish their goals. It confirms the impression that these are guys who know a lot more about data systems than about carbon-based life forms. It's long-- but it's interesting reading.

"The New Intelligence" by Steve Kolowich. Inside Higher Ed.

Friday, March 21, 2014

In Praise of Non-Standardization

It is hard for me to argue with fans of national standards, because we hold fundamentally different values.

I'm opposed to CCSS, but unlike many other CCSS opponents, I'm opposed to any national standards at all. But it's hard to have that conversation because it comes down to this not-very-helpful exchange:

Standards fan: But if we had national standards, everyone would be on the same page. The system would be standardized. That's a good thing.

Me: No, it's not.

I'm not advocating the destruction of all rules and order. I'm not calling for the Land of Do-As-You-Please. But let me speak in praise of non-standardization.

Standardization is safe. It's predictable. We can walk into any McDonald's in the country and it will be just like any other and we will know exactly what we will get. I am not excited about that prospect. Let me plop you into the center of any mall in the country and defy you to guess where you are. That's not a good thing.

Complete organization and standardization is complete boredom. A canvas painted by Monet is interesting precisely because it is disorganized. There's more of some paint over here, less of the other paint over there. A wall painted by Bob's House Painting is perfectly orderly and organized. It's also flat and featureless and nobody particularly wants to look at it; in fact, once it has dried, the homeowners will break up its monotony by hanging photos or decorations or a print of a Monet painting.

Take a glass of water and drop one drop of food coloring into it. At first it will be a group of stark swirls against a clear background. It will be disorganized, disorderly. It will also be cool, interesting. After a while, it will be completely organized and orderly. And boring and uniform.

Chaos theory and information theory tell us that disorder and destruction are not necessarily best buds-- that, in fact, the emergence of order and the increase of entropy can go hand in hand. Progress and creation arise out of chaos.

We don't have to be all philosophysicsy about this. Look at the arts. Watch the following process repeat over and over and over again:

1) The prevailing standard has become moribund and stultifying.

2) A large group of alternatives suddenly arises, almost simultaneously providing a whole host of exciting new possibilities.

3) Eventually one or two emerge as the "winners."

4) The winners cement their status as the new standard by becoming more orderly, more formalized, more organized (but less energetic).

5) See step 1. Rinse and repeat.

This covers everything from the French Impressionist movement to the rise of varied forms of Rock and Roll and Pop in response to the easy listening of the fifties. Or the arc of the computer software and app industry.


It is not just that the non-standard makes the world beautiful and interesting. It is the non-standard that is necessary for human beings to rise and advance. It is the non-standard that allows us to be our best selves, to express whatever unique blend of human qualities that birth and circumstances bring to us.

The goal of standardization is the exact opposite of what is, I would argue, the business of human life. We exist as human beings to make our mark, to make a difference, to be agents of change, to put our unique fingerprints on the things we touch. The goal of the standardized human is to not make a difference, to not leave a mark, to interact in the world in such a way that it would not have made the slightest difference if some other standardized human had been there in our place.

Some loose standardization greases the wheels of society, gives us a common foundation to develop our individual differences. But to imagine that standardization is in and of itself a high and desirable virtue is to imagine that a foundation is the only thing we need in a house.  So no, I don't see some sort of national standard as a worthy goal.

Standardized Tests Tell Nothing

Testy-stuff experts could discuss all of the following in scholarly-type terms, and God bless them for that. But let me try to explain in more ordinary English why standardized tests must fail, have failed, and will always fail. There's one simple truth that the masters of test-driven accountability must wrestle with, yet fail to even acknowledge:

It is not possible to know what is in another person's head.

We cannot know, with a perfect degree of certainty, what another person knows. Here's why.

Knowledge is not a block of amber.

First, what we call knowledge is plastic and elastic.

Last night I could not for the life of me come up with the name of a guy I went to school with. This morning I know it.

Forty years ago, I "knew" Spanish (although probably not well enough to converse with a native speaker). Today I can read a bunch, understand a little, speak barely any.

I know more when I am rested, excited and interested. I know less when I am tired, frustrated, angry or bored. This is also more true by a factor of several hundred if we are talking about any one of my various skill sets.

In short, my "knowledge" is not a block of immutable amber sitting in constant and unvarying form just waiting for someone to whip out their tape measure and measure it. Measuring knowledge is a little more like trying to measure a cloud with a t-square.

We aren't measuring what we're measuring.

We cannot literally measure what is going on in a student's head (at least, not yet). We can only measure how well the student completes certain tasks. The trick-- and it is a huge, huge, immensely difficult trick-- is to design tasks that could only be completed by somebody with the desired piece of knowledge.

A task can be as simple as a multiple choice question or as involved as an in-depth paper. The same rules apply. I must design a task that could only be completed by somebody who knows the difference between red and blue. Or I must design a task that could only be completed by somebody who actually read and understood all of The Sun Also Rises.

We get this wrong all the time. All. The. Time. We ask a question to check for understanding in class, but we ask it in such a tone of voice that students with a good ear can tell what the answer is supposed to be. We think we have measured knowledge of the concept. We have actually measured the ability to come up with the correct answer for the question.

All we can ever measure, EVER, is how well the student completed the task.

Performance tasks are complicated as hell.

I have been a jazz trombonist my whole adult life. You could say that I "know" many songs-- let's pick "All of Me." Can we measure how well I know the song by listening to me perform it?

Let's see. I'm a trombone guy, so I rarely play the melody, though I probably could. But I'm a jazz guy, so I won't play it straight. And how I play it will depend on a variety of factors. How are the other guys in the band playing tonight? Do I have a good thing going with the drummer tonight, or are our heads in different places? Is the crowd attentive and responsive? Did I have a good day? Am I rested? Have I played this song a lot lately, or not so much? Have I ever played with this band before-- do I know their particular arrangement of the song? Is this a more modern group? Because I'm a traditional (Dixieland) jazz player, and if you start getting all Miles on me, I'll be lost. Is my horn in good shape, or is the slide sticking?

I could go on for another fifty questions, but you get the idea. My performance of a relatively simple task that you intended to use to measure my knowledge of "All of Me" is contingent on a zillion other things above and beyond my knowledge of "All of Me."

And you know what else? Because I'm a half-decent player, if all those other factors are going my way, I'll be able to make you think I know the song even if I've never heard it before in my life.

If you sit there with a note-by-note rubric of how you think I'm supposed to play the song-- or a rubric handed to you on the theory that, even though you're tone-deaf and rhythm-impaired, with rubric in hand you should be able to make an objective assessment-- it's hopeless. Your attempt to read the song library in my head is a miserable failure. You could have found out just as much by flipping a coin. You need to be knowledgeable yourself-- you need to know music, the song, the style-- in order to make a judgment about whether I know what I'm doing or not.

You can't slice up a brain.

Recognizing that performance tasks are complicated and bubble tests aren't, standardized tests seem designed to rule out as many factors as possible.

In PA, we're big fans of questions that ask students to define a word based on context alone. For these questions, we provide a selection that uses an obscure meaning of an otherwise familiar word, so that we can test students' context clue skills by making all other sources of knowledge counter-productive.

Standardized tests are loaded with "trick" questions, which I am of course forbidden to reveal, because part of the artificial nature of these tasks is that they must be handled with no preparation and within a short timespan. But here's a hypothetical that I think comes close.

We'll show a small child three pictures (since they are taken from the National Bad Test Clip Art directory, there's yet another hurdle to get over). We show a picture of a house, a tent and a cave. We ask the child which is a picture of a dirt home. But only the picture of the house has a sign that says, "Home Sweet Home" over the door. Want to guess which picture a six-year-old will pick? We're going to say the child who picked the house failed to show understanding of the word "dirt." I'd say the test writers failed to design an assessment that will tell them whether the child knows the meaning of the word "dirt" or not.

Likewise, reading selections for standardized tests are usually chosen from The Grand Collection of Boring Material That No Live Human Being Would Ever Choose To Read. I can only assume that the reasoning here is that we want to see how well students read when they are not engaged at all. If you're reading something profoundly boring, then only your reading skills are involved, and no factors related to actual human engagement.

These are performance task strategies that require the student to only use one slice of brain while ignoring all other slices, an approach to problem solving that is used nowhere, ever, by actual real human beings.

False Positives, Too

The smartest students learn to game the system, which invariably means figuring out how to complete the task without worrying about what the task pretends to measure. For instance, on many performance tasks for a reading unit, Sparknotes will provide just as much info as the students need. Do you pull worksheets and unit quizzes from the internet? Then your students know the real task at hand is "Find Mr. Bogswaller's internet source for answer keys."

Students learn how to read teachers, how to divine expectations, what tricks to expect, and how to generally beat the system by providing the answers to the test without possessing the knowledge that the test is supposed to test for.

The Mother of All Measures

Tasks, whether bubble tests or complex papers, may assess for any number of things, from students' cleverness to how well-rested they are. But they almost always test one thing above all others--

Is the student any good at thinking like the person who designed the task?

Our students do Study Island (an internet-based tutorial program) in math classes here. They may or may not learn much math on the island, but they definitely learn to think the same way the program writers think.

When we talk about factors like the colossal cultural bias of the SAT, we're talking about the fact that the well-off children of college-educated parents have an edge in thinking along the same lines as the well-off college-educated writers of the test.

You can be an idiot, but still be good at following the thoughty paths of People in Charge. You can be enormously knowledgeable and fail miserably at thinking like the person who's testing you.

And the Father of All Measures

Do I care to bother? When you try to measure me, do I feel even the slightest urge to cooperate?

Standardized tests are a joke

For all these reasons, standardized tests are a waste of everybody's time. They cannot measure the things they claim to measure any better than tea leaves or rice thrown on the floor.

People in the testing industry have spent so much time convincing themselves that aspects of human intelligence can be measured (and then using their own measurements of measurement to create self-justifying prophecies) that they've lost track of that simple fact:

You cannot know what's in another person's head.

What goes on in my head is the last boundary I have that you cannot cross. I can lie to you. I can fake it. I can use one skill to substitute for another (like that kid in class who can barely read but remembers every word you say). Or I may not be up to the task for any number of reasons.

Standardized test fans are like people who measure the circumference of a branch from the end of a tree limb and declare they now have an exact picture of the whole forest. There are many questions I want to ask (in a very loud voice that might somewhat resemble screaming) of testmakers, but the most fundamental one is, "How can you possibly imagine that we are learning anything at all useful from the results of this test?"

Thursday, March 20, 2014

Pearson's Vision for the World

Mercedes Schneider recently directed the blogosphere's attention to a Pearson paper from February of 2014, "Impacts of the Digital Ocean on Education." If you're wondering just what Pearson (and by extension, the various government bodies that they own) envisions for our collective future, this document sheds plenty of light.

There are 44 pages, and I'm not going to address them all at once, for a variety of reasons, not the least of which is that the document is exceptionally depressing. Let's just look at the front end today.

The intro page lists the actual authors (we'll get to them another day), a paragraph "About Pearson" ("the world's leading learning company") and an introduction to the series. "Sir Michael Barber, on behalf of Pearson, is commissioning a series of independent, open, and practical publications containing new ideas and evidence about what works in education."  Since both authors of this paper work for Pearson, I'm not sure what "independent" means in this context. There is also a Creative Commons license.

After acknowledgements and a table of contents, we arrive at the Foreword by Sir Michael Barber. If you don't know about Barber, you should. A top honcho at Pearson, he has also worked as head of global education practice at McKinsey and was an advisor to PM Tony Blair. Three Moms Against Common Core ranked him #7 of the Ten Scariest People in Education Reform.

Here's Barber's opening paragraph:

The "digital ocean" that this paper introduces is coming. Just as "big data" is transforming other industries such as insurance, finance, retail, and professional sport, in time, it will transform education. And when it does, it will resolve long-standing dilemmas for educators and enable that long-term aspiration for evidence-informed policy at every level, from classroom to the whole system, to be realized.

Got that? Big data is coming, and it will save us all. The "dilemmas," Barber explains, are the limits of formal testing in gathering data.

Barber reminisces about his time in the UK DOE and the groundbreaking national data system he pioneered there. But that was then. The now offers more data-riffic promises of awesomeness:

Once much of teaching and learning becomes digital, data will be available not just from once-a-year tests, but also from the wide-ranging daily activities of individual students. Teachers will be able to see what students can and cannot do, what they have learned and what they have not, which sequences of teaching have worked well and which haven't - and they will be able to do so in real time.

Seriously? Barber's promise is that we will be able to observe our students in real time, working at a variety of tasks, and see what they can and can't do. Really?? Am I living in some magical land? Because THIS IS EXACTLY WHAT I DO RIGHT NOW!!

Maybe I'm extra fortunate. Maybe somewhere there are teachers who work from some bunker in the back of the room where they can neither see nor hear the students in their classrooms. Or maybe they are asking the students questions but the students record their answers in special lockboxes that can't be opened for 126 hours. Or maybe the only way these poor benighted teachers can collect data is by giving one or two mammoth tests per year that provide results without specifics and not until a year later oh no wait, that's what Pearson's various acolytes are trying to get us to do NOW!

Barber assures us that personalized learning at scale will be possible, and again I want to point out that we already have a system that can totally do that (though of course the present system does not provide corporations such as Pearson nearly enough money). I will not pretend that the traditional US public ed system always provides the personalized learning it should, but when reformy types suggest that's a reason to scrap the whole system, I wonder if they also buy a new car every time the old car runs out of gas (plus, in that metaphor, government is repeatedly pouring sand into the gas tank).

But no. There will have to be revolution:

...schools will need to have digital materials of high quality, teachers will have to change how they teach and how they themselves learn... 

This shtick I recognize, because it is as old as education technology. Every software salesman who ever set foot in a school has used this one-- "This will be a really great tool if you just change everything about how you work." No. No, no, no. You do not tell a carpenter, "Hey, newspaper is a great building material as long as you change your expectations about how strong and protective a house is supposed to be."

You pick a tool because it can help you do the job. You do not change the job so that it will fit the tool. This backward thinking is the heart of what's wrong with the CCSS. The Core are not about defining the most critical qualities of an excellent education. The Core are about codifying the qualities of education that best fit our measuring tools.

Barber praises the authors of the paper for their "aspirational vision" of what success in schools would look like.

They see teaching, learning and assessment as different aspects of one integrated process, complementing each other at all times, in real time;

To which I reply, "Wow! Amazing! Do they also envision water that is wet? Wheels that are round? That is some real visionary shit there!" (To be clear, this is what every competent teacher already does!) Of course, they also see this:

sophisticated student profiles allowing teachers and students to make informed and precise decisions about next steps

So bring on that Big Brother stuff. Oh, and they see this, too:

more complex educational outcomes, such as inter- and intra-personal skills, becoming assessable, teachable, and learnable

So in the Brave New Pearson World, we will not only turn you into our idea of an educated person, but our idea of a good and sociable person. We will let you know which interpersonal skills you must learn, and we will tell you whether you are an acceptable human being or not. Well, actually, not tell YOU so much-- the people who really want to know are your future employers and landlords and bank officers and health insurers.

Barber acknowledges that there's much to do to make this "a reality across education systems." Science, data, validity, knowledge processes, plus structural and cultural changes at systemic and pedagogical levels. Barber admits that this will be difficult, but there's this:

Be that as it may, the aspiration to meet these challenges is right

Make no mistake-- Pearson's aspiration is to remake the world and the people who live in it into the form they believe is Right. It's at this point that Pearson and its acolytes cross the line from simply selling a profitable program of educational malpractice to mounting what resembles an immoral crusade to circumvent the governments, institutions, and freedoms of the human beings who live on this planet.

Wednesday, March 19, 2014

No Good Metrics??!!

The Ed Week account of a snippy meeting between Randi Weingarten, Dennis Van Roekel, and the CCSSO included one quote that came roaring out at me. Randi and Dennis, bless their hearts, were just trying to deliver the news that the CCSS are not playing well in Peoria. The CCSSO, co-sponsors of the standards, were just not having it. The unions should get their people in line. The public wasn't getting the correct picture.

Melody Schopp, South Dakota Ed Secretary, was bemoaning the lack of press coverage for positive CCSS success stories. Mike Cohen, from Achieve (the accountable-to-nobody organization that helped birth, groom, and market CCSS), chimed in that too much of the positive CCSS spin was anecdotal, and then let loose with this gem.

"We don't have any kind of good metrics" for measuring common-core implementation's success.

What??!! Really? Because I am pretty sure the sales pitch for CCSS involved the following:

1) See how bad our test scores are?!

2) Creating and installing CCSS will make our test scores not suck.

3) The success of CCSS will be obvious because test scores will rise.

The PARCC and SBA tests have been sold specifically on their merits as a metric for measuring the success of the CCSS. If that's not what they're measuring, why the heck are we bothering with them??

Not that I'm necessarily disagreeing with Cohen. It's possible there is no metric that could measure the success of CCSS. I can think of two possible reasons:

1) It is hard to measure the height of magical unicorns or Bigfoot.

2) It is hard to tell when child abuse has been "successful."

But that's not the most important point-- the most important point is this:

Mr. Cohen-- if we have no metric for measuring the success of CCSS, that means we never had a metric for measuring the need for it, either.

Duncan Checks in with Race Results

The US DOE released reports Wednesday, March 19, to update us on how well the Race to the Top winners are doing (because in US education, we only want some states to be winners). The full collection of reports is here, but Arne wanted to let everyone know about his four superstars in Top Racing.

This year is the final year for implementing RTTT, and at this point we might expect to see some payoff from the investment of $4 Billion-with-a-B. According to Michele McNeil at EdWeek, Duncan says we are seeing those investments "enter the classroom" despite some "contention and chaos" in various states.

The area of improvement that needs the most improvement in its area is, apparently, teacher improvement. For improvement in this area, Duncan singled out North Carolina and Delaware.

This is astonishing. North Carolina has become the poster child for teacher beat-downs in the Eastern US, a state where teachers are leaving by the busload, floundering in debt after years without a raise, and facing the end of any sort of job protection. This is the state where new teacher pay went up, but not anybody else's. This is the state where districts have been directed to offer their top 25% of teachers $500 in exchange for giving up tenure. This is the state whose leaders have seriously considered putting a twenty-year cap on the length of a teacher's career. This is the state that Virginia has started poaching teachers from simply by offering a decent wage and work conditions.

If this is a state that matches Duncan's idea of how to improve the profession, heaven help us all.

Beyond these four awesome examples of how to up teachers' games, US DOE displayed concern over some other states.

Ohio cannot interest districts in the state's great ideas. Florida's new evaluation system didn't give any different results from its old system, so clearly it's not working, because a new evaluation system apparently should show that there are lots of lousy teachers in the state. DC apparently no longer basks in the warm glow of Rhee-initiated teacher fixiness. Georgia is the very back of the pack-- so far back that they might lose their 9-million-dollar grant.

The DOE is allowing freebie extensions for a fifth year; eleven of the twelve have applied so far.

The full report offers state-by-state reports that will take a little time and attention to unpack. I look forward to the data nuggets contained therein.