Thursday, November 19, 2015

More Evidence That Tests Measure SES

Want still more evidence of the connection between socio-economic status and standardized test results? Twitter follower Joseph Robertshaw pointed me at a pair of studies by Randy Hoover, PhD, at the Department of Teacher Education, Beeghly College of Education, Youngstown State University.

Hoover is now a professor emeritus, but he spent much of his career studying the validity of standardized testing and searching for a valid and reliable accountability system. He now runs a website called the Teacher Advocate, and it's worth a look.

Hoover released two studies-- one in 2000, and one in 2007-- that looked at the validity of the Ohio Achievement Tests and the Ohio Graduation Test, and while there are no surprises here, you can add these to your file of scientific debunking of standardized testing. We're just going to look at the 2007 study, which was in part intended to check on the results of the 2000 study.

The bottom line of the earlier study appears right up front in the first paragraph of the 2007 paper:

The primary finding of this previous study was that student performance on the tests was most significantly (r = 0.80) affected by the non-school variables within the student social-economic living conditions. Indeed, the statistical significance of the predictive power of SES led to the inescapable conclusion that the tests had no academic accountability or validity whatsoever.

The 2007 study wanted to re-examine the findings, check the fairness and validity of the tests, and draw conclusions about what those findings meant to the Ohio School Report Card.

So what did Hoover find? Well, mostly that he was right the first time. He does take the time to offer a short lesson in statistical correlation analysis, which will be helpful if, like me, you are not a research scholar. Basically, the thing to remember is that a perfect correlation is 1.0 (or -1.0). So, getting punched in the nose correlates at about 1.0 with feeling pain.
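If you want to see where an r value actually comes from, here's a minimal sketch with invented numbers (this is not Hoover's district data, just the punched-in-the-nose example made concrete):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data: punches to the nose vs. reported pain -- a near-perfect correlation
punches = [0, 1, 2, 3, 4, 5]
pain    = [0, 2, 4, 5, 8, 10]
print(round(pearson_r(punches, pain), 2))  # prints 0.99
```

The closer that number sits to 1.0 (or -1.0), the better one variable predicts the other-- which is the whole point of Hoover's 0.78 and 0.80 figures.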

Hoover set out to find the correlation between what he calls the students' "lived experience" and district-level performance; he found it to be 0.78. Which is high.

If you like scatterplot charts (calling Jersey Jazzman), then Hoover has some of those for you, all driving home the same point. For instance, here's one looking at the percent of economically disadvantaged students as a predictor of district performance.
[Scatterplot omitted: percent of economically disadvantaged students as a predictor of district performance]
That's an r value of -0.75, which means you can do a pretty good job of predicting how a district will do based on how few or many economically disadvantaged students there are.

Hoover crunched together three factors to create what he calls a Lived Experience Index that shows, in fact, a 0.78 r value. Like Chris Tienken, Hoover has shown that we can pretty well assign a school or district a rating based on their demographics and just skip the whole testing business entirely.
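The "skip the testing" point can be made concrete with a toy least-squares fit. All the numbers below are invented for illustration (they are not Hoover's data), but with correlations this strong, demographics alone make a serviceable prediction of a district's rating:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Invented districts: (% economically disadvantaged, performance index)
pct_disadvantaged = [5, 15, 30, 45, 60, 80]
performance       = [98, 92, 85, 74, 66, 55]

a, b = fit_line(pct_disadvantaged, performance)
# "Rate" a district we never tested, from its demographics alone
predicted = a * 50 + b
print(round(predicted, 1))
```

No test administered, no student bubbled in an answer sheet-- and the line still gets you most of the way to the district's grade.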

Hoover takes things a step further, and reverse-maths the results to a plot of results with his lived experience index factored out-- a sort of crude VAM sauce. He has a chart for those results, showing that there are poor schools performing well and rich schools performing poorly. Frankly, I think he's probably on shakier ground here, but it does support his conclusion that the Ohio school accountability system of the time was "grossly misleading at best and grossly unfair at worst," a system that "perpetuates the political fiction that poor children can't learn and teachers in schools with poor children can't teach."

That was back in 2007, so some of the landscape such as the Ohio school accountability system (well, public school accountability-- Ohio charters are apparently not accountable to anybody) has changed, along with many reformster advances of the past eight years.

But this research does stand as one more data point regarding standardized tests and their ability to measure SES far better than they measure anything else. 

Wednesday, November 18, 2015

Gates Takes Aim at Teacher Education

As noted today at Education Week, the Gates Foundation has set its sights on teacher preparation programs. The Bill & Melinda Gates Foundation is ready to drop $34 million cool ones on "cooperative initiatives designed to improve teacher-preparation programs' overall effectiveness."

So what does that mean? Good news? Bad news?

The three-year grants are based on four principles:

* developing strong partnerships with school districts
* giving teacher-candidates opportunities to refine a specific set of teaching skills
* using data for improvement and accountability
* ensuring that faculty mentors are effective at guiding novices into the profession

The first sounds great. The second sounds... well, I don't know. Exactly what specific set are we talking about, and what does that even mean? Becoming an interrogatory specialist? Learning to be excellent at teaching fractions? I'm worried that the Gates tendency to believe that all complex activities can be broken down into disconnected, context-free skills is at play here, in which case I'm doubting this will be useful.

Third? Well, if I thought "data" meant what I mean by "data," I'd think this was fine. I use data every minute of every day. But since this is Gates, I'm afraid that "data" means "results from a computer-based bunch of competency-based-baloney" or even "more of the useless data from those dreadful Big Standardized Tests."

Fourth point. Yes, excellent idea, if in fact you have any idea of how to tell that mentors are effective at guiding etc etc. Which I'm betting you don't, or worse, you have some sort of "based on student test scores VAM sauce baloney," which won't do anyone any good.

But hey, maybe the recipients of the Gates money will give us a clue about where this is headed.

Grantee #1 is TeacherSquared (you know-- a place that makes teacher teachers) which is mostly "nontraditional" preparation programs. In fact, it's mostly RelayGSE, a fake teacher school set up by charters so that non-teachers with a little experience could teach non-teachers with no experience how to be teachers. So that is not a good sign.

#2 Texas Tech University, "which will head the University-School Partnerships for the Renewal of Educator Preparation National Center" which is six Southern universities welded together. Lord only knows what that will look like.

#3 Massachusetts Department of Education, which will head up an EPIC (Elevate Preparation, Impact Children) center to work with all the teacher ed programs in the state. This is just going to be confusing, because the EPIC acronym has been used before-- including by charter schools in Massachusetts (Effective Practice Incentive Community). But the Massachusetts DoE has a mixed track record on reformy issues, so we'll see.

#4 National Center for Teacher Residencies, which is promoting a full-year residency model which has been popping up around the country and which I think could actually be a great idea.

TeachingWorks at University of Michigan will be a coordinating hub for all the cool things these other grantees will come up with.

According to EdWeek's Stephen Sawchuk, Gates wants each of these "centers" to crank out 2,500 teachers per year, which is-- well, that is huge. I'm pretty sure that's more than most entire states produce. It is a grand total of 10,000 teachers. Per year. At a time when enrollment in teacher education programs is plummeting. The USPREPNC would have to get upwards of 400 teacher-grads per year out of each of its six member universities. I mean, we can turn this number around many ways, and from every angle, it's a huge number. Of the four grantees, only the state of Massachusetts seems likely to handle that kind of capacity.

Want more bad signs? Here's a quote from Vicki Phillips:

“The timing is great because of having great consistent, high standards in the country and more meaningful, actionable teacher-feedback systems and some clear definitions about what excellence in teaching looks like,” said Vicki Phillips, the Gates Foundation’s director of college-ready programs.

In other words, this is a way to drive Common Core up into teacher education programs, where it can do more damage.


Anissa Listak of the NCTR points out that making sure clinical faculty (i.e. co-operating teachers) are top notch will be a game changer, and I don't disagree. But it sidesteps the question of how the top notch faculty will be identified, and it really sidesteps the issue of how the program will find 10,000 master teachers who want to share their classes with a student teacher for a whole year-- especially in locations where test scores will reflect on their own teacher ratings (including, perhaps, the ratings that marked them as "qualified" to host a teacher-resident in the first place).

The Gates has identified a need here-- evaluating teacher preparation programs. Nobody is doing it (well, nobody except the scam artists at NCTQ who do it by reading commencement programs and syllabi), and if we had a legitimate method of measuring program quality, it could be helpful to aspiring teachers. But we don't, and it's not clear that any of these grantees have a clue, either.

It all rests on knowing exactly how to measure and quantify teacher excellence. With data. And boy, there's no way that can end badly.

Will the Gates money be well-spent? I'm not optimistic-- particularly not with an outfit like Relay GSE on the list of recipients. And the Gates has a bad history of using grants to push a narrow and unbending agenda that it has already formed rather than truly exploring an issue or trying to get ideas from people who might know something. In other words, if this is all just a way for Gates to impose his own ideas of what teacher training should look like, then it's likely to be as wasteful and destructive as his championing of Common Core.

Evil L.A. Teacher Unions

The Center for Education Reform is a charter promotion group, perhaps one of the most cynical and self-serving of the reformster groups. Search their website for information or ideas about education-- the actual pedagogy and instruction in a classroom-- and you will find nothing, because the Center has no actual interest in education.

Check out their board of directors-- you will find a combination of money managers and charter school operators. That is where the Center's interest lies-- in getting more money into more charters.

And what stands in the way of these corporate interests making a better, bigger buck? Well, those damn unions, of course. The Center may not have any section devoted to actually educating children, but they have a whole tab devoted to those damn unions, and here's What They Believe:

We believe that the special interests that draw funds from the tax dollars funding public education, and that have become an intransient [sic-- pretty sure they mean "intransigent," though "intransient" as in "won't move away to some other place" might suit them as well] force in political and policy circles, have outlived the usefulness of the associations they once had and have become obstacles to programs and activities that can best and most judiciously serve children. Such groups—from teachers unions, to the associations of administrators, principals, school boards and hybrids of all (e.g., “The Blob”)—should be free to organize but without access to the dollars that are spent to fund schools and should be free to recruit but not mandate members, but they should not have a public stream of money that permits the dues of members to subsidize their defense of the status quo.

The Center is currently excited with itself because it placed a quote in a Wall Street Journal article. The piece (behind a paywall) discusses the desire of some charter teachers to unionize. Or, as the Center headlined it in their regular email, "Teachers at Successful Los Angeles Charter School Organization Being Manipulated by Union Leaders."

The charter in question is the Alliance charter, a chain run by rich folks like a former mayor of LA and the owner of the Atlanta Hawks. Alliance is a big gun in the LA charter scene, and seventy of its 500-person teacher workforce started pushing for a union last spring.

"We believe that when teachers have a respected voice in policymaking it leads to school sustainability and teacher retention," said Elana Goldbaum, who teaches history at Gertz-Ressler High School, a member of the Alliance group. "We have a lot of talent and we want to see that stay. We want to see our teachers be a part of the decision-making and we want to advocate for our students and ourselves."

The union movement has sparked controversy, with the LA union claiming interference on the part of charter management and Alliance saying the teachers feel harassed by the union. The struggle escalated at the end of October when the California Public Employment Relations Board sued Alliance for engaging in anti-union activity.

All of this, somehow, is the evil union pulling the wool over the eyes of the poor, hapless teachers.

Look, the big unions are no angels, and the big-city unions are probably the least angelic of all. But you know that teachers need some kind of union when the charters are letting loose with baloney like this, the quote from the WSJ of which the Center is so proud:

“It’s not surprising that teachers that work at charter schools would not want to join a union,” said Alison Zgainer, executive vice president of the Center for Education Reform, a pro-charter organization in Washington, D.C. “They want more autonomy in the classroom, and being part of a union you lose that autonomy.”

I guess Zgainer is referring to "autonomy" as defined by charter operators-- the autonomy to be told you must work long hours over a long week. The autonomy to have instruction strictly dictated. The autonomy to be paid as little as the charter wants to pay you. The autonomy to be fired any time the charter feels like it. The autonomy to be trained in "no excuse" techniques that are just as prescriptive of teacher behavior as they are of student behavior. That autonomy.

The autonomy that business-driven charters care about is the autonomy of management. Their dream is the same dream as that of the 19th century robber barons who fought unions tooth and nail. It's a dream where a CEO sits in his office and runs his company with complete freedom to hire and fire, raise and lower salaries, and change the work hours (or any other terms of employment) at will. It's a dream of a business where the CEO is a visionary free to seek his vision (and profit from it) without having anyone ever say "no" to him.

That's the autonomy that folks like the Center for Education Reform are interested in.

In the CEO-centered vision of school, unions are bad. Unions are evil obstacles that dare to make rules by which the CEO must abide (they are often aided by Big Government, which also dares to interfere with the CEO). I think these folks believe in the myth of the Hero Teacher because it echoes the myth of the Hero CEO-- a bold genius who makes the world a better place by pushing aside all obstacles, including the people who don't recognize his genius, until he arrives at the mountain top, loved and praised by all the Little People who are grateful that he saved them. Compromise and collaboration are for the weak, and unions are just weaklings who want to drag down the Hero CEO because they are jealous of his awesomeness and afraid that their undeserved power will be stripped from them by his deserving might.

In this topsy-turvy world, unions must be crushed not just because they set up rules to thwart the Hero CEO, but because they are holding captive all the teachers who really want to give themselves body and soul to the Hero CEO's genius vision, but the union won't let them. Damn you, evil unions.

This does not explain all charter supporters (it does not, for instance, reflect the motivations of the social justice warrior school of charter support). But it sure does explain some, even as it is oddly reminiscent of "We'll be greeted as liberators" and the tantrums of any three-year-old. But I hope that the Center for Education Reform has to live impotently with the threat of evil unions for years to come.

Tuesday, November 17, 2015

Accelerated Reader's Ridiculous Research

If you are not familiar with Renaissance Learning and their flagship product, Accelerated Reader, count yourself lucky.

Accelerated Reader bills itself as a reading program, but it would be more accurate to call it a huge library of reading quizzes, with a reading level assessment component thrown in. That's it. It doesn't teach children how to read; it just puts them in a computerized Skinner box that feeds them points instead of pellets for performing some simple tasks repeatedly.

Pick a book (but only one on the approved AR list). Read it. As soon as you've read it, you can take the computer quiz and earn points. AR is a great demonstration of the Law of Unintended Consequences as well as Campbell's Law, because it ends up teaching students all sorts of unproductive attitudes about reading while twisting the very reading process itself. Only read books on the approved list. Don't read long books-- it will take you too long to get to your next quiz to earn points. If you're lagging in points, pick short books that are easy for you. Because the AR quizzes are largely recall questions, learn what superficial features of the book to read for and skip everything else. And while AR doesn't explicitly encourage it, this is a program that lends itself easily to other layers of abuse, like classroom prizes for hitting certain point goals. Remember kids-- there is no intrinsic reward or joy in reading. You read only so that somebody will give you a prize.

While AR has been adopted in a huge number of classrooms, it's not hard to find folks who do not love it. Look at some articles like "3 Reasons I Loathe Accelerated Reader" or "Accelerated Reader: Undermining Literacy While Undermining Library Budgets" or "Accelerated Reader Is Not a Reading Program" or "The 18 Reasons Not To Use Accelerated Reader." Or read Alfie Kohn's "A Closer Look at Reading Incentive Programs." So there's a wide consensus that the Accelerated Reader program gets some very basic things wrong about reading.

But while AR sells itself to parents and schools as a reading program, it also does a huge amount of work as a data mining operation. Annually the Renaissance people scrape together the data that they have mined through AR and they issue a report. You can get at this year's report by way of this website.

The eyebrow-raising headline from this year's report is that a mere 4.7 minutes of reading per day separate the reading stars from the reading goats. Or, as US News headlined it, "Just a Few More Minutes Daily May Help Struggling Readers Catch Up." Why, that's incredible. So incredible that one might conclude that such a finding is actually bunk.

Now, we can first put some blame on the media's regular issues with reporting sciency stories. US News simply ran a story from the Hechinger Report, and when Hechinger originally ran it, they accompanied it with the much more restrained heading "Mining online data on struggling readers who catch up: A tiny difference in daily reading habits is associated with giant improvements." But what does the report actually say?

I think it's possible that the main finding of this study is that Renaissance is a very silly business. I'm no research scientist, but here are several reasons that I'm pretty sure that this "research" doesn't have anything useful to tell us.

1) Renaissance thinks reading is word encounter.

The first chunk of the report is devoted to "an analysis of reading practice." I have made fun of the Common Core approach of treating reading as a set of contextless skills, free-floating abilities that are unrelated to the content. But Renaissance doesn't see any skills involved in reading at all. Here's their breakdown of reading practice:

* the more time you practice reading, the more vocabulary words you encounter
* students who spend more time on our test-preppy program do better on SBA and PARCC tests
* students get out of the bottom quartile by encountering more words
* setting goals to read more leads to reading more

They repeatedly interpret stats in terms of "number of words," as if simply battering a student with a dictionary would automatically improve reading. 

2) Renaissance thinks PARCC and SBA are benchmarks of college and career readiness

There is no evidence to support this. Also, while this assumption pops up in the report, there's a vagueness surrounding the idea of "success." Are they also using success at their own program as proof of growing student reading swellness? Because that would be lazy and unsupportable, an argument that the more students do AR activities, the better they get at AR activities.

No, if you want to prove that AR stuff makes students better at reading, you'll need a separate independent measure. And there's no reason to think that the SBA or PARCC constitute valid, reliable measures of reading abilities.

Bottom line: when Renaissance says that students "experienced higher reading achievement," there's no reason to believe that the phrase means anything.

3) About the time spent.

Much ado is made in the report about the amount of time a student spends on independent reading, but I cannot find anything to indicate how they are arriving at these numbers. How exactly do they know that Chris read fifteen minutes every day but Pat read thirty? There are only a few possible answers, and they all raise huge questions.

In Jill Barshay's Hechinger piece, the phrase "an average of 19 minutes a day on the software" crops up. But surely the independent reading time isn't based on time on the computer-- not when so much independent reading occurs elsewhere.

The student's minutes reading could be self-reported, or parent-reported. But how can we possibly trust those numbers? How many parents or children would accurately report, "Chris hasn't read a single minute all week."

Or those numbers could be based on independent reading time as scheduled by the teacher in the classroom, in which case we're really talking about how a student reads (or doesn't) in a very specific environment that is neither chosen nor controlled by the student. Can we really assume that Chris reading in his comfy chair at home is the same as Chris reading in an uncomfortable school chair next to the window?

Nor is there any way that any of these techniques would consider the quality of reading-- intensely engaged with the text versus staring in the general direction of the page versus skimming quickly for basic facts likely to be on a multiple choice quiz about the text. 

The only other possibility I can think of is some sort of implanted electrodes that monitor Chris's brain-level reading activity, and I'm pretty sure we're not there yet. Which means that anybody who wants to tell me that Chris spent nineteen minutes reading (not twenty, and not eighteen) is being ridiculous.

(Update: The AR twitter account directed me to a clarification of sorts on this point. The truth is actually worse than any of my guesses.)

4) Correlation and causation

Barshay quotes University of Michigan professor Nell Duke, who points out what should not need to be pointed out-- correlation is not causation and "we cannot tell from this study whether the extra five minutes a day is causing kids to make dramatic improvements." So it may be

that stronger readers spend more time reading. So we don’t know if extra reading practice causes growth, or if students naturally want to read a few minutes more a day after they become better readers. “It is possible that some other factor, such as increased parental involvement, caused both,” the reading growth, and the desire to read more, she wrote.

But "discovering" that students who like to read tend to read more often and are better at it-- well, that's not exactly big headline material.

5) Non-random subjects

In her coverage of last year's report, Barshay noted a caveat. The AR program is not distributed uniformly across the country, and in fact seems to skew rural. So while some demographic characteristics do at least superficially match the national student demographics, it is not a perfect match, and so not a random, representative sampling.

So what can we conclude?

Some students, who may or may not be representative of all students, and who read for some amount of time that we can't really substantiate, tend to read at some level of achievement that we can't really verify.

A few things we can learn

The data mining that goes into this report does generate some interesting lists of reading materials. John Green is the king of high school readers, and all the YA dystopic novels are still huge, mixed in with the classics like Frankenstein, Macbeth, The Crucible, and Huck Finn. Scanning the lists also gives you an idea of how well Renaissance's proprietary reading level software ATOS works. For instance, The Crucible scores a lowly 4.9-- lower than The Fault in Our Stars (5.5) or Frankenstein (12.4), but still higher than Of Mice and Men (4.5). Most of the Diary of a Wimpy Kid books come in at the mid-5.somethings. So if the wimpy kid books are too tough for your students, hit them with Lord of the Flies, which is a mere 5.0 even.

Also, while Renaissance shares the David Coleman-infused Common Core love of non-fiction ("The majority of texts students encounter as they progress through college or move into the workforce are nonfiction"), the AR non-fiction collection is strictly articles. So I guess there are no book length non-fiction texts to be read in the Accelerated Reader 360 world.

Is the reading tough enough?

Renaissance is concerned about its discovery that high school students are reading work that doesn't rank highly enough on the ATOS scale. By which they mean "not up to the level of college and career texts." It is possible this is true. It is also possible that the ATOS scale, the scale that thinks The Catcher in the Rye is a 4.7, is messed up. Just saying.

The final big question 

Does the Accelerated Reader program do any good?

Findings from prior research have detected a tipping point around a comprehension level of about 85% (i.e., students averaging 85% or higher on Accelerated Reader 360 quizzes taken after reading a book or article). Students who maintain this level of success over a quarter, semester, or school year are likely to experience above-average achievement growth.

Remember that "student achievement" means "standardized test score." So what we have is proof that students who do well on the AR battery of multiple choice questions also do well on the battery of PARCC and SBA standardized test questions. So at least we have another correlation, and at most we have proof that AR is effective test prep.

Oddly enough, there is nothing in the report about how AR influences joy, independence, excitement, or lifelong enthusiasm for reading. Nor does it address the use of reading to learn things. Granted, that would all be hard to prove conclusively with research, but then, this report is 64 pages of unsupported, hard-to-prove assertions, so why not throw in one more? The fact that the folks at Renaissance Learning found some results important enough to fake but other results not even worth mentioning-- that tells us as much about their priorities and their program as all their pages of bogus research.

Monday, November 16, 2015

USED Goes Open Source, Stabs Pearson in the Back for a Change

The United States Department of Education announced at the end of last month its new #GoOpen campaign, a program in support of using "openly licensed" aka open source materials for schools. Word of this is only slowly leaking into the media, which is odd, because unless I'm missing something here, this is kind of huge. Open sourced material does not have traditional copyright restrictions and so can be shared by anybody and modified by anybody (to really drive that point home, I'll link to Wikipedia).

Is the USED just dropping hints that we are potentially reading too much into? I don't think so. Here's the second paragraph from the USED's own press release:

“In order to ensure that all students – no matter their zip code – have access to high-quality learning resources, we are encouraging districts and states to move away from traditional textbooks and toward freely accessible, openly-licensed materials,” U.S. Education Secretary Arne Duncan said. “Districts across the country are transforming learning by using materials that can be constantly updated and adjusted to meet students’ needs.”

Yeah, that message is pretty unambiguous-- stop buying your textbooks from Pearson and grab a nice online open-source free text instead.

And if that still seems ambiguous, here's something that isn't-- a proposed rules change for competitive grants. 

In plain English, the proposed rule "would require intellectual property created with Department of Education grant funding to be openly licensed to the public. This includes both software and instructional materials." The policy parallels similar policies in other government departments.

This represents such a change of direction for the department that I still suspect there's something about this I'm either not seeing or not understanding. We've operated so long under the theory that the way government gets things done is to hand a stack of money to a private company, allowing them both to profit and to maintain their corporate independence. You get federal funds to help you develop a cool new idea, then you turn around and market that cool idea to make yourself rich. That was old school. That was "unleashing the power of the free market."

But imagine if this new policy had been the rule for the last fifteen years. If any grant money had touched the development of Common Core, the standards would have been open source, free and editable to anyone in the country. If any grant money touched the development of the SBA and PARCC tests, they would be open and editable for every school in America. And if USED money was tracked as it trickled down through the states-- the mind reels. If, for instance, any federal grant money found its way to a charter school, all of that school's instructional ideas and educational materials would have become the property of all US citizens.

As a classroom teacher, I find the idea of having the federal government confiscate all my work because federal grant money somehow touched my classroom-- well, that's kind of appalling. But I confess-- the image of Eva Moskowitz having to not only open her books but hand over all her proprietary materials to the feds is a little delicious.

Corporations no doubt know how to build firewalls that allow them to glom up federal money while protecting intellectual property. And those that don't may just stop taking federal money to fuel their innovation-- after all, what else is a Gates or a Walton foundation for?

And realistically speaking, this will not have a super-broad impact because it refers only to competitive grants, which account for about $3 billion of the $67 billion that the department throws around. 

So who knows if anything will actually come of this. Still, the prospect of the feds standing in front of a big rack of textbooks and software published by Pearson et al and declaring, "Stop! Don't waste your money on this stuff!" Well, that's just special.

And in case you're wondering if this will survive the transition coming up in a month, the USED also quotes the hilariously-titled John King:

“By requiring an open license, we will ensure that high-quality resources created through our public funds are shared with the public, thereby ensuring equal access for all teachers and students regardless of their location or background,” said John King, senior advisor delegated the duty of the Deputy Secretary of Education. “We are excited to join other federal agencies leading on this work to ensure that we are part of the solution to helping classrooms transition to next generation materials.”

The proposed change will be open for thirty days of comment as soon as it's published at the regulations site. In the meantime, we can ponder what curious conditions lead to fans of the free market declaring their love for just plain free. But hey-- we know they're serious because they wrote a hashtag for it.

Sunday, November 15, 2015

KY: Big Data in Action

If you've been following the discussions of Competency Based Education and personalized education and huge new data mining, and you've been wondering what it would all look like on the ground--well, let's go to Kentucky!

The US Department of Education is mighty proud of Kentucky and its embrace of a one-stop shop for data about students and teachers. That stop is called the Continuous Instructional Improvement Technology System, and yes, there are so many naming and branding problems with the system that it is almost endearing in its clunkiness. I would not be surprised for a moment if I learned that Kentucky teachers are in-serviced by watching a filmstrip accompanied by a cassette that includes droning narration and a beep every time the filmstrip is supposed to be advanced. The sort-of-logo is a misshapen star that is clearly racing across something, carrying the words "Unbridled learning" on its...um... back. I presume that's some sort of Kentucky horsey reference. On top of that, nobody seems to know what to do with the name, which I have now seen rendered as "CIITS" or "CiiTS" in a variety of fonts and, well, it comes across anywhere between awkward and grossly inappropriate. And how is it pronounced? Apparently "sits," which is kind of awesome, because now when a Kentucky teacher gets a lousy rating through the system, colleagues can say the teacher took a real sitz bath.

All I'm saying is that somebody did not perform due diligence on the naming of this thing.

So what is this thing actually?

It gives teachers ready access to student data, customizable lessons and assessments, and a growing selection of professional development resources, such as training videos and goal-setting tools.

Folks praise it with the same sort of language usually used to laud CBE efforts-- "before I'd have to use a one-size-fits-all assessment, but now the computer administers one and gives me results for each student so I can design exactly what they need" and if you're thinking that sounds like regular teacher stuff, just with a computer, I'm right there with you.

But as we dig into CiiTS, we find that an awful lot of plain old teacher stuff is now supposed to be done with a computer.

For instance, here's a video showing how to load student assignments into The System. You will notice that The System is particularly well-suited to loading materials based on multiple-choice questions, so if I were teaching in Kentucky, I'm sure I'd want to cut back on all those subjective writing thinky type assignments and stick with stuff that doesn't give The System gas. So here's our seventy-gazillionth example of how designing education systems backwards warps the function of the system. In other words, a teacher ought to be asking, "What's the best way to check for understanding? How can I best check for the most high-order, critical thinking understanding and skills?" A teacher should not be asking, "What kind of assessment can I whip up that will fit the computer's data collection software?"

Oh, but CiiTS has more to offer than just recording every single grade for each student. Let's give that some context by feeding the computer all the lesson plans, linked to all the materials.

"Well, gee," you may ask. "If CiiTS is so loaded with data, it seems like I could keep an eye on everything." And indeed you could. Here's a power point presentation from the beginning of 2015 that looks at, among other things, getting people aligned to their correct job category so that CiiTS data can be properly deployed. So we have the capability of holding teachers accountable not just for one Big Standardized Test, but all those assignments the students did while they were still trying to learn the concepts. So remember, teachers-- when you design those materials, don't just remember to first consider the needs of the computer, but also remember that the assignment results will be part of your own personal record.

The presentation also reminds us that newer browsers are experiencing some conflicts with CiiTS, which is not surprising since CiiTS was rolled out in 2011.

The presentation also shares some of the state's usage numbers for the program, which include 47,524 unique teacher and leader logins. Kentucky has "over 40,000" teachers, so it looks like CiiTS is in wide use, with those 47,524 logins signing in almost 28 million times in 2014.

The slide show also indicates that teachers can load personal growth goals into the system, and so can students (who can record the self-reflection). So here's a system that can log in and assess every single assignment for every single student and track it against the standards, all stored up by individual.

USED thinks this all sounds swell. They say things like "more complete picture of student learning" and "more targeted support." Students can move from district to district and have their complete record follow them. Anywhere. And there are banks of videos, materials, assessments, and other swell things that are already pre-keyed to the system. True, there have been technical glitches along the way, but the IT guys are always improving. Meanwhile, the teacher evaluation portion (KY is the only state to go full Orwell on teacher evals so far) may soon be upgraded to include student surveys. And of course all of that is carefully stored as well. I wonder if any Kentucky teacher will ever have to fill out a job application again.

Just saying: if you've been worried that some day Big Data will get the tools in place to suck up every piece of personal data from your child in school, I am sorry to tell you that some day apparently arrived in Kentucky four years ago.

It sounds kind of like hell, but if any Kentucky teachers want to enlighten me further, I'd love to hear more. Because, yeah, it sounds pretty much like hell.

Guest Post: No Excuse, Deceptive Metrics and School Success

Emily Kaplan is an elementary school teacher in the Boston area. She's currently teaching in a public school, but her previous experience is with one of the region's high-achieving charter chains. She has written here about both her experience and some lessons from it, and I'm pleased to publish this here with her permission.

NO EXCUSE: AN ARGUMENT AGAINST DECEPTIVE METRICS OF SCHOOL SUCCESS

           Sixteen seven- and eight-year-olds sit in a circle on the floor. On the wall to their left— the first thing they see upon entering and exiting the classroom, always done in complete silence— is a list of individual “Assessment Goals.” (This “no excuses” charter network creates its own high-stress tests, which all students take at least five times per month, beginning in kindergarten.) One student’s math goal reads, “I only use strategies that I know.” Others include, “I read my work over so I don’t make careless mistakes.” “I begin each sentence with a capital letter.” “I draw base-ten blocks to show my work.” All are written in the teacher’s handwriting.
On the wall to their right is a list of the class averages from the last six network assessments (taken by all second graders across the charter network’s three campuses), all of which are in the 50s and 60s. Even though these two-hour tests are designed by network leaders to be exceptionally challenging— a class average of 80% is the holy grail for teachers, who use their students’ scores to compete for status and salary increases— this class’s scores are the lowest in the school, and the students know it.
The teacher speaks to them in a slow, measured tone. “When I left school here yesterday, after working hard all day to give you a good education so you can go to college, I felt disappointed. I felt sad.”
Shoulders drop. Children put their faces in their hands.
“And do you know why?” The teacher looks around the circle; children avert their eyes.
One child raises her hand tentatively. “We didn’t do good on our tests?”
The teacher nods. “Yes, you didn’t do well on your assessments. Our class average was very low. And so I felt sad. I went home and I felt very sad for the rest of the day.”
The children nod resignedly. They’ve heard this many times before.
Suddenly, one child, an eight-year-old who has been suspended for a total of sixteen days for repeatedly failing to comply with school rules, raises his hand. The teacher looks at him. “I am noticing that there is a question.”
The child tilts his head. “What does average mean?” Several children nod; it seems that they, too, have been wondering this, but have been too afraid to ask.
The teacher sighs. “It’s a way to tell if everyone in this room is showing self-determination. And what I saw yesterday is that we are not. Scholars in Connecticut College” —at the school, children are “scholars,” and classrooms are named after four-year colleges— “are not less smart than scholars in UMass. But the scholars in UMass got a 78% average.”
One girl pipes up. “And we only got a 65%!”
The teacher moves the child’s clothespin a rung down on the “choice stick” for speaking out of turn. “And the scholars in Lesley got a 79%. The scholars in UMass and the scholars in Lesley are not smarter than you are. They do not know how to read better than you.” She looks around. “They do not know how to write better than you.” Suddenly, her voice rises in volume. “Scholars, what can we do to show UMass and Lesley that we are just as smart as they are?”
The children look to the list of “assessment goals” posted on the wall. They raise their hands, one by one.
“I will read my work over so I don’t make mistakes.”
The teacher nods.
“I will begin every sentence with a capital letter.”
“I will do my best work so you don´t get sad anymore.”
The teacher smiles. “Good.”

            This teacher— with whom I co-taught a second grade class— is now a high-level administrator and “instructional coach” at the school. It is her job to ensure that the school's instructors (almost all of whom are white) “teach” using these dehumanizing, teacher-focused tactics with their students (almost all of whom are children of color from low-income families). The school is one of several Boston-area “no excuses” charters that receive major accolades (and many hundreds of thousands of dollars in grants and prizes) for their high scores on state standardized tests. Supporters and leaders of these schools claim that the high scores extracted using these methods prove that the schools are “closing the achievement gap.” Look, they say, pointing to the score reports: poor black kids in Boston are outperforming rich white kids in Newton and Brookline and Wellesley.
            And, indeed, this data is compelling. Its very existence teaches a powerful lesson that this country needs to hear: children of color from low-income homes can outperform wealthy white children on standardized tests, which are the metrics that we as a society have decided mean…well, something.
            The problem is that standardized test scores mean very little. On the only tests that do mean a tremendous amount for these students— the SSATs— students at the school where I taught perform abysmally. Consequently, these same middle schoolers who often dramatically outperform their wealthy white peers on state tests are not accepted in large numbers to the most selective high schools (and most of those who are struggle socially and emotionally when thrust into student bodies that aren’t upwards of 98% students of color); struggle to succeed academically in high school (81% earn high school grade-point averages below 3.0 in the first semester); and certainly do not thrive after high school, graduating from college at very low rates and, among those who don’t go to college, failing in large numbers to secure full-time employment.
Correlation is not causation, after all; the fact that those wealthy white students who do well on state standardized tests go on to enjoy tremendous opportunities, in education and in life, does not mean that these scores cause these outcomes. This fallacy, however, constitutes the fuel of the no-excuses runaway train, and leads to the dehumanization of children of color at schools like the one at which I taught. At this school, children are deprived of a comprehensive, developmentally appropriate, and humane education; instead, they are subjected to militaristic discipline, excessive amounts of testing (well beyond that which is already mandated by the state), a criminally deficient amount of playtime (in a nine-hour school day, kindergartners have twenty minutes of recess), and lack of access to social-emotional curricula— all so that the people who run their schools can make a political point.
            If we are to improve the educational prospects of this country’s most at-risk students, we need to examine our educational practices and institutions using metrics that matter. Standardized test scores are easier to obtain and compare than data which are nuanced, holistic, and, to the extent possible, representative of aspects of K-12 education which enable and predict access to higher education and opportunities in life. (The fact that we have not yet found the perfect embodiment of the latter by no means excuses the continued use of the former.) Our obsession with meaningless, deceptive standardized test scores creates schools, like the “no excuses” charter at which I taught, which seem to excel— but fail in the ways that truly matter. There is simply no excuse.