Sunday, February 7, 2016

ICYMI: Your Sunday Halftime Reading

Just kidding. At my house, the game will not even be on, and I'm pretty sure life will go on. But here are a few pieces to read today.

The Real Issue with Teacher Pay

The North Carolina 2015 Teacher of the Year has a few things to say about respect for the profession (and if you've been paying attention to North Carolina, you know why).

Alice's Adventures in Public Education

Turns out Lewis Carroll was writing about the future, and here we are. 

The Classroom Door Is Always Open

A visit to one of the few old-style schools of choice still operating out there. This is what it should be about.

Reforminess IS the Status Quo 

Jersey Jazzman continues his frustrated attempts to ground the education discussion in reality.

Why Aren't Public Schools Too Big To Fail? 

Steven Singer wonders why our response to failing schools is to abandon them, rather than attempt a rescue.

Cook for 17 minutes at 350 degrees

Frozen pizza instructions prompt a reflection on teaching skills in the English classroom.

George Orwell's Ed Conference 

Morna McDermott looks at the incredible, astonishing education conference coming up, courtesy of our good friends at Pearson.

Friday, November 27, 2015

Can Competency Based Education Be Stopped?

Over at StopCommonCoreNYS, you can find the most up-to-date cataloging of the analysis of, reaction to, and outcry over Competency Based Education.

Critics are correct in saying that CBE has been coming down the pike for a while. Pearson released an 88-page opus about the Assessment Renaissance almost a year ago (you can read much about it starting here). Critics noted way back in March of 2014 (okay, I'm the one who noted it) that Common Core standards could be better understood as data tags. And Knewton, Pearson's data-collecting wing, was explaining how it would all work back in 2012.

Every single thing a student does would be recorded, cataloged, tagged, bagged, and tossed into the bowels of the data mine, where computers would crunch the data and spit out a "personalized" version of that student's pre-built educational program.

Right now seems like the opportune moment for selling this program, because it can be marketed as an alternative to the Big Standardized Tests, which have been crushed near to death under the wheel of public opinion. "We'll stop giving your children these stupid tests," the reformsters declare. "Just let us monitor every single thing they do every day of the year."

It's not that I don't think CBE is a terrible idea-- I do. And it's not that I don't have a healthy respect for and fear of this next wave of reformy nonsense. But I can't shake the feeling that while reformsters think they have come up with the next-generation iPhone, they're actually trying to sell us a quadraphonic laser disc player.

From a sales perspective, CBE has several huge problems.

Been There, Done That

Teaching machines first cropped up in the twenties, running multiple choice questions and BF Skinner-flavored drill. Ever since, the teaching machine concept has kept popping up with regularity, using whatever technology was at hand to enact the notion that students can be programmed to be educated just like a rat can be programmed to run a maze.

Remember when teaching machines caught on and swept the nation because they provided educational results that parents and students loved? Yeah, neither does anybody else, because it never happened. The teaching machine concept has been tried again and again, each time accompanied by a chorus of technocrats saying, "Well, the last time we couldn't collect and act on enough data, but now we've solved that problem."

Well, that was never the problem. The problem is that students aren't lab rats and education isn't about learning to run a maze. The most recent iteration of this sad, cramped view of humans and education was the Rocketship Academy chain, a school system built on strapping students to screens that would collect data and deliver personalized learning. They were going to change the whole educational world. And then they didn't.

Point is, we've been trying variations on this scheme for almost 100 years, and it has never caught on. It has never won broad support. It has never been a hit.

Uncle Sam's Big Fat Brotherly Hands

Remember how inBloom had to throw up its hands in defeat because the parents of New York State would not stand for the extensive, unsecured, and uncontrolled data mining of their children? inBloom tried to swear that the kind of data mining, privacy violation, and unmonitored data sharing that parents feared just wouldn't happen on its watch. But the CBE sales pitch doesn't even promise to protect students from extensive data collection and wide data sharing-- it treats the data grubbing not as a danger but as a valued feature of the program.

The people who thought inBloom was a violation of privacy and the people who thought Common Core was a gross federal overreach-- those people haven't suddenly disappeared. Not only that, but when those earlier assaults on education happened, people were uneducated and unorganized-- they didn't yet fully grasp what was actually happening, and they didn't have any organizations or other aggrieved folks to reach out to. Now the networks are in place and the homework has already been done.

I don't envision folks watching CBE's big data-grabbing minions coming to town and greeting them as liberators. CBE is more of what many many many people already oppose.

No Successes To Speak Of

This has always been a problem for reformsters. "Give me that straw," they say, "and I will spin it into gold." They've had a chance to prove themselves with every combination of programs they could ask for, and they have no successes to point to. Remember all those cool things Common Core would accomplish? Or the magic of national standardized testing? The only people who have done a respectable job of touting success are the charteristas-- and that's not because they've actually been successful, but because they've mustered enough anecdotes and data points to cobble together effective marketing. It's lies, but it's effective.


Everything else? Bupkus. This will be no different. CBE will be piloted somewhere, and it will fail. It will fail because its foundation combines ignorance of what education is, how education works, and how human beings work.

Anchored to What?

A CBE system needs to be linked to some sort of national standards, but only those who have been very well paid to have a deep commitment to them are still even speaking the name of Common Core. To bag and tag a nation's worth of data, you must have common tags. But we've already allowed states to drift off into their own definitions of success, their own tests, their own benchmarks. Saying, "Hey, let's all get on the same page" is not quite as compelling as it once was, because we've tried it and it sucked. As the probable successor to the ESEA shows, centralized standardization of education is not a winning stance these days. So to what will CBE be anchored?

Expensive As Hell

Remember how expensive it was to buy all new books and get enough computers so that every kid could take a BS Test? You can bet that taxpayers do. Those would be the same taxpayers who saw programs and teachers cut from their schools even as there was money, somehow, for expensive but unnecessary new texts and computers (which in some cases could be used only for testing).

When policy makers announce, "Yeah, here's all the stuff you need to buy in order to get with the CBE program," taxpayers are going to have words to say, and they won't be happy, sweet words.

If every single worksheet, test, daily assessment, check for understanding, etc is going to go through the computer, that means tons of data entry OR tons of materials on the computers, through the network, etc etc etc. The kind of IT system required by a CBE system would be daunting to many network IT guys in the private sector (all of whom are getting paid way more than a school district's IT department). It will be time-consuming, buggy, and consequently costly.

Who wants to be the superintendent who has to say, "We're cutting more music and language programs because we need the money to make sure that every piece of work your child does is recorded in a central database"? Not I.

Program Fatigue

For the first time, the general taxpaying public may really get what teachers are feeling when they roll their eyes and say, "A NEW program? Even though we haven't really finished setting up the old one?!"  

Bottom Line

I think that CBE is bad education and it needs to be opposed at every turn. But I also think that reformsters are severely miscalculating just how hard a sell it's going to be. We can help make it difficult by educating the public.

There will be problems. In particular, CBE will be a windfall for the charter industry if they play their cards right. The new administration will play a role in marketing this and I see no reason to imagine that any of the candidates won't help market this if they win. (Well, Sanders might stand up to the corporate grabbiness of it, and Trump will just blow up all the schools.)

But there will be huge challenges for the folks who want to sell us this Grade C War Surplus Baloney. It's more of a product that nobody wanted in the first place. We just have to keep reminding them why they didn't like it.

Monday, November 16, 2015

USED Goes Open Source, Stabs Pearson in the Back for a Change

The United States Department of Education announced at the end of last month its new #GoOpen campaign, a program in support of using "openly licensed" aka open source materials for schools. Word of this is only slowly leaking into the media, which is odd, because unless I'm missing something here, this is kind of huge. Open sourced material does not have traditional copyright restrictions and so can be shared by anybody and modified by anybody (to really drive that point home, I'll link to Wikipedia).

Is the USED just dropping hints that we're reading too much into? I don't think so. Here's the second paragraph from the USED's own press release:

“In order to ensure that all students – no matter their zip code – have access to high-quality learning resources, we are encouraging districts and states to move away from traditional textbooks and toward freely accessible, openly-licensed materials,” U.S. Education Secretary Arne Duncan said. “Districts across the country are transforming learning by using materials that can be constantly updated and adjusted to meet students’ needs.”

Yeah, that message is pretty unambiguous-- stop buying your textbooks from Pearson and grab a nice online open-source free text instead.

And if that still seems ambiguous, here's something that isn't-- a proposed rules change for competitive grants. 

In plain English, the proposed rule "would require intellectual property created with Department of Education grant funding to be openly licensed to the public. This includes both software and instructional materials." The policy parallels similar policies in other government departments.

This represents such a change of direction for the department that I still suspect there's something about this I'm either not seeing or not understanding. We've operated so long under the theory that the way government gets things done is to hand a stack of money to a private company, allowing it both to profit and to maintain its corporate independence. You get federal funds to help you develop a cool new idea, then you turn around and market that cool idea to make yourself rich. That was old school. That was "unleashing the power of the free market."

But imagine if this new policy had been the rule for the last fifteen years. If any grant money had touched the development of Common Core, the standards would have been open source, free and editable for anyone in the country. If any grant money had touched the development of the SBA and PARCC tests, they would be open and editable for every school in America. And if USED money were tracked as it trickled down through the states-- the mind reels. If, for instance, any federal grant money found its way to a charter school, all of that school's instructional ideas and educational materials would have become the property of all US citizens.

As a classroom teacher, I find the idea of having the federal government confiscate all my work because federal grant money somehow touched my classroom-- well, that's kind of appalling. But I confess-- the image of Eva Moskowitz having to not only open her books but hand over all her proprietary materials to the feds is a little delicious.

Corporations no doubt know how to build firewalls that allow them to glom up federal money while protecting intellectual property. And those that don't may just stop taking federal money to fuel their innovation-- after all, what else is a Gates or a Walton foundation for?

And realistically speaking, this will not have a super-broad impact because it refers only to competitive grants, which account for about $3 billion of the $67 billion that the department throws around. 

So who knows if anything will actually come of this. Still, the prospect of the feds standing in front of a big rack of textbooks and software published by Pearson et al and declaring, "Stop! Don't waste your money on this stuff!" Well, that's just special.

And in case you're wondering if this will survive the transition coming up in a month, the USED also quotes the hilariously-titled John King:

“By requiring an open license, we will ensure that high-quality resources created through our public funds are shared with the public, thereby ensuring equal access for all teachers and students regardless of their location or background,” said John King, senior advisor delegated the duty of the Deputy Secretary of Education. “We are excited to join other federal agencies leading on this work to ensure that we are part of the solution to helping classrooms transition to next generation materials.”

The proposed change will be open for thirty days of comment as soon as it's published at the regulations site. In the meantime, we can ponder what curious conditions lead fans of the free market to declare their love for just plain free. But hey-- we know they're serious because they wrote a hashtag for it.

Saturday, September 12, 2015

Implementationism and Barber

This week, the Education Delivery Institute is delighted to announce a new book/marketing initiative co-authored by Nick Rodriguez, Ellyn Artis, and Sir Michael Barber. Rodriguez and Artis may not be familiar to you, but Barber is best known as the head honcho of Pearson. So you know where this is headed.

This is Nick Rodriguez, a personal trainer in Houston. Not the same guy.

Their new book has the more-than-a-mouthful title Deliverology in Practice: How Education Leaders Are Improving Student Outcomes, and it sets out to answer the Big Question:

Why, with all the policy changes in education over the past five years, has progress in raising student achievement and reducing inequalities been so slow?

In other words-- since we've had full-on reformsterism running for five years, why can't they yet point to any clear successes? They said this stuff was going to make the world of education awesome. Why isn't it happening?

Now, you or I might think the answer to that question could be "Because the reformy ideas are actually bad ideas" or "The premises of the reforms are flawed" or "The people who said this stuff would work turn out to be just plain wrong." But no-- that's not where Barber et al are headed at all. Instead, they turn back to what has long been a popular excuse explanation for the authors of failed education reforms.

Implementation.

"Well, my idea is genius. You're just doing it wrong!" is the cry of many a failed genius in many fields of human endeavor, and education reformsters have been no exception.

Just an implementation problem. As in, don't implement these into your digestive system.

I have trouble wrapping my head around the notion that implementation is somehow separate from conception. Cold fusion is a neat concept, but the fact that it cannot actually be implemented in the world we physically inhabit renders it kind of useless. Politics are particularly susceptible to the fallacy that My Idea Is Pure Genius and if people would just behave the way I want them to, it will all work out brilliantly.

"Millions starving? Don't worry-- it's just an implementation problem."

The implementation fallacy has created all sorts of complicated messes, but the fallacy itself is simply expressed:

There is no good way to implement a bad idea. 

Barber, described in this article as "a monkish former teacher,"  has been a champion of bad ideas. He has a fetish for data that is positively Newtonian. If we just learn all the data and plug it into the right equations, we will know everything, which makes Michael Barber a visionary for the nineteenth century. Unfortunately for Barber, in this century, we're well past the work of Einstein and the chaoticians and the folks who have poked around in quantum mechanics, and from those folks we learn things like what really is or isn't a solid immutable quality of the universe and how complex systems (like those involving humans) experience wide shifts based on small variables and how it's impossible to collect data without changing the activity from which the data is being collected.

Barber's beliefs in standardization and data collection are in direct conflict with the nature of human beings and the physical universe as we currently understand it. Other than that, they're just as great as they were 200 years ago. But Barber is a True Believer, which is how he can say things like this:

“Those who don’t want a given target will argue that it will have perverse or unintended consequences,” Sir Michael says, “most of which will never occur.” 

Yup. Barber fully understands how the world works, and if programs don't perform properly, it's because people are failing to implement correctly.


Nothing wrong with the suit. It's just an implementation problem.

Fortunately, Barber has a system for fixing the implementation issue.

Deliverology 

According to this piece in the Economist, Barber was early on inspired by a 1995 book by Mark Moore, Creating Public Value, a work also popular in the Clinton administration. Barber went on to develop his own version of How To Get Things Done, which supposedly was at first mockingly called Deliverology, a term that Barber embraced. Google it and you'll find it everywhere, generally accompanied by some version of these steps (here taken from a review of the new book):

  • Set clear goals for students, establish a Delivery Unit to help your system stay focused on them, and build the coalition that will back your reforms.
  • Analyze the data and evidence to get a sense of your current progress and the biggest barriers to achieving your goals.
  • Develop a plan that will guide your day-to-day work by explicitly defining what you are implementing, how it will reach the field at scale, and how it will achieve the desired impact on your goals.
  • Monitor progress against your plan, make course corrections, and build and sustain momentum to achieve your goals.
  • Identify and address the change management challenges that come with any reform and attend to them throughout your delivery effort.

You're going to what with a delivery unit??

There are so many things not to love about this approach. Personally, I'm very excited about working as part of a Delivery Unit, and look forward to adding Delivery Unit to my resume. And "a coalition that will back your reforms" sounds so much nicer than "posse of yes-persons." I don't really know what "reach the field at scale" is supposed to mean, but it sounds important! Nor has it escaped my notice that this whole procedure can be used whether you are teaching humans, training weasels, or manufacturing widgets. 

But the most startlingly terrible thing about deliverology is that it allows absolutely no place for reflection or evaluation of your program. Surround yourself with those who agree. Anything that gets in your way is an "obstacle." And at no point in the deliverology loop do I see a moment in which one stops to ask, "So, is our set of goals actually doing anybody any good? Let's take a moment to ask if what we're trying to do is what we should be trying to do."

This is another problem of implementationism-- the belief that implementing a program is completely separate from designing and creating it in the first place. Implementation should be an important feedback loop. If you start petting your dog with a rake and the dog starts crying and bleeding, the correct response is not, "We have an implementation problem. We'll need to hold down and silence the animal so that it doesn't provide a barrier to implementing our rake-petting program."

No, the proper response is, "Holy hell! Petting my dog with a rake is turning out to be a terrible idea! I should start over with some other idea entirely!"

My dates all end badly, but I'm sure it's not me. Must be an implementation problem.

Implementationism and Deliverology Misdirection

What these ill-fated approaches do is allow guys like Barber to focus attention everywhere except the place where the problem actually lies. 

If I develop a cool new unit for my classroom, and it bombs terribly, I can certainly look at how I implemented and presented the unit. But I would be a fool (and a terrible teacher) not to consider the possibility that the unit just needs to be heavily tweaked or just plain scrapped. As long as Barber and his acolytes insist that there's nothing wrong with Common Core or high-stakes testing or massive data collection to feed a system that will allow us to tell students what breakfast they should eat, they will face an endless collection of implementation problems.

Just a little implementation problem

If the Titanic had never hit an iceberg on her maiden voyage, she might have looked like she was having no implementation problems at all. But between her bad design and inadequate safety measures, some sort of disaster was going to happen sooner or later. 

Deeply flawed design yields deeply flawed results, and quality of implementation won't change that a bit. 

Five years (at least-- depending on how you count) of reformster programs have yielded no real success stories. This is not an implementation problem, and reformsters have to look at that and consider the possibility that their beloved reformy ideas have fundamental problems. To be fair, some have. But those who don't simply can't be taken seriously. Even if they write books.

Wednesday, June 10, 2015

Pearson Wants To Check Your Glasses

You just can't make this stuff up.

Pearson VUE is the division of the massive corporation that actually delivers tests to a computer screen near you. They are, for instance, the folks who handle the actual administration of the GED, but they also handle nursing exams and many financial industry clients.

But you don't stay on top of that industry without being on top of things. So here's a new policy that came out in a February circular from Pearson VUE:

Pearson VUE upholds a high level of security for safeguarding the testing programs offered by our exam sponsors. To maintain this high level, we are continually evaluating our technology and processes to ensure that we are adequately addressing existing and emerging security threats. New technology advancements in eyewear, such as Google Glass, camera glasses and spy glasses, and the availability of this technology have been identified as security risks.

As a result, we conducted a pilot to improve our processes to visually inspect candidate eyeglasses during the admissions process and created specific training on how to identify eyeglasses with built-in technology. The purpose of the pilot was to field test the change in process for visually inspecting all candidate glasses for built-in technology.

Yes, the next time you go to take the GED, you'll have to present your eyeglasses for inspection (though the test administrator is not to actually touch them) to determine that you are not using any spywear. 

No sign yet that we'll be imposing similar security measures on students taking the PARCC, but I am now officially not going to be shocked when it happens. Because when you're protecting something as precious as proprietary test information, you just can't be too careful.

Tuesday, March 17, 2015

More Social Media Stalking: Meet Tracx

My blog-colleague Daniel Katz reported a curious follow-up to his recent post about the Pearson-Tracx-Social media monitoring flap-- a piece of spambot advertising for Tracx that popped up in his comments. "Hilarious," I thought. "A company spams a blog that is actually holding it up as an example of bad behavior."

But lo and behold, this afternoon I find two copies of the identical message on one of my own posts about Pearson's Big Brotherly behavior.

Tracx offers a unified, enterprise-scale, social media management platform. We help brands and organizations from around the world listen and learn about issues related to their products and services so that they can provide a better customer experience and reach new audiences. To learn more about Tracx visit http://www.tracx.com #customerexperience #betterservice #bettersupport #betterproducts​ #engagingnewaudiences 

The comment was "signed" by Benjamin Foley. Now put on your water-wings, boys and girls, as we head down the Tracx rabbit hole.


Turns out that Ben Foley is a person. In fact, he's the person who writes for the blog Making Tracx, all about the awesomeness of Tracx as a means of stalking customers on the interwebs. Come on! Let's learn more!

Meet Ben Foley

Foley's LinkedIn account announces that he is "a creative, results-oriented, energetic and highly motivated marketing leader. " Now, Ben is a young fella-- he's only been at his current post at Tracx since July of 2014, where he does cool things like leading "content creation activities (blog posts, social posts, thought leadership pieces/whitepapers)." Before that he spent nine months as a Tracx sales associate. Before that he was an intern at the Concord, MA, district courthouse (4 months). Before that four months at Vector Marketing, before that four months at the Nauset surf shop (summer of 2010), and before that, a little over two years as a grocery manager at Chatham Village Market in Chatham, MA.


Ben is a 2013 graduate of St. Lawrence University where-- well, I'll just let him tell you in his special prose style:


His relentless analytical interest in the forces that drive people’s emotions and behaviors, both at the individual and group level, resulted in his obtainment of a B.S. in Psychology from St. Lawrence University. Since then Ben has applied his passion for people, their motivations, and their interests to the social media analytics industry by influencing strategic vision and executing multifaceted marketing campaigns.

I don't want to indulge in too much mockation of Young Ben; I remember the days after my obtainment of my degreeification, and the deep pleasure I took in spouting college level gibberish. But this is a man who has clearly found a home in an industry that rewards garblizationizing. I mean, here's another one of his achievements as marketing coordinator:

 Drives multichannel lead generation activities which have yielded significant YOY increase in growth, coupled with a YOY increase in marketing generated pipeline.

This is the guy that Tracx has pushing Tracx out into the world and representing their consummate social mediazation skills. I'm feeling a little YOY myself.

What does he say about Tracx?

I happened to land on this blog post--Out of The Dark Ages: The Rise of Social Media Sentiment Analysis, Part 2-- The Renaissance.

Major irony alert: This is all about how media monitoring software can successfully read the sentiment of a post. In other words, the bots can tell whether your post is filled with love or deeply infusified with hatred and anger-- admittedly useful for companies to know. The post is from 2014, and so one would think that this wonderful software would be able to tell the difference between a blog post that says, "Shame on you, you terrible stalking big brothery corporate stooges" and one that says, "I love being stalked on line. Please tell me more about how I can be stalked more effectively." But apparently that feature is not yet available.

The latest post on the blog reports breathlessly that Edison Partners has featured Tracx CEO Eran Gilad because he's awesome. Eran first met the Tracx team when a friend asked him to validate their business plan, which I guess is how you "meet cute" in the tech world.

Eran has some business background beyond grocery management.


After 8 years as VP, Business Development at Comverse, a large telecom vendor, I decided to tap into the Israeli start-up nation scene where you can freely make no money and still be considered a local hero. Go figure.

He also spent five years in the military and worked with "the Nordics." His recommended reading for every executive is Siddhartha. And -- make of this what you will-- if given the chance to have a super-power, he would choose the power to cancel everyone else's super power.

The blog also trumpets that Tracx ("the global leader in social listening and engagement platforms for Fortune 1000 companies") has integrated image and text analytics, the better to figure out what we're all up to. For this post, Young Ben throws in another description of the Tracx brand:

Tracx is the next generation social enterprise platform that empowers brands to manage, monetize, and optimize their business. The technology refines and analyzes masses of data across all social channels, providing deep insights into customer, competitor, and influencer behaviors. It delivers the most relevant, high impact audiences and conversations by capturing a 360-degree view of activity around a brand, product, or ecosystem. With Tracx, companies obtain geographic, demographic, and psychographic insights to identify and target influencers, improve planning, enhance monitoring, and effectively focused engagement. Tracx is headquartered in New York City with offices in Tel Aviv and London .

YOY, indeed.

I don't want to pick on Young Ben personally, but spamming my blog is kind of asking for it. I just want us all to remember, the next time we're contemplating the kinds of companies that make a living finding more effective ways to monitor our behavior and sell the data to our corporate overlords-- let's all just remember that this is also the kind of company that hires and empowers the kind of fresh-faced young kid who writes about the obtainment of his degree, and if we found him in our high school parking lot monitoring the comings and goings of our students, it would make us sad. 

Saturday, March 14, 2015

Pearson Proves PARCC Stinks

When I was in tenth grade, I took a course called Biological Sciences Curriculum Studies (BSCS). It was a course known for its rigor and for its exceedingly tough tests.

The security on these tests? Absolutely zero. We took them as take-home tests. We had test-taking parties. We called up older siblings who were biology majors. The teacher knew we did these things. The teacher did not care, and it did not matter, because the tests required reasoning and application of the basic understanding of the scientific concepts. It wasn't enough, for instance, to know the parts of a single-celled organism-- you had to work out how those parts were analogous to the various parts of a city where the residents made pottery. You had to break down the implications of experimental design. And as an extra touch, after taking the test for a week outside of class, you had to take a different version of the same test (basically the same questions in a different order) in class.

Did people fail these zero-security take home tests? Oh, yes. They did.

I often think of those tests these days, because they were everything that modern standardized test manufacturers claim their tests are.

Test manufacturers and their proxies tell us repeatedly that their tests require critical thinking, rigorous mental application, answering questions with more than just rote knowledge.

They are lying.

They prove they are lying with their relentless emphasis on test security. Teachers may not look at the test, cannot so much as read questions enough to understand the essence of them. Students, teachers, and parents are not allowed to know anything specific about student responses after the fact (making the tests even less useful than they could possibly be).

And now, of course, we've learned that Pearson apparently has a super-secret cyber-security squad that just cruises the interwebs, looking for any miscreant teens who are violating the security of the test and calling the state and local authorities to have those students punished (and, perhaps, mounting denial of service attacks on any bloggers who dare to blog about it).

This shows a number of things, not the least of which is what everyone should already have known-- Pearson puts its own business interests ahead of anything and everything.

But it also tells us something about the test.

You know what kind of test needs this sort of extreme security? A crappy one.

Questions that test "critical thinking" do not test it by saying, "Okay, you can only have a couple of minutes to read and think about this because if you had time to think about it, that wouldn't be critical thinking." A good, solid critical thinking question could take weeks to answer.

Test manufacturers and their cheerleaders like to say that these tests are impervious to test prep-- but if that were true, no security would be necessary. If the tests were impervious to any kind of advance preparation aimed directly at those tests, test manufacturers would be able to throw the tests out there in plain sight, like my tenth grade biology teacher did.

A good assessment has no shortcuts and needs no security. Look at performance-based measures-- no athlete shows up at an event and discovers at that moment, "Surprise! Today you're jumping over that bar!"

Authentic assessment is no surprise at all. It is exactly what you expect because it is exactly what you prepared for, exactly what you've been doing all along-- just, this time, for a grade.

That Big Stupid Test manufacturers insist their test must be a surprise, that nobody can know anything about it, is a giant, screaming red alarm signal that these tests are crap. In what other industry can you sell a customer a product and refuse to allow them to look at it? It's like selling the emperor his new clothes and telling him they have to stay in the factory closet. Who falls for this kind of bad sales pitch? "Let me sell you this awesome new car, but you can never drive it and it will stay parked in our factory garage. We will drive you around in it, but you must be blindfolded. Trust us. It's a great car." Who falls for that??!!

The fact that they will go to such extreme and indefensible lengths to preserve the security of their product is just further proof that their product cannot survive even the simplest scrutiny.

The fact that product security trumps use of the product just raises this all to a super-Kafkaesque level. It is more important that test security be maintained than it is that teachers and parents get any detailed and useful information from it. Test fans like to compare these tests to, say, tests at a doctor's office. That's a bogus comparison, but even if it weren't, test manufacturers have created a doctor's office in which the doctor won't tell you what test you're getting, and when the test results come back STILL won't tell you what kind of test they gave you and will only tell you whether you're sick or well-- but nothing else, because the details of your test results are proprietary and must remain a secret.

Test manufacturers like Pearson are right about one thing-- we don't need the tests to know how badly they suck, because this crazy-pants emphasis on product security tells us all we need to know. These are tests that can't survive the light of day, that are so frail and fragile and ineffectual that these tests can never be tested, seen, examined, or even, apparently, discussed.

Test manufacturers are telling us, via their security measures, just how badly these tests suck. People just have to start listening.

Pearson Is Big Brother

You've already heard the story by now-- Pearson has been found monitoring students on social media in New Jersey, catching them tweeting about the PARCC test, and contacting the state Department of Education so that the DOE can contact the local school district to get the students in trouble.

You can read the story here at the blog of NJ journalist Bob Braun. Well, unless the site is down again. Since posting the story, Braun's site has gone down twice that I know of. Initially it looked like Braun had simply broken the internet, as readers flocked to the report. Late last night Braun took to facebook to report that the site was under attack and that he had taken it down to stop the attack. As I write this (6:17 AM Saturday) the site and the story are up, though loading slowly.

The story was broken by Superintendent Elizabeth Jewett of the Watchung Hills Regional High School district in an email to her colleagues. In contacting Jewett, Braun learned that she confirmed three instances in which Pearson contacted the NJDOE to turn over miscreant students for the state to track down and punish. [Update: Jewett here authenticates the email that Braun ran.]

Meanwhile, many alert eyes turned up this: Pearson's Tracx, a program that may or may not allow the kind of monitoring we're talking about here.

Several thoughts occur. First, under exactly whose policy are these students to be punished? Does the PARCC involve them taking the same kind of high-security secrecy pledge that teachers are required to take, and would such a pledge be binding if signed by a minor, anyway?

How does this fit with the ample case law already establishing that, for instance, students can go on line and create websites or fake facebook accounts mocking school administrators? They can mock their schools, but they have to leave important corporations alone?

I'm also wondering, again, how any test that requires this much tight security could not suck. Seriously.

How much of the massive chunk of money paid by NJ went to the line item "keep an eye on students on line?"

Granted, the use of the word "spying" is a bit much-- social media are not exactly secret places where the expectation of privacy is reasonable or enforceable, and spying on someone there is a little like spying on someone in a Wal-mart. But it's still creepy, and it's still one more clear indicator that Pearson's number one concern is Pearson's business interests, not students or schools or anything else. And while this is not exactly spying, the fact that Pearson never said a public word about their special test police cyber-squad, not even to spin it in some useful way, shows just how far above student, school, and state government they envision themselves to be.

Pearson really is Big Brother-- and not just to students, but to their parents, their schools, and their state government. It's time to put some serious pressure on politicians. If they're even able to stand up to Pearson at this point, now is the time for them to show us.

Saturday, February 14, 2015

Working within the College Marketplace

Looking at a Pearson employment-ish opportunity took me down a whole new rabbit hole. If you think of college campuses as a sort of oasis in our otherwise sales-obsessed culture, I have bad news for you.

The actual name of the job is Pearson Campus Ambassador. You can find the whole pitch here, but let me walk you through the highlights.

The job description is a bit fuzzy. Right up front the pitch is

We're hiring students to work part-time with Pearson on campus and help students do well in their courses. You get work experience, and we help more students succeed. It's a win-win! 

But among the touted benefits are "building valuable business and career skills like problem-solving, public speaking and communication." Ambassadors can also look forward to "working side by side with faculty, Pearson staff, and others."

Under a "Responsibilities" tab, a clearer picture begins to emerge. Here's what the job involves:


  • Lead Pearson technology demonstrations and/or presentations for students and faculty.
  • Create events, activities, and opportunities that help students best use Pearson technologies.
  • Collect campus feedback through focus groups, surveys, and individual interviews.
  • Work with professionals at Pearson to promote current products and shape future ones.
  • Participate in conference calls, team trainings, and regularly scheduled team meetings.

And from elsewhere in the website, another description of what the job looks like:

Pearson Campus Ambassadors are instrumental in leading classroom presentations on Pearson technologies, hosting tables at book fairs, and capturing video testimonials. They also facilitate focus groups, conduct student surveys, and create projects and events specific to the student experiences on their campuses.


So, basically, the job is to be a Mary Kay lady who throws Tupperware parties for Pearson on your college campus.

Pearson's been doing this for a while. Actually, everybody's been doing this for a while. Some of the obvious players like Google and Microsoft hire student "ambassadors," but I found information about ambassadors for Barnes & Noble and General Mills (trying to get your college buddies to eat breakfast strikes me as an impressive challenge).

Here's Isa Adney, back in 2012, talking about her experience as a Pearson ambassador:

I actually consult with a wonderful student ambassador program with Pearson (the leading education services company), where students are paid to be on campus to help teach students how to get the most out of their new learning materials (such as Pearson's MyMathLab) through class presentations and tutoring hours. These student ambassadors get to know more people on campus, earn money to help them in school, build connections and meet mentors within the company, and gain some really great professional experience. This doesn't always happen while working at a fast-food restaurant in college. 

One can see that this works well for the company-- a low-cost low-maintenance high-impact marketing strategy that doubles as a recruitment program, while helping get young people at the very beginning of their adult careers to think of corporations as their good buddies. And you don't have to sell your soul-- just your friends.



Wednesday, January 21, 2015

Good vs. Uniform

There's an interesting and fairly well-balanced article in February's Forbes about Pearson, focusing on CEO John Fallon. "Everybody Hates Pearson" is worth a read, but I'm going to pull a pair of quotes from it today.

At one point, writer Jennifer Reingold says this about Fallon:

He emphasizes that the company’s goal is to help students succeed...

That's not entirely true. Part of Pearson's job as the international behemoth of education is to define what "succeed" means. Which is precisely why an international behemoth of education poses a danger to education. Earlier in the article, Reingold offers this quote:

“It doesn’t matter to us whether our customers are hundreds of thousands of individual students and their parents in China, or thousands of school districts in America,” says Fallon. “What we’re trying to do is the same thing—to help improve learning outcomes.”

There's your problem. If you're trying to do "the same thing" for a student in the US and a student in China, and if "it doesn't matter" to you which is which, then something is wrong.

A Pearson fan is going to protest, "Well, not exactly the same thing. No, obviously not that." And I'm sure that Pearson makes sure to change the language of the test and adjust the price for the local currency. But if your focus is on a fundamental sameness, if you are looking to create a uniform approach to education that can be used all across the globe, then you're doing it wrong.

What we're talking about is uniformity, standardization-- and uniformity is the enemy of excellence.

Jack Teagarden, George Brunis, and J. J. Johnson were great jazz trombone players, and it takes me about two seconds of listening to a recording to know which one I'm hearing play, because they are completely different. All excellent. All different.

Now, I could say that they're all essentially doing the same thing-- playing jazz trombone. But as soon as I try to come up with an "objective" measure of good jazz trombone playing, one that would fit all three of them plus Miff Mole or Urbie Green or some guy in China that I don't even know about, I would have to choose one of two options.

A) Declare that one behavior is defined as success, in which case I could end up declaring that Teagarden is not a great jazz trombonist because he doesn't use Johnson's be-bop licks. Any system that calls Teagarden a failure as a jazz player is patently absurd.

B) Select a definition of success that includes only traits that all jazz trombonists share. Or to put it another way, come up with a definition of success that deliberately excludes all the traits that make particular jazz trombonists great. This is also deeply backwards.

Uniformity and standardization do not just fail to embrace excellence; they actively reject it. Excellence, difference, variation, individuality-- all must be marked as failure because all violate the standard of uniformity.

When a cook at a McFastfood McRestaurant cooks a stunning chicken cordon bleu, he doesn't get a commendation. He gets fired. When a flight attendant shows up for work in uniform clothing that she has torn up and resewn into a stylish gown, she doesn't get a special prize.

There's only one way to create an educational system that can be marketed all around the globe, only one way to create a system that doesn't care whether you're using it in Nanking or Omaha. The only way to create such a system is to define success as something uniform, bland, and mediocre. The only way to create such a system is to use a definition of success that rejects excellence or any other sort of difference.

You can have an educational system that is good, or one that is uniform. You can't have both.




Sunday, December 21, 2014

Defending The Test

Feeling feisty after a successful election run, Republicans are reportedly gunning for various limbs of the reformster octopus, and reformsters are circling the wagons for strategic defense of those sucker-covered limbs.

People are finally remembering that it's the ESEA, overdue since 2007 to be transformed from No Child Left Behind into something new, which gives the current reformster wave of waivers its power. Fix the ESEA properly and you cut the legs out from under the current non-laws governing K-12 education in this country. At Ed Week, Klein and Camera report that some GOP aides are already drafting a version of an ESEA rewrite that removes the federal testing mandate. I'm a fan of the idea; months ago, I picked high stakes testing as the reformy thing I'd most like to see die.

Massive high stakes testing is at the center of the reformster program, but it's also one of the most visible and widely hated features of reformsterism. Duncan and other bureaucrats have been issuing word salads aimed at changing the optics since last summer, but nothing of substance has been done to lessen the impact of high stakes testing. Duncan saying, "Schools shouldn't focus on testing so much" without changing any of the policies related to testing is like a mugger saying, "Don't be so pre-occupied with my gun" while he continues to take your wallet.

Our current system is positively Kafkaesque, or possibly Dilbertesque. We have literally stopped doing our jobs full time so that we can devote more time to generating reports on how well we're doing our jobs. Even if the Big Tests were an accurate measure of how well we're doing our jobs (which they most certainly are not), the current set-up is unequivocally, absolutely stupid. It is like having welders spend half as many hours welding so that they can write up reports on the output of the welding unit in the factory. It's like having your boyfriend go on half as many dates so that he can stay home and write notes about how much he misses you. It's like feeding your baby half as many meals because you need to keep him on the scale to check if he's gaining enough weight.

Actually-- it's worse than all of those. It is supervisory bureaucrats believing that their part of the process-- checking on how the work is going-- is more important than actually doing the work.

Objections to cutting testing all fall into that category. They are all variations on, "But if testing is cut, how will my office know what is going on in classrooms?" Well, dipstick, we are trying to tell you what is going on in classrooms-- teachers regularly stop doing actual teaching so that they can prepare for and take your damn tests.

People propose local tests. Reformsters complain that local people just don't know how to make sexy, rigorous tests as well as corporate sponsors like Pearson. People propose staggering the tests, taking only one a year, or one every couple of years. Reformsters claim that this would make it easier to game the system, as if the testing system is not one giant game right now.

In his defense of testing, Andy Smarick offers this list of benefits of annual testing:
  • It makes clear that every student matters.
  • It makes clear that the standards associated with every tested grade and subject matter.
  • It forces us to continuously track all students, preventing our claiming surprise when scores are below expectations.
  • It gives us the information needed to tailor interventions to the grades, subjects, and students in need.
  • It gives families the information needed to make the case for necessary changes.
  • It enables us to calculate student achievement growth, so schools and educators get credit for progress.
  • It forces us to acknowledge that achievement gaps exist, persist, and grow over time.
  • It prevents schools and districts from “hiding” less effective educators and programs in untested grades.
Most of these are laudable goals-- that can be accomplished in other, better ways. Please don't tell me that if we put a group of teachers and education thought leaders in a classroom and asked, "What's the best way to make it clear that every student matters?"-- please don't tell me that the first answer on everybody's lips would be, "Why, to give them all a standardized test, of course!" Some of these are marketing issues-- reformsters like Big Tests Scores as a means to push choice and charters. And some of these are bureaucratic. Smarick's last three items have nothing at all to do with schools doing their job well; they are simply about making sure Important People have data points to put on bureaucratic and political documents.

Smarick shares with Andrew Saultz and others the belief that testing is also necessary in order to target failing schools. I call baloney on this. Smarick has been a critic of lousy urban schooling for a while; I don't believe for a second that he needed standardized test scores to conclude that some poor urban schools were doing a lousy job. If my hand is resting on a red-hot electric range, and the flesh is sizzling and smoke is curling up from my hand, I'm not standing there saying, "Hey, could someone bring me a thermometer so I could check this temp? I might have a problem here."

The one argument I can concede is that terrible test scores might allow activists to light a fire under the butts of non-responsive politicians (who would not notice a burning hand unless it was holding a thick stack of $100 bills). But we've had time for that to work, and it isn't happening. Lousy scores in poor urban schools are not being used to funnel resources, make infrastructure improvements or  otherwise improve poor urban schools-- results are just being used to turn poor urban schools into investment and money-making opportunities for charter operators and investors, and after a few years those outfits have no successes to point to that aren't the result of creaming or creative number-crunching. So this pro-test argument is also invalid.

Mike Petrilli has also stepped up to defend testing. Responding to the reported rewrite initiatives he asks,

Do Republicans really want to scrap the transparency that comes from measuring student (and school and district) progress from year to year and go back to the Stone Age of judging schools based on a snapshot in time? Or worse, based on inputs, promises, and claims? Are they seriously proposing to eliminate the data that are powering great studies and new findings every day on topics from vouchers to charters to teacher effectiveness and more?

The biggest problem with Petrilli's defense is that the current battery of bad standardized tests is not accomplishing any of those things. They are not providing transparency; they are just providing bad data more frequently than the "stone age" technique did. The current Big Tests get their own authority and power from nothing more than "inputs, promises and claims." For-profit corporations are really good at creating that kind of marketing copy, but that doesn't make it so. And if data from the Big Tests are powering great studies and new findings, I'd like to see just one of them, because I read up pretty extensively, and I haven't seen a thing that would match that description.

Petrilli does, however, have one interesting idea-- "kill the federal mandate around teacher evaluation and much of the over-testing will go away."

I've always said that Petrilli is no dummy (I'm sure he feels better knowing I've said it). Tying teacher (and therefore school, and, soon, the college from which the teachers graduated) evaluation to both The Test and to the teachers' career prospects guarantees that schools will be highly motivated to center much of everything around that test. This is an aspect of the testing biz that Arne either doesn't understand or is purposefully ignoring. I tend toward the latter; if we go back to the Race to the Top program, we see that teacher evaluation linked to test results is the top policy goal.

If the test result mandate didn't come from the feds, each state would come up with its own version. It might not be any better than the current situation, but we'd have fifty interesting fights instead of one big smothering federal blanket. And each state would still have to come up with some sort of answer to the question of how to evaluate a fifth grade art teacher with third grade math test results.

Of course, there's a trade-off with reducing pressure to do all testing, all the time. The less pressure associated with The Big Test, the more students will not even pretend to take the tests a little bit seriously, and the less valid the results will be (and as invalid as the results are now, there's plenty of room left for that to go further south).

Tests are going to stay under the gun because they are at once both the most visible and most senseless part of reformsterism. They are an even easier target for Republicans than the Common Core itself because, unlike CCSS, everybody knows exactly what they are and whether or not they've been rolled back, and their supporters can't point at a single concrete benefit to offset the anxiety, counter-intuitive results, and massive waste of school time. And tests have reached into millions of American homes to personally insult families ("You may think your child is bright and worthy, but I'm an official gummint test here to tell you that your kid is a big loser").

But tests will be vigorously defended because-- Good God!! Look at that mountain of money!! The business plan of Pearson et al is about way more testing, not less. Test data is important to create charter marketing and support voucher programs. And because technocrats need data to drive their vision of reform, they can never admit that the emperor not only has no clothes, but also is not actually an emperor but rather a large hairless rat that has learned to walk on its hind legs.

In short, The Big Test may turn out to be the front line, the divider between people who are worried about actual live human children and people who are worried about programs and policies and-- Good God!! That mountain of money is sooooo huge!!! You can bet that as we speak, lobbyists and their ilk are being dispatched tout de suite to do some 'splaining to those GOP politicians who are coming after their bread and butter. Keep your eyes peeled as we enter the new year to see how this plays out.

Wednesday, December 17, 2014

Who Measures the Rulers?

Nobody squawked much when it was announced that Pearson had won the bid to develop the framework for the 2018 PISA test. The PISA, you will recall, is administered by the Organisation for Economic Co-operation and Development every three years, leading directly to a festival of handwringing and pearl-clutching as various politicians and bureaucrats scramble to squeeze statistical blood from the big fat turnip of test results.

And yes, Pearson just won the right to design the 2018 edition. Given that back in 2011 Pearson won the contract to develop the 2015 PISA, the new contract is not a shocker. Given that Pearson is marching toward becoming the Corporation In Control Of Universal Testing, this barely qualifies as a blip. They have the GED. They have the PARCC. They have dreams of managing via computer every test, testlet, and testicle that exists.

There are many problems with that, but one of the fundamental issues is the one raised by this post's title.

When one person with one ruler does all the measuring, how are we to know if he's correct?

If we want to confirm the accuracy of our Pearson measuring tool, we check it against our Pearson standards device and make sure those results line up with the Pearson Master Assessment-- but at the end of all that, what do we really know?

If Pearson tells us that our six-inch long baby pig weighs 500 pounds, how are we to discover that it's a lie? If Pearson weighs our bag of gold and tells us it's worth $1.98, and they own all the scales, how do we know if we're being cheated?

It doesn't matter whether the people who make the rulers are devious or incompetent-- if there is no one left to check their work, how do we know the true dimensions of anything? If Pearson makes all the tests and keeps assuring us, "Yessiree, this test lines up with our other test and fits in with the main test, so we can assure you that this absolutely measures true learning or complete education or intelligence or character or what matters in a human brain or the strength of a nation's education program," how do we check to prove whether that is true or not?

Who watches the watchmen? Who measures the rulers? To whom does Pearson answer, other than stockholders? I'm hoping we don't wake up some morning to discover the answer is "nobody."

Tuesday, December 16, 2014

Pearson's Renaissance (1): History and Revolution

Pearson has released another essay/position paper/world conquest outline. This one comes from Peter Hill and Pearson Commandant Michael Barber, and it's entitled "Preparing for a Renaissance in Assessment." We've looked at a Pearson position paper before, and it was kind of scary, so for that reason alone, this is not for the faint of heart. The fact that this paper is eighty-some pages long is also reason to balk. But because I love you guys, I am going to wade through this so that you don't have to. Although you probably should, because it's always good to get to know your new overlords.

I'm not kidding about the 88 pages. I'm going to break this up into several posts, mostly because I know some of you read on phones and tablets and I don't want to bust your thumbs. If you would like to get just some highlights, try this post. But over here we'll power through this a bit at a time, starting with the first segment of the paper, which presents Pearson's version of History So Far, what is driving the revolution in education, and what the revolution demands.

The Preliminaries

The cover features a multi-ethnic group of teenagers sitting at school desks working on digital tablets, just so you have an inkling of where we're headed.

Inside we get the intro to Pearson and our two authors. You may be less familiar with Peter Hill unless you are Australian, in which case you may have noticed him monkeying around with your educational system, making sure you suffer through the same reformy GORP as the rest of us. Michael Barber, Educationist, gets his own Wikipedia page. The least you need to know about him is that he's one of the top dogs at Pearson, and that he was a big wheel at McKinsey. He is an A-list reformster.

There are some acknowledgements, and a foreword from Lee Sing Kong. He's a trained horticulturist who somehow ended up as a bigwig at the National Institute of Education in Singapore. His intro: blah blah blah, thanks, you guys, for writing this awesomely important paper.


1. Setting the Scene

Schooling is made out of three parts: 1) curriculum, 2) learning and teaching, 3) assessment. They work together, but we're focusing on the third because it's the "lagging" one and also there's a consensus (somewhere) that it's on the verge of a rebirth. That's what we're going to talk about. We'll cover the reasons and nature of the change, tell governments, schools and leaders what they're supposed to do, and "provide a framework for action to enable change." Because Pearson does not dream small.

We're going to try not to be all technical, and we are going to focus on fifteen- to eighteen-year-olds. And we will particularly focus on assessment used for "certification, selection, accountability, and improving learning and teaching." And to do all that, we're going to have to set the stage.

The Educational Revolution

They take a pile of words to say that in modern times, education has changed less than almost anything else, and that what changes have occurred haven't touched the fundamentals of schooling. So the question-- does the current upheaval in education signal a real revolution? "We have concluded...that this time things are different." Which is, of course, what they always say.

But the authors argue that this real revolution is being pushed by globalization and digital technologies and being pulled by the realization that "the current paradigm is no longer working as well as it should." Both of these factors are of course just natural and spontaneous and not at all trends that Pearson and other corporations have spent a gazillion dollars trying to foster and grow.

Globalization: the Key Driver of Revolutionary Change

Globalization is driven by technology, which is changing the world into the "Knowledge Society." And as God is my witness, they call this "the new world order," because they are not Americans.

In the past, it was possible "to talk with some certainty about the kind of education needed to prepare young people for life and work." The writers are not clear about how far in the past they think this magical time was, but okay. But nowadays, all the jobs are going away. Airport counters, bank tellers, supermarket checkers-- "anything that can be automated is being automated" is what they say next, though they don't follow it with "and if we have our way, that will include teachers." Then they suggest that Europe doesn't have enough STEM grads to fill job needs. So I guess it is possible to talk with some certainty about the kind of education needed to prepare young people for life and work?

They present two educational choices: 1) traditional core of schooling and 2) non-memorizing cross-disciplinary doing-not-knowing learning. Having created this artificial divide, they then declare that they don't think it's actually a conflict.

So what do they want? They want more. More of everything. More cross-curricular skills. More twenty-first century skills. More critical thinky stuff. And more intra-personal skills. Pearson wants your whole brain.

They like the Australian scheme of seven general capabilities:

1) literacy
2) numeracy
3) information and communication technology capability
4) critical and creative thinking
5) personal and social capability
6) ethical understanding
7) intercultural understanding

Which, I have to say, is way better than the Common Core that we are saddled with. Apparently the international benchmarking that our leaders claim to have done did not include any Aussies.

The writers also note that we're talking about changing the concept of what it means to be an educated person. And then they let their old fart flags fly by suggesting that Kids These Days have a more complicated and difficult world to make sense of than anyone else ever on their road to becoming useful citizens. And they segue again into the notion that education should be designed to develop students with character, students with grit and resilience, students who are The Right Kind of People. Not for the last time, we'll note that Pearson is perfectly comfortable laying out exactly what kind of people should be designed to live in the world. If Pearson ever thinks about Big Brother at all, it must be to think about how he thought too small and achieved too little.

And as we pivot toward the next section, we'll note that globalization not only has implications for how people should know and think and feel, but also for how they should be taught (spoiler alert: with technology).

The Performance Ceiling: The Other Driver

Hill and Barber trot out the observation that student achievement has been flat for decades. I'm always curious about this observation. Do critics think that IQs should steadily rise, like stock market averages? At any rate, here come NAEP and PISA results again, leading to the conclusion that the systems currently in place have gotten all they can get out of juvenile brains. I don't see any research cited here to indicate that there are untapped reserves of educatedness in those juvenile brains; we're just going to take those unplumbed depths on faith, assuming that human intelligence and educational achievement have no innate ceiling and that human beings can expect to get infinitely smarter forever, until we're all big-headed geniuses from an Outer Limits episode.

Let's follow that up with some research used to prop up the idea that teachers are responsible for the topping out. The ceiling is made neither of glass nor brick, but of inert teacher bodies, human speedbumps on the road to infinite smartitude. And here comes one of the recurring themes of the paper-- How Teaching Must Be Changed.

Teaching must be transformed from a "largely under-qualified and trained, heavily unionized, bureaucratically controlled semi-profession into a true profession with a distinctive knowledge base, framework for teaching, well-defined common terms for describing and analyzing teaching at a level of specificity and strict control." We'll be returning to this point many times, so let me just shorten it to "teachers must be converted from humans to robots." We'll learn more about this in Part 2.

The authors would also like to scrap the whole age-grade progression in favor of a system that organizes students by ability instead. This is an idea that makes a great deal of sense to anyone who has not worked with fifteen- to eighteen-year-olds. But what they want is a new paradigm that puts individual students at the center of a personalized learning system.

Because nothing would be better at developing the kind of character and personality that Pearson envisions than looking at students in no context other than the context of their academic skills.

Key Elements of the Education Revolution

Our thesis, then, is that the 'push' factor of globalization and the 'pull' factor of the performance ceiling are together giving rise to an education revolution in which certain long-held beliefs and ways of doing things are being repudiated and replaced by a new set of beliefs and practices.

There are six Old Ways that they believe are being tossed onto the trash heap of educational history. Here's how Pearson believes the world has changed.

First, they believe the old way was that students were treated as empty vessels with fixed capacity for learning. That has been replaced by "practices that build on prior learning" and a belief that given sufficient expectations, motivation, time and support, all students can meet high standards. I'm not sure which planet used teaching not built on prior learning, but you will recognize the high expectations part in the "one size fits all" approach of Arne Duncan and his assertion that the only thing holding back students with learning disabilities is their teachers' low expectations.

Second, they believe that curricula that emphasize rote memorization are being replaced by "deep learning of big ideas and organizing principles." Honestly, where is this school that reformsters keep talking about, where rote memorization is still a big thing? Because I don't think I'm in some super-progressive corner of the universe, and nobody has based their instruction on rote memorization here since 1952.

Third, the focus of educational policy shifts from the school to the individual student. I'm sure that this has nothing to do with wanting to do more direct marketing of educational products.

Fourth, we're going to replace the old time-bound school day and year with omni-education. Students will learn in all sorts of places all the time.

Fifth, we're moving from the teacher in a classroom to online instruction with more differentiation, leaving the teacher as an "activator" of various learning partnerships, connections, cybersymbioses, etc. Kind of like Julie on the Love Boat.

Sixth, teachers must be converted from humans to robots.

The revolution has already begun (Pearson should know-- they're paying for it), but education is sluggish. Barber backs this up (for neither the first nor the last time) by quoting himself. In the next chapter, we're going to look at how Pearson thinks teaching and assessment should really work.

Pearson's Renaissance (2): Assessment Driving Instruction

We are reading through the four chapters of Pearson's "Preparing for a Renaissance in Assessment," Peter Hill and Michael Barber's 88-page ode to reform. In the previous chapter, we looked at how the stage had been set for revolution. Next up-- a look at exactly how assessment (and teaching) is currently coming up short.

 

2. Assessment: A Field in Need of Reform

Assessments! Man, they're a mess. Particularly because tests keep getting used for purposes other than those for which they were designed, with plenty of unanticipated side effects. Here we will look at how the world's pre-eminent test salesmen see the various purposes of testing.

Assessment for selection and certification

Like the Regents tests of NY or the SATs. Interesting history of this sort of testing. Graduation exams are controversial all over the world.

Assessment for accountability

Also much fun for everyone. A brief history, including NCLB. They believe these come from a "consensus that outcomes matter; that they should be measured and that schools and systems should be held accountable for them." Neo-liberals like them because they provide data on which to base school choice, which will of course lead to great schools. "Parents believe that they are entitled to know how their child is progressing" and boy, is this one tiresome. Has anybody ever heard of a parent stomping into a school or classroom and saying, "I'm tired of living in the dark. I demand you give my child a standardized test right now, dammit!"

The authors discuss both of these testing purposes as if they sprang up like kudzu. They modestly refrain from including any sentence like "Also, we have spent a gabillion dollars lobbying and advocating and convincing powerful people that they need more testing to make their schools better." This is like reading an objective history of smoking written by The Tobacco Institute declaring, "Man, I don't know why everyone was smoking. I guess they just wanted to be sophisticated and cool."

For these two types of assessment, we face Four Big Challenges of Testing, which are

1) Accommodating the full range of student outcomes

Can they come up with a test that will accurately measure the full range of ability? Hint: remember the standardized test that your top kids finished in fifteen minutes and your low-functioning students spent five hours on? It was failing in this domain.

2) Providing meaningful information on learning outcomes 

Not being able to test the full range in turn leads to reports of results that aren't exactly helpful or useful. There are many paragraphs here, but they boil down to "no matter how you statistically massage clunky data, you don't get golden eggs." Particularly when you try to draw conclusions that the data were never designed to support.

3) Assessing the full range of valued outcomes

Fancy words, but mostly this is about the fact that you can't measure much higher-order skill and thought with a multiple choice question. Turns out there are all sorts of things that can't be cheaply and easily assessed by a standardized bubble test. Who knew?

4) Maintaining the integrity of assessments

People try to game the system. I'm shocked. Shocked!

Assessment for improving learning and teaching

Here's what we're selling next. Formative assessment is awesome. Awesome! Pay attention to this next part, because although Pearson doesn't label it as such, it's Pearson's picture of

What instruction is supposed to look like

To teach the Pearson way, the teacher must--

-- have a really clear picture of what the student is supposed to learn. This should take the form of "validated maps of the sequence in which students typically learn a given curriculum outcome." These are sometimes called "learning progressions" or "critical learning paths." It's the railroad track that every student must travel down.

-- have a process to collect, store and analyze oodles of student data

-- monitor students daily with structured observation and assessment tools that are connected to objectives

-- use all that data to plan what comes next (I've sketched out below what that loop boils down to)
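Pearson doesn't show us the plumbing, but just to make the mechanics concrete, here's a bare-bones sketch of the loop they're describing. This is my own invention, not anything from the paper-- the objective names and the StudentRecord class are made up-- but it captures the shape: a fixed learning progression, every observation tagged and stored against it, and the data (not the teacher) deciding what comes next.

# A minimal sketch, assuming an invented StudentRecord class and made-up
# objective names -- not Pearson's actual system, just the shape of the loop.
from dataclasses import dataclass, field

# The "validated map" / "learning progression": one fixed track for everybody.
LEARNING_PROGRESSION = [
    "identify main idea",
    "cite textual evidence",
    "analyze author's purpose",
    "evaluate competing arguments",
]

@dataclass
class StudentRecord:
    name: str
    # every structured observation gets tagged to an objective and stored
    observations: list = field(default_factory=list)

    def log(self, objective: str, mastered: bool) -> None:
        self.observations.append((objective, mastered))

    def next_objective(self) -> str:
        """Let the data pick the next stop on the railroad track."""
        mastered = {obj for obj, ok in self.observations if ok}
        for obj in LEARNING_PROGRESSION:
            if obj not in mastered:
                return obj
        return "progression complete"

# Daily monitoring: log observations against objectives, then plan from the data.
student = StudentRecord("Chris")
student.log("identify main idea", mastered=True)
student.log("cite textual evidence", mastered=False)
print(student.next_objective())  # -> "cite textual evidence"

Notice what never appears anywhere in that loop: the teacher's judgment about the actual kid in front of her. That's not an oversight; as we're about to see, it's the point.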

Furthermore, Pearson wants you to know

The way you teach now sucks

Teachers mostly don't have the resources to do all of the above. "But without such a systematic, data-driven approach to instruction, teaching remains an imprecise and somewhat idiosyncratic process that is too dependent on the personal intuition and competence of individual teachers."

In other words, we need to teacher-proof classrooms. Teachers are human and variable and not reliable cogs in the educational machine. If we could get them all bound to assessments, that would tie them into a system that would be smooth and elegant. And profitable.

Assessment is the new Missing Link for transforming education into a teacher-proof, school-proof, techno-driven, highly profitable process. In the next chapter, we'll look at how assessment is supposed to be transformed.