Thursday, December 6, 2018

The Disordered Order of Competencies

Competency Based Education (or Proficiency Based Learning, or Outcome Based Education, or Mastery Learning, or whatever new name appears next week) is the up-and-coming flavor of the week in education, even though it is neither new nor well-defined by the people who promote it (or the people who are implementing it in name only). But the basic principle is simple and, really, fairly commonsensical. It offers a different solution to the age-old tension at the heart of education: students should definitely learn a certain core group of competencies, but they only have 180 days in which to learn them.
Traditionally, we resolve the tension by siding with the 180 days, and so some students are pushed through even though they don't necessarily fully master the material. But what if we flipped that? What if we said that every student must fully master one skill or unit of content knowledge before she moved on to the next one, regardless of how much or how little time it took her to do it?
There's an obvious challenge here. What if Chris only takes 30 days to complete the full list of competencies? Worse, what if Pat needs 400 days to master the same full list? But there's another, less obvious issue here.
CBE is often presented with math lessons as the examples. That's handy, because everyone understands math to be sequential (you can't do calculus if you can't add and subtract).
But what about other disciplines? Remember, the sequence is very important, because in a true competency based system, no student can move to the next unit until she has demonstrated competency (or proficiency, or mastery) in the previous unit. So where should the critical roadblocks fall? Should a musician be able to play Bach before they can try Beethoven? Does a physics student have to master potential difference in electricity before she can study centrifugal force? And what about English class? Should students be required to master Romeo and Juliet before they can start working on writing paragraphs? Does it make sense for a teacher to sequence her less engaging units at the beginning of the year when students are still fresh, or at the end of the year so that students who get "stuck" on that unit aren't left quite so far behind?
CBE calls for time to be the variable while learning is the constant, but few districts that have implemented some version of CBE have been brave enough to tell parents, "Summer vacation doesn't start for your child until they've finished all their modules," so there is still a ticking clock behind all of this, meaning that a student who gets stuck on module 3 may never make it to module 20 at all.
Does it make sense to let a student sit like a potted plant for 180 days, then collect a diploma at the end even though they've learned nothing? No, but that's not the only alternative. If Chris can't get past module 3, moving Chris on allows for the possibility that modules 4-26 will actually teach Chris something. CBE assumes that all students can learn everything, so Chris should get there eventually. But eventually can be a long time, and the clock is ticking. If we're going to deny Chris that option, we'd better be absolutely certain that all other modules are hopeless and pointless if Chris didn't master 3. We'd better be comfortable saying, "This stuff is so important that we're going to deny you the chance to learn anything else until you get past this." If you're more comfortable thinking of learning as a many-threaded flexible sequence of interdependent skills and content that can be approached from many different directions in many different combinations, or if you're more comfortable thinking that sometimes it's better to walk away when you're stuck and come back later after you've wrestled with some different challenges, then it's possible that competency based education isn't really for you.
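To make the structural difference concrete, here's a minimal sketch (the module names, prerequisites, and mastery data are all made up for illustration, not drawn from any actual CBE program). It contrasts the strict linear gate, where being stuck on module 3 closes off everything after it, with a prerequisite-graph model, where only the work that genuinely depends on module 3 stays blocked.

    # Minimal illustrative sketch -- hypothetical modules and prerequisites,
    # not any real CBE system's data model.

    def next_available_linear(modules, mastered):
        """Strict CBE gate: only the first unmastered module is open."""
        for m in modules:
            if m not in mastered:
                return [m]      # stuck here; everything after is off-limits
        return []               # all modules mastered

    def next_available_graph(prereqs, mastered):
        """Flexible model: any module whose prerequisites are met is open."""
        return [m for m, reqs in prereqs.items()
                if m not in mastered and reqs <= mastered]

    modules = ["module_1", "module_2", "module_3", "module_4", "module_5"]
    mastered = {"module_1", "module_2"}            # Chris is stuck on module 3

    # Linear gating: being stuck on module 3 blocks modules 4 and 5 as well.
    print(next_available_linear(modules, mastered))    # ['module_3']

    # Prerequisite graph: only module 5 actually depends on module 3.
    prereqs = {
        "module_3": {"module_2"},
        "module_4": {"module_2"},
        "module_5": {"module_3"},
    }
    print(next_available_graph(prereqs, mastered))     # ['module_3', 'module_4']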

Guest Post: Why Tests Are Boring

It's Guest Post day here, and my guest is William Bryant. Bryant is currently an edupreneur with a company focused on helping students get ready for college, but he spent a decade working in test development for the folks at ACT. He has some interesting insights to offer about why tests end up the way they do, which are important to understand not just because of the tests themselves, but because of the tests' effect on curriculum. Read on.

Why Are Standardized Tests So Boring?: A Sensitive Subject 

It’s a guiding principle in educational testing that test questions should not upset test-takers. Much like dinner conversation with in-laws, tests should refrain from referencing religion, or sex, or race, or politics --  anything that might provoke a heightened emotional response that could interfere with students’ ability to give their best effort.  

Attention to “sensitivity” concerns, as they’re known, makes good sense conceptually, but in practice such concerns are responsible for much of why the standardized tests kids take in school are so ridiculously bland and unengaging. The drive to avoid potentially sensitive content constrains test developers to such a degree that we might legitimately question whether the cure is at least as bad as the disease.  

So determined are test-makers to avoid triggering unwanted emotions, they end up compromising the validity of their tests by excluding essential educational content and restricting students’ opportunities to demonstrate the creative and critical thinking skills they’re actually capable of.   


No one knows for certain if the tests are better or worse for being so cautious. There is no research defining sensitivity, no evidence-based catalog of topics to avoid, no study measuring the test-taking effects of “sensitive” content. For all anyone knows, inflaming emotions might actually improve test results -- though few test-makers would risk experimenting to find out.  

No test-maker wants to hear from a teacher or parent that a student was stunned, enraged, offended, or even mildly disconcerted by something they encountered on a test. And in fairness, no test-maker wants to subject a test-taking kid to a hurtful or upsetting experience.  

Since there is no research to guide decisions on sensitivity, the rules test-makers set for themselves are based strictly on their own judgment, and on some sense of industry practice. Inevitably they default to the most conservative positions possible: if a topic might conceivably be construed as sensitive, that’s enough reason to keep it off the test.  

Typically, sensitivity guidelines steer test developers away from content focused on age, disability, gender, race, ethnicity, or sexual orientation. Test-makers also avoid subjects they deem inherently combustible, such as drugs and drinking, death and disease, religion and the occult, sexuality, current politics, race relations, and violence.  

A “bias review” process gets applied in the course of developing passages and questions for testing, to weed out anything that might be offensive or unfair to certain subgroups -- typically African Americans, Asian Americans, Latinos, women, and sometimes Native Americans. The test-maker will send prospective test materials out for review by qualified educators who belong to these subgroups. If a reviewer thinks a test item is problematic, it gets tossed. Though this process is better than nothing, it reflects more butt-covering than enlightenment, putting test-maker and reviewer alike in the awkward position of saying, for instance, “These test items are not unfair to black students. How do we know? We had a black person look at them!”

Judgments on topics not pertaining to identity and cultural difference rest purely with the test-makers, who are as risk-averse as can be. In one example I’m familiar with, a passage about the mythological Greek figure Eurydice was rejected because the story deals with death and the underworld. Think of all the literature and art excluded from testing by that kind of criterion. Think of the impoverished portrait of human achievement and lived experience conveyed to students by such exclusions.

In another case, a passage on ants was rejected because it reported that males get booted out of the colony and die shortly after mating. I’m still not clear on whether the basis for that judgment centered on the reference to insects mating, insects dying, or the prospect of a student projecting insect gender relations onto human relations and being thereby too disturbed to think clearly. Whatever the case, rejecting such a passage on the basis of sensitivity concerns seems downright anti-science.  

I’ve seen a pair of passages from Booker T. Washington and W. E. B. DuBois nixed out of concern for racial sensitivity: you can’t have African Americans arguing with each other on questions of race. Test-makers strive to include people of color in their test content to satisfy requirements for cultural inclusivity. But those people of color cannot be engaged in the experience of being people of color -- which renders the whole impulse toward inclusivity hollow and cynical. Such an over-abundance of caution does more to protect the test-maker than the student.  

The validity of educational assessments that cannot reference slavery, evolution, Neanderthals, extreme weather events, natural life cycles, economic inequality, illness, and other such potentially sensitive topics seems severely compromised. More concerning still is the prospect of such tests driving curriculum. With school funding and teacher accountability riding on standardized test scores, teaching to the test makes irresistibly practical sense in many educational contexts. Thus, if the tests avoid great swaths of history, science, and literature, then so will curriculum.  

The makers of the standardized tests schoolkids encounter argue that they are not interested in censoring educational content, only in recognizing that when students encounter potentially sensitive topics they need the presence of an adult to guide them through. The classroom and the dinner table are places for negotiating challenging subjects, not the testing environment, where kids are under pressure and on their own.  

This rationale should rouse everyone to question why we continue to tolerate such artificial conditions for evaluating student learning. It essentially concedes that either testing will not align with curriculum, or that curriculum will align only with the things test-makers decide are safe enough to put in front of test-taking students. Surely we can recognize in this the severe design flaw that lies at the heart of the testing problem.  

William Bryant, PhD, is founder and CEO of BetterRhetor, a company dedicated to closing the college-readiness gap. He was formerly Director of Writing Assessments at ACT, Inc. Contact him at wbryant@better-rhetor.com or visit www.better-rhetor.com.

Wednesday, December 5, 2018

Real Stupid Artificial Intelligence (Personalized Learning's Missing Link)

Good lord in heaven.

Intel would like a piece of the hot new world of Personalized [sic] Learning, and they think they have an awesome AI to help. And they have concocted a deliberately misleading video to promote it.

In the video, we see a live human teacher in a classroom full of live humans, all of whom are being monitored by some machine algorithms "that detect student emotions and behaviors" and they do it in real time. Now teachers may reply, "Well, yes, I've been doing that for years, using a technique called Using My Eyeballs, My Ears, and My Brain." But apparently teachers should not waste time looking at students when they can instead monitor a screen. And then intervene in "real time," because of course most teachers take hours to figure out that Chris looked confused by the classwork and a few days to respond to that confusion.

Oh, the stupid. It hurts.

First, of course, the machine algorithm (copywriters will be damned if they're going to write anything like "students will be monitored by computers") cannot detect student emotions. They absolutely cannot. They are programmed to use certain observable behaviors as proxies for emotions and engagement. How will Intel measure such things? We'll get there in a second. But we've already seen one version of this sort of mind-reading from NWEA, the MAP test folks, who now claim they can measure engagement simply by measuring how long it takes a student to answer a question on their tests. Because computers are magical!

Turn it around this way-- if you had actually figured out the secret of reading minds and measuring emotions just by looking at people, would your first step be to get in on the educational software biz?

In fact, Intel's algorithm looks suspiciously unimpressive. They're going to measure engagement with three primary inputs-- appearance, interaction, and time to action. A camera will monitor "facial landmarks," shoulders, posture. "Interaction" actually refers to how the student interacts with input devices. And time to action is the same measurement that NWEA is using-- how long do they wait to type. Amazing. And please notice-- this means hours and hours of facial recognition monitoring and recording.
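Just to be clear about how crude that is, here's a hypothetical sketch of what a proxy-based "engagement score" amounts to. None of the thresholds, weights, or signals below come from Intel or NWEA; the point is that whatever the real dashboards do, they are doing arithmetic on observable behaviors like these, not reading emotions.

    # Hypothetical illustration only -- invented weights and thresholds,
    # not Intel's actual system. An "engagement score" built from these
    # proxies is just arithmetic over behaviors.

    def engagement_score(head_pitch_deg, keystrokes_per_min, seconds_since_last_action):
        """Combine three proxy signals into a single 0-1 'engagement' number."""
        # Appearance proxy: a head tilted far down reads as "disengaged."
        appearance = 1.0 if abs(head_pitch_deg) < 20 else 0.3

        # Interaction proxy: more typing or clicking reads as "engaged."
        interaction = min(keystrokes_per_min / 40.0, 1.0)

        # Time-to-action proxy: a long pause reads as confusion or drift,
        # even if the student is just thinking.
        promptness = 1.0 if seconds_since_last_action < 30 else 0.2

        return round(0.4 * appearance + 0.3 * interaction + 0.3 * promptness, 2)

    # A student staring at the ceiling while working a hard problem in her head:
    print(engagement_score(head_pitch_deg=35, keystrokes_per_min=0,
                           seconds_since_last_action=90))    # 0.18 -- flagged as checked out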

Intel is ready to back all this up with some impressive argle-bargle:

Computers in the classroom have traditionally been seen as merely tools that process inputs provided by the users. A convergence of overlapping technology is making new usages possible. Intel and partners are enabling artificial intelligence at the edge, using the computing power of Intel CPUs to support artificial intelligence innovations with deep learning capabilities that can now know users at a higher level – not merely interpreting user commands but also understanding user behaviors and emotions. The new vision is a classroom PC that collects multiple points of input and output, providing analytics in real-time that lets teachers understand student engagement.

This just sounds so much more involved and deep than "the computer will watch how they hold their lips and tell the teacher what the algorithm says that lip position means."

Who is the market for this? I want to meet the teacher who says, "Yeah, looking at the students is just too challenging. I would love to have a software program that looked at them for me so I could just keep my eyes on my screen." Who the hell is that teacher, standing in front of a classroom looking not at her students, but at her tablet? Who is the administrator saying, "Yes, the most pressing need we have is a system to help teachers look at students."

Of course, there are applications I can think of for this tech.

One would be a classroom with too many students for a teacher to actually keep eyes on. Monitoring a class of 150 is hard for a human (though not impossible-- ask a band director) but easy for a bank of cameras linked to some software. Another would be a classroom without an actual teacher in it, but just a technician there to monitor the room.

Here's Intel's hint about how this would play out:

Students in the sessions were asked to work on the same online course work. Instructors, armed with a dashboard providing real-time engagement analytics, were able to detect which students required additional 1:1 instruction. By identifying a student’s emotional state, real-time analytics helped instructors pinpoint moments of confusion, and intervene students who otherwise may have been less inclined or too shy to ask for help. In turn, empowering teachers and parents to foresee at-risk students and provide support faster.

In a real classroom, teachers can gauge student reaction because the teacher is the one the students are reacting to. But if students are busy reacting to algorithm-directed, mass-customized content delivered to their own screens, the teacher is at a disadvantage-- particularly if the teacher is not an actual teacher, but just a tech there to monitor for student compliance and time on task. Having cut the person out of personalized [sic] learning, the tech wizards have to find ways to put some of the functions of a human back, like, say, paying attention to the student to see how she's doing.

The scenario depicted in the video is ridiculous, but then, it's not the actual goal here. This algorithmic software masquerading as artificial intelligence is just another part of the "solution" to the "problem" of getting rid of teachers without losing some of the utility they provide.

Intel, like others, insists on repeating a talking point about how great teachers will be aided by tech, not replaced by it, but there is not a single great teacher on the planet who needs what this software claims to provide, let alone what it can actually do. This is some terrible dystopian junk.

Education, Bad Leadership, and Harvard

We have a problem with bad management, pretending to be leadership, in this country. And it has infected education.

Even in a small area like mine, the symptoms have been plain to see. A major local oil business was put under the leadership of a man who had previously run a soap company and a toy company. He was not good for the company. In my town, the mining machinery company that employed both my father and my brother passed through the hands of several management organizations who installed top brass who knew nothing about the mining industry. It did not end well. In both cases, major employers for the area were gutted, jobs lost, local economy damaged.

I think I left some leadership over by the tea pots.

It was the smaller-scale version of what we've seen with retailers like Sears and Toys R Us-- companies that lost their flexibility and edge because they were run by dopes who didn't know the business-- just how to extract value.

All indications are that our tech giants are just as terribly run, with Facebook repeatedly in the hot seat for a morally tone-deaf, mistake-plagued string of bad behaviors. Duff McDonald took a look at the woman taking the heat, Sheryl Sandberg (she of Lean In fame), and in particular the kind of leaderly education she got from the Harvard Business School, which McDonald marks as ground zero in this bad leadership pandemic.

The truth is, Harvard Business School, like much of the M.B.A. universe in which Sandberg was reared, has always cared less about moral leadership than career advancement and financial performance.

It is an education, McDonald says, that stresses that there are no right answers, and, he suggests, no moral dimension to making these choices. The article includes a story about how Jeff Skilling, a product of Harvard Business School and of McKinsey, the uber-consulting firm, operated in the same style.

One of Skilling’s H.B.S. classmates, John LeBoutillier, who went on to be a U.S. congressman, later recalled a case discussion in which the students were debating what the C.E.O. should do if he discovered that his company was producing a product that could be potentially fatal to consumers. “I’d keep making and selling the product,” he recalled Skilling saying. “My job as a businessman is to be a profit center and to maximize return to the shareholders. It’s the government’s job to step in if a product is dangerous.” Several students nodded in agreement, recalled LeBoutillier. “Neither Jeff nor the others seemed to care about the potential effects of their cavalier attitude. . . . At H.B.S. . . . you were then, and still are, considered soft or a wuss if you dwell on morality or scruples.”

Part and parcel of this approach is a devaluing of expertise. If what matters is value created and harvested, and there are no right answers to situations, then what use is industry-specific expertise? One batch of hot-shot leaders looked at the cyclic nature of mining machinery sales and decided that value wasn't being generated in the down time, and so mandated changes that someone versed in the field would describe as "stupid." I don't want to sidetrack with a discussion of the industry, so imagine this-- someone takes over a strawberry patch in Maine and decides that they aren't selling enough strawberries in January, so they make fixing that a priority. That kind of stupid.

Here's an article from last month's Harvard Business Review. It says the fundamentals of leadership haven't changed. Here they are in all their jargonified glory:

1. uniting people around an exciting, aspirational vision;
2. building a strategy for achieving the vision by making choices about what to do and what not to do;
3. attracting and developing the best possible talent to implement the strategy;
4. relentlessly focusing on results in the context of the strategy;
5. creating ongoing innovation that will help reinvent the vision and strategy; and
6. “leading yourself”: knowing and growing yourself so that you can most effectively lead others and carry out these practices.

Notice what's not on the list?

Knowing what the hell you're talking about. Knowing enough to know whether or not your "exciting, aspirational vision" is a bunch of stupid bullshit.

And for those of us in education who have been reading and listening to reformsters for the past decade or two, don't these sound familiar? Have a bold vision, regardless of whether or not you know what the hell you're talking about. Attract the best possible talent, so get the power to hire and fire at will. Relentlessly focus on the results, even if you have to make up a toxic bullshit way of measuring them. Innovate all the time, because shiny. The other thing missing from the list? Anything about working productively and effectively with other human beings.

And most of all, pick leaders based on their leaderly superiority and never based on actual knowledge about education. Hell, a year or two in a classroom should be more than enough to make you an education visionary. But field-specific knowledge is completely unnecessary, because there are no right or wrong answers-- just your answers, your vision.

The really sad thing is that it doesn't have to be this way. I used to think W. Edwards Deming was dry and cold, but he's a big woolly-hearted hippie compared to modern captains of industry. Build trust. Take care of your people. Do what's right. Know what the hell you're talking about. It's not that hard!

But no-- education reform had to be infected with the same Harvard Business bullshit that is being dropped all over the china shop of the American economy. The big question is how much more breakage has to occur before we chase the ivy-covered bull out of here.

FL: What Competition Gets You

Florida is supposed to be the Great Exemplar of ed reform. Charters, vouchers, ESAs-- every brand of reform runs free and unfettered under the bright Florida sun.

There may be no state that has more effectively set loose the Invisible Hand of market forces and competition. And what does that get you?

Well, it gets you unqualified scam artists like Eagle Arts Academy charter school hoovering up tax dollars for their owners. You get thieves like the recently convicted Marcus May, who stole over five million dollars of taxpayer money to finance his glitzy lifestyle. You get legislators who write the laws from which they themselves profit. You get tax dollars being spent just to advertise. You get schools appearing and disappearing and public schools barely surviving as their financial support is stripped. You get schools focused on their A-F grade and the test-centered culture that turns schools upside down-- if the school culture is not strong enough and resistant enough, they stop worrying about how to serve students by meeting their needs and start worrying about how to get students to serve the school by generating A-worthy data. You get schools that bar six-year-olds for wearing dreadlocks, because they have to protect their brand and make it clear to their potential customers exactly what kind of students aren't tolerated there.

It creates an atmosphere of mistrust and fear. And mistrust and fear do not make people behave better.

I'm plenty hard on charter schools, but the most massive, terrible failure of a school belongs to Marjory Stoneman Douglas High School and the public school system of which it is a part.

It's not just that they dropped the ball with a student who went on to murder seventeen members of that school community. That they dropped it is self-evident, both in how they ignored warning signs and in how they shuffled Cruz around. We may never know exactly how the system failed; many teachers read the list of warning signs and feel a chill thinking of how many of those signs they've seen in students of their own. The task that fell to their school was not an easy one, but seventeen people are dead-- there is no question that the system failed, but we can legitimately question whether any school system could have saved Nikolas Cruz or his victims.

What is absolutely inexcusable is how the school district has handled everything since the murders.

As this report from the Sun-Sentinel shows, the Broward County district has tried to "hide, deny, spin, threaten" its way forward. They have dropped tons of taxpayer money on lawyers to fight information requests, PR firms to massage the message, and consultants to tell administrators and staff to keep their mouths shut. They have put forth a huge effort to keep their own hands clean of any blame in this tragic murder of seventeen innocents.

The handling of Cruz as a student was obviously flawed. The handling of his murderous rampage has been an inexcusable, indefensible disaster, a display of epic wrong-headedness, a massive demonstration of how badly a public school district can lose its way.

I want to be clear on this point-- I don't think anything excuses what Broward district officials have done. Nothing.

I don't think that under better circumstances the district officials would have handled this much better. You don't make this kind of disastrously bad response unless you've already long since lost the thread. But I have to wonder how much Florida's atmosphere of distrust and fear contributes. I have to wonder how badly it breaks down the management of a district when administrators must be most concerned about the competition, about how getting caught in a single misstep could leave them in dire circumstances.

After all, isn't this what competition also gets you-- an atmosphere in which people don't dare to show vulnerability or admit a mistake, because one false move and the competition will Get You. And so seriously messed-up students aren't a call for extra help and support for the child, but instead represent a potential liability to the district. It's not "how do we help this child" but "how do we manage this liability." And if, God forbid, the situation blows up, you don't dare say, "We screwed up and we want to sit down with everyone and figure out what went wrong so we can do better." Instead, you stonewall and stall and defend so that your mistakes don't cause you to lose a step in the competition.

Institutions are prone to self-preservation anyway, even in the best of times. Add an atmosphere of zero-sum, dog-eat-dog competition, and the institution's Number One Priority becomes not the students or the taxpayers it serves, but its own survival.

It's no excuse. Professional educators should be better, should shrug off the invisible hand of competition and stick to doing what's right. We should always expect people to do the right thing. But we should also create policy that pushes them toward the right thing-- and competition pushes schools away from it. We can do better.