Saturday, March 26, 2016

Pearson Loves Professor HAL

Here's a spoiler alert: In their recent "report" entitled "Intelligence Unleashed: An Argument for AI in Education," Pearson does not include the argument "because it would be way cheaper and easier than dealing with humans." But they have many other arguments, and none of them are as convincing as that one.

The report is sixty-ish pages, some with pretty graphics, and I've read it so that you don't have to.

Sir Michael Says Hi

No Pearson paper, not even one from their "Open Ideas" series (because I expect that "open" sounds better than "wildly speculative"), is complete without Sir Michael Barber, the Grand High Chieftain of Pearson, stopping by to pontificate. Here he wants to note again that education hasn't changed nearly enough in the last thirty years, and so we should demand 1) to be "empowered" by an understanding of Artificial Intelligence in Education (AIEd), 2) a clear explanation of how AI can connect to "the core of education," and 3) concrete options for making AIEd real.

In other words, what we need is a degree of specificity about AIEd that allows us to assess, invest, plan, deliver, and test. 

So that's where we're headed.

Introduction 

I'm not going to spend too much time here previewing the journey that I'm about to take you on, but Pearson is excited about AI exploding computery stuff on the scale of cell phone apps, allowing an all-time awesome level of data collection and collation (because at Pearson, they're sure that if they know everything, they can control everything).

Their larger argument is that as AI takes over the workforce, humans have to get still smarter. They suggest we become "metaphorical judo masters" and use the power of AI to build humans who kick AI's ass.

They also want us to know that this paper is spawned by a frustration that super-duper AIEd ideas never make it out of the lab. This is apparently not because these ideas actually suck when applied to actual educating, but because the funding system of such research is siloed and "shies away from dealing with the essential messiness of educational contexts." Oh, hey. Maybe it is because of the actual ed suckiness after all.

What Is AI?

This is actually a not-terrible two-page explanation of AI, or at a minimum, a recognition that nobody can explain exactly what AI is. One expert explains that lots of AI work is in apps, but once AI becomes useful, we don't call it AI; we just call it a useful algorithm.

So one definition of AI is that it's a useful algorithm that human beings might mistake for intelligent sentience. But it isn't sentient. Weirdly enough, AI is coming at the weaknesses of competency-based education from an entirely different angle-- if a series of tasks can be designed that makes it look like a computer "knows" something or "understands" something, is that intelligence? Are we trying to create artificial intelligence, or an intelligence simulation?
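
For a concrete picture of what "intelligence simulation" looks like, think ELIZA: canned pattern-matching that can pass for understanding. Here's a minimal sketch (my own illustration in Python, with made-up rules-- nothing from the report):

```python
# A minimal sketch of "intelligence simulation" (my illustration, not the
# report's): ELIZA-style canned pattern-matching. It can sound like it
# understands you while knowing exactly nothing.
import re

# Each rule maps a pattern in the user's input to a templated reply.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i think (.*)", re.I), "What makes you think {0}?"),
    (re.compile(r"(.*)\?", re.I), "What do you think?"),
]

def respond(text):
    """Return a plausible-sounding reply by pattern-matching, not by knowing."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("I feel confused about fractions"))
# -> Why do you feel confused about fractions?
```

Swap in a few thousand rules and some statistics and you can fool a lot of people, but there's still nobody home.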

I don't want to wander too far down this rabbit hole, but I'd argue that these authors are going after the simulation, and that actual intelligence that learns and grows is much farther out of reach.

This is where I'll bring up Tay, a cool idea that Microsoft had for playing with AI. Tay was a Twitter bot who was supposed to go online, simulate a teen girl, and learn new expressions while growing her language skills. Within twenty-four hours, she had "learned" to be a Hitler-loving, racist, obscene pig of a tweeter. Oops.

Real AI is hard. Learning to use language is hard, and trying to teach computers to use language "realistically" is a field that often overlaps with AI. But while there are limited successes here and there, most of the success has been with simulating the appearance of intelligence, not actually creating it. They'll later rephrase their definition like this:

AI involves computer software that has been programmed to interact with the world in ways normally requiring human intelligence. This means that AI depends both on knowledge about the world, and algorithms to intelligently process that knowledge.

So, fake intelligence by using intelligence. This is how AI crashes into the larger debate about what intelligence is-- does intelligence rest simply in outward and observable behaviors, is it a series of processing rules, or is it more complicated in ways we don't fully understand yet?

AI In Education

At the heart of AIEd is the scientific goal to “make computationally precise and explicit forms of educational, psychological and social knowledge which are often left implicit.” In other words, in addition to being the engine behind much ‘smart’ ed tech, AIEd is also a powerful tool to open up what is sometimes called the ‘black box of learning,’ giving us deeper, and more fine-grained understandings of how learning actually happens (for example, how it is influenced by the learner’s socio-economic and physical context, or by technology).

We are going to try to learn how humans learn by writing computer software. K.

AIEd systems are organized around three models of education-- the pedagogical model, the domain (or content) model, and the student model. In other words, should the software focus on how to teach the material, what material is to be taught, or what knowledge the student has and what she "needs" next?

So the software "asks" the student a question, the student responds, the data is analyzed, and the software decides what question to ask the student next. Oh, sorry-- the data is subject to "deep" analysis. Also, we'd like to whip up some measures of social and emotional stuff, with a side order of meta-cognition. This is our supposedly adaptive model of education.
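
Stripped of the buzzwords, that loop is simple enough to sketch in code. Here's a minimal, hypothetical version (again my own Python illustration, not anything from Pearson or any real product) of the three models driving the ask-analyze-ask-again cycle:

```python
# A bare-bones sketch of the three-model loop: the domain model holds the
# material, the student model holds what the software believes the student
# knows, and the pedagogical model decides what to ask next. All names and
# numbers here are hypothetical.

# Domain model: the content, as a bank of questions tagged by skill.
DOMAIN = [
    {"skill": "fractions", "text": "What is 1/2 + 1/4?", "answer": "3/4"},
    {"skill": "fractions", "text": "What is 2/3 of 9?", "answer": "6"},
    {"skill": "decimals", "text": "What is 0.5 + 0.25?", "answer": "0.75"},
]

# Student model: the software's running estimate of mastery, per skill.
student = {"fractions": 0.5, "decimals": 0.5}

def next_question(student_model):
    """Pedagogical model: aim at the skill with the lowest estimated mastery."""
    weakest = min(student_model, key=student_model.get)
    return next(q for q in DOMAIN if q["skill"] == weakest)

def record_response(question, response, student_model):
    """The 'deep analysis': compare to the stored answer, nudge the estimate."""
    correct = response.strip() == question["answer"]
    student_model[question["skill"]] += 0.1 if correct else -0.1
    return correct

# One turn of the loop: ask, analyze, decide what to ask next.
q = next_question(student)
record_response(q, "3/4", student)
```

Everything "adaptive" about it lives in that one min() call; the rest is a question bank and a scorecard.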

I can already see a huge problem here, but let me hold my tongue and see if it comes up in the pages ahead.

What Can AIEd Offer Right Now?

A multitude of AIEd-driven applications are already in use in our schools and universities. Many incorporate AIEd and educational data mining (EDM) techniques to ‘track’ the behaviours of students.

Well, that's great to know. Data mining is a feature, not an add-on, of such a system. There's no way AI can "figure out" how the student is doing and what the student should do next without collecting a ton of data-- just like the live human meat widgets currently teaching classrooms around the globe. Which raises, not for the first time, the question of why we need AI when we have actual HI (human intelligence) already available. If you have plenty of apples available to eat, why exactly would you devote time and energy to methods for making pears seem kind of like apples? But moving on...

There are three things that AIEd can do right now, today.

1) Provide a tutor. "One to one human tutoring has long been thought to be the most effective approach to teaching and learning (since at least Aristotle’s tutoring of Alexander the Great!)" but there aren't enough humans to go around, which-- wait. Don't most small humans come equipped with one or two humans directly involved in bringing the small human into existence in the first place? Granted, not all small humans have them handy, and not all of them are going to be great tutors, but-- not enough humans to go around??!!

I presume the authors would consider a program like, say, Study Island, an AI tutor. The problem with these programs is that they kind of suck, and end up mostly being programs that tutor students in how to outthink the guys who wrote the software. Because an AI tutor is not an intelligence system-- it's a way of taking one tutor (the software writer) and multiplying his effect. Kind of like a book.

2) Intelligent support for collaborative learning. By crunching all that data, the software can tell you who should be in a group together. They might moderate the collaboration, or they might participate as an "expert voice." Kind of like a book.

3) Intelligent virtual reality to support learning in authentic environments. In other words, games. Another rabbit hole I don't want to go down now. I am intrigued by their framing of this technology as a safe way of role-playing for the student.

The Next Phase of AIEd

Pearson figures that the growing market for AI-ish stuff will drive more R&D in the field. Fair enough.

They also assert that AIEd will help teach 21st century skills, and they go to the list of skills from the World Economic Forum. The WEF is a global chamber of commerce, famous for their Davos convention. You will be unsurprised to learn that Pearson is a "strategic partner associate" which means they are "actively involved in the Forum’s mission and shape the agenda at the industry, regional and global level." So the list of skills that they call "common wisdom" are the product of a global corporate activist and advocacy group to which they belong. Just saying.

The writers feel there are two challenges for pushing these skills. First, "We must develop reliable and valid indicators that will allow us to track learner progress on all the skills and capabilities needed to thrive in the current century – at the level of the individual, the district, and the country." So, data mining the crap out of everything. Second, "We need a better understanding of the most effective teaching approaches and the learning contexts that allow these skills to be developed." So, we're not really sure exactly how to teach these skills, but we know it surely involves collecting all the data. All of it.

AIEd will obviously help with the massive data mining, and the writers believe that will unlock the keys to learning. We have heard this before-- go back to the 2012 DC speech from Knewton in which a Pearson subaltern unironically explains that they hope to be able to tell you what to eat for breakfast on the day you have a math test.

Also, AIEd will help with the Renaissance in Assessment, a Pearson plan so audacious and awful that it took me six blog posts to work through it all (you can start here or here, if you're not doing anything else today).

The writers make some more claims for the future of AIEd. I'm only going to touch on a few.

AIEd will mark the end of stop-and-test. This is the CBE dream-- all assessment all the time, and Professor HAL will be there to help. AIEd will also use all the shiniest new research about psychology and learning and stuff. Oh, and this. I'd better give it its own heading.

AIEd Will Provide a Lifelong Learning Partner

Your own personal Professor HAL, staying right with you through life. It can challenge you with questions, bring in experts and expert materials, and even prod you to learn by having you teach it. You will no longer need a human teacher; Professor HAL will be with you every step of the way.

Ethical Concerns

There's an actual full sidebar about this in the report, and it considers some good questions. What if something goes wrong, like, say, a stock market crash caused by computer trading, or a computerized car getting into a wreck? And what if the AI comes under an unsavory influence, like a hacker (Pearson wrote this before Tay started tweeting her love of f@#king Hitler)? Who would be responsible for any of this, asks the report, which is a slightly different question than "How do we keep this from happening?" or "Does this show an ethical void in the heart of this enterprise?"

They also acknowledge the problems of data privacy, not only from the standpoint of the data-generating meat widgets, but from the standpoint of intellectual property rights. Because selling your child's data to other corporations is one thing, but letting someone pass around a copyrighted piece of Pearson intellectual revenue-generating property-- that's a real problem.

Also, this could change the way people act. Users might be tempted to have a relationship with their learning companion, and the writers acknowledge that this is very.... something. Squicky.

Also, there's a sidebar about AIEd in the physical world, so that you get squicked out about how, oh yeah, they'll keep track of you physically as well-- how you look, how you move. Also, how you feel. And I'm not sure exactly what these paragraphs right here mean, but I think Pearson is promising us all holodecks. Cool!

The Next Level: AIEd and the Great Unsolved Problems of Education

So there are all these ongoing ed challenges that AIEd will totally help with. Let's run the list.

The Achievement Gap. If we just bathe children in the AI all the time, starting at birth, we can keep the poor ones from falling behind. We could put them in some sort of facility-- call it something benign, like a creche, maybe. Incidentally, I'm thinking it's time for all of us to get out our copies of Brave New World.

Bettering teachers. Training them better, keeping them longer, getting them into high-needs schools. Professional development is important, but expensive. So let's just give every teacher their own Learning Companion. With your own personal Professor HAL at the ready, you'll be kept as informed and well-educated as you were back when you were a hatchling in the creche. Oddly enough, this bullet point almost sidesteps the obvious implication of the rest of the work, which is, what exactly do we need human teachers for, again?

Although the most effective implementations of AIEd will deploy it alongside the expertise and empathy that is peculiarly human, in some instances this simply will not be possible, at least in the short-term. This means we will need to rely on technology to make available high-quality learning experiences to places where this is currently lacking.

Bringing It All Together for a Big Finish

Throughout this paper we have set out the AIEd pieces that could-- with further development and smart real-world testing-- offer a proportionate response to the new innovation imperative in education. Simply stated, the imperative is this: as humans live and work alongside increasingly smart machines, our education systems will need to achieve at levels that none have managed to date.

So, trying to read through the rhetorical fog that is a Pearson paper, this may mean "we must use education to help lift the masses out of their big ever-deepening hole" or "we must get the meat widgets tooled up so that the corporations of tomorrow will be able to find enough useful drones all over the globe." It's possible that these guys don't see any difference between the two.

They offer a chart for fifteen years from now, which does helpfully note that humans will continue to excel at social skills, and the ability to get along and empathize will "continue to be valued" (though it does not say by whom).

Recommendations (theirs and mine)

So what exactly does Pearson think people should be doing to bring this brave new world to fruition? They do have some thoughts-- here are some of them, and my response.

AIEd has dealt mostly with highly structured learning like math and physics. It will have to get more ambitious, and not be seduced by technology, but focus on the learning. 

Fifty Shades of Grey aside, when someone chains you to a wall in the basement, that is not seduction. AIEd has stuck to the simple stuff because that is all the technology can handle. What software can do very well is run a fairly complicated decision tree, where each multiple-choice answer leads to a different set of follow-up questions. But the more complex the response, the more useless the software.

This is best highlighted by the testing industry's complete and utter failure at developing reliable, valid, and just plain not-ridiculous essay grading software. PARCC is just about to try it again, and it will fail-- again. Les Perelman is one of my heroes, a man who has humiliated essay-scoring software again and again and again, because, as my computer prof told me decades ago, computers are dumb. (For more on this, read here, here, and here).

The classic expression of computer stupidity is GIGO-- garbage in, garbage out. A computer learning program is only as good as the questions and answers that have been programmed into it. Because computers cannot assess complex answers, any so-called learning software is really just a huge, complex question bank. If those questions suck, the software sucks.
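
To see why "huge, complex question bank" is the right description, here's a toy version of the decision tree such software actually runs (hypothetical questions and names, my illustration only):

```python
# A toy version of the decision tree behind "adaptive" tutoring software
# (hypothetical questions; my illustration only). Every canned answer leads
# to a canned follow-up; the real tutor is whoever wrote this table.
TREE = {
    "q1": {
        "text": "What is 3/4 as a decimal?",
        "branches": {"0.75": "q2",    # right answer: move on
                     "0.34": "q1a"},  # anticipated wrong answer: remediate
    },
    "q1a": {
        "text": "A fraction is a division. What is 3 divided by 4?",
        "branches": {"0.75": "q2"},
    },
    "q2": {
        "text": "What is 1/8 as a decimal?",
        "branches": {"0.125": None},  # None: end of this path
    },
}

def run(tree, answers):
    """Walk the tree for a scripted list of answers."""
    node = "q1"
    while node is not None and answers:
        response = answers.pop(0)
        # Garbage in, garbage out: a response nobody anticipated returns
        # None from .get(), and the "lesson" simply dead-ends.
        node = tree[node]["branches"].get(response)

run(TREE, ["0.34", "0.75", "0.125"])  # wrong, remediated, then done
```

Note that nothing here evaluates anything; it just looks answers up. A response the author never anticipated goes nowhere-- GIGO in miniature.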

AIEd technology cannot be a monolithic creation. Funding and structure need to favor a multitude of individual components. 

Well, yes and no. I agree that creating one monolithic AIEd entity that can handle any possible subject or level is a fool's game. But a quick look around the world of computer tech tells us that many-pronged tech isn't really a thing. Do you run Windows or Apple OS (you in the back, trying to plug Linux, just hush-- nobody cares)?

Whether we're talking computers, phones, or game systems, there are many options for individual apps/games/software/whatever, but only a very few platforms on which to run them. AIEd may, and should, come from many many sources, but at the end of the day, some monolith is going to be the platform. And platform wars can be long and ugly (Sony was scheduled to make the last Betamax videotape this month). How will schools, with already-limited budgets, handle the AIEd platform wars? Who can afford to drop a few million on tomorrow's AIEd Sega Dreamcast?

AIEd system change will be a bitch. Better make sure to include teachers, students, and parents. Also develop some standards for addressing the ethics of handling all that data. 

Nice thoughts, all. But of course teachers, parents and students have not been involved in any meaningful way in any of the education reforms of the past fifteen years. If they are admitted at all, the price of admission is to be a compliant agreenik who doesn't say mean things like, "This idea is crap," and the admission ticket generally gets you a chance to say nice things about work that has already been completed.


It's the eternal puzzle of educational expertise. If a bunch of teachers at my school called up a computer lab and said, "Clear us some space. We are going to tell you how to run your AI development project," not a human being on the planet would take us seriously. However, any human being on the planet with access to power, money or technology can barge into our classrooms and tell us how we should be doing our jobs. It's not that I think that no teachers ever need to hear advice from anyone ever. But why is teaching the only professional sector in the world that everybody feels qualified to "reform"?

Also, when developing the ethics of handling data, make sure that you cover the ethics of having and collecting it in the first place (how much private information are you entitled to take, and why?) as well as the questions of keeping it safe (much recent history suggests you can't).

That's It 

I'm still not sure that this report manages to distinguish between Artificial Intelligence and Intelligence Simulation, but either way, it imagines a future premised on many factors not in evidence (in the future, unicorns will be carried to their cloud cities by flying pigs). The black box of learning has not been opened. The ethics of data collection, storage, and use have not been settled (or even particularly discussed). The ability of software to handle anything but the simplest multiple-choice questioning still doesn't exist. And the educational value of a CBE-personalized system has not yet been proven. Pearson's crystal ball needs a cleaning.

1 comment:

  1. Your discussion of AI and intelligence simulation stirred me to think about actual knowledge and knowledge simulation. One worry that I have about the current emphasis on multiple-choice standardized testing and discussions about CBE is that students might just be simulating knowledge.

    I recall Richard Feynman's discussion of this sort of thing in "Surely You're Joking, Mr. Feynman!" He taught physics in Brazil for a year as a visiting professor. The physics students there seemed to know a lot about physics. They knew the particular facts and the equations. But when Feynman pressed them as individuals or in groups, it turned out that they had simulated knowledge. They really didn't know physics. They couldn't really apply this "knowledge" or even imagine how to discover new knowledge in physics. He recounts how he was almost fooled by one young man, but after repeated questioning he realized that even this young man really didn't know physics.

    In a farewell speech to an audience of the physics community in Brazil and education officials, Feynman recommended completely changing how physics was taught in Brazil.
