Sunday, May 14, 2017

Artificial Stupidity

Facebook absolutely insists on showing me "top stories." Every time I open the Facebook page, I have to manually switch back to "most recent," because even though the Facebook Artificial Smartitude Software thinks it knows what I most want to see, it can't figure out that I want to see the "most recent" feed. Mostly because the Facebook software is consistently wrong about what I will consider Top News.



Meanwhile, my Outlook mail software has decided that I should now have the option of Focused, a view that sorts my emails according to... well, that's not clear, but it seems to think it is "helping" me. It is not. The Artificial Smartitude Software seems to work roughly as well as rolling dice to decide the ranking of each e-mail. This is not helpful.

I pay attention to these sorts of features because we can't afford to ignore new advances in artificial intelligence: a whole lot of people think that AI is the future of education, that computerized artificial intelligence will do a super-duper job directing the education of tiny humans, eclipsing the lame performance of old-school meat-based biological intelligence.


Take, for instance, this recent profile in Smithsonian, which is basically a puff piece to promote a meat-based biological intelligence unit named Joseph Qualls. Now-Dr. Qualls (because getting meat-based biological intelligence degrees is apparently not a waste of time just yet) started his AI business back when he was a lonely BS just out of college, and he has grown the business into... well, I'm not sure, but apparently he used AI to help train soldiers in Afghanistan, among other things.

To his credit, Qualls in his interview correctly notes one of the hugest issues of AI in education or anywhere else-- What if the AI's wrong? Yes, that's a big question. It's an "Other than that, how did you like the play, Mrs. Lincoln" question. It's such a big question that Qualls notes that much AI research is not driven by academics, but by lawyers who want to know how the decisions are made so they can avoid lawsuits. So, hey, it's super-encouraging to know that lawyers are so involved in developing AI. Yikes.

Still, Qualls sees this rather huge question as just a bump in the road, particularly for education.

With education, what’s going to happen, you’re still going to have monitoring. You’re going to have teachers who will be monitoring data. They’ll become more data scientists who understand the AI and can evaluate the data about how students are learning.

You’re going to need someone who’s an expert watching the data and watching the student. There will need to be a human in the loop for some time, maybe for at least 20 years. But I could be completely wrong. Technology moves so fast these days.

So neither the sage on the stage nor the guide on the side, but more of a stalker in the closet, watching the data run across the screen while also keeping an eye on the students, and checking everyone's work in the process. But only for the next couple of decades or so; after that, we'll be able to get the meat widgets completely out of education. College freshmen take note-- it's not too late to change your major to something other than education.

Where Qualls' confidence comes from is unclear, since a few paragraphs earlier, he said this:

One of the great engineering challenges now is reverse engineering the human brain. You get in and then you see just how complex the brain is. As engineers, when we look at the mechanics of it, we start to realize that there is no AI system that even comes close to the human brain and what it can do.

We’re looking at the human brain and asking why humans make the decisions they do to see if that can help us understand why AI makes a decision based on a probability matrix. And we’re still no closer.

I took my first computer programming course in 1978; our professor was exceedingly clear on one point-- computers are stupid. They are fast, and they are tireless, and if you tell them to do something stupid or wrong, they will do it swiftly and relentlessly, but they will not correct for your stupid mistake. They do not think; they only do what they're told, as long as you can translate what you want into a series of things they can do.
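
Just to make his point concrete, here is a minimal sketch in Python -- the grading rule, the scores, and the names are all invented for illustration, not taken from any actual product. One wrong character in the rule, and the machine will apply that wrong rule to every kid, forever, without ever wondering whether it makes sense.

    # A deliberately buggy "grading" rule. The intent is that 90 and above
    # earns an A, but the > (instead of >=) quietly fails anyone at exactly 90.
    def assign_grade(score):
        if score > 90:
            return "A"
        return "F"

    # The computer applies the rule swiftly, relentlessly, and wrongly.
    for name, score in [("Ava", 95), ("Ben", 90), ("Cal", 90)]:
        print(name, assign_grade(score))   # Ben and Cal fail, every single time

The machine never asks whether the rule makes sense; that part was always the programmer's job.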

Much of what is pitched as AI is really the same old kind of stupid, but AI does not simply mean "anything done by a computer program." When a personalized learning advocate pitches an AI-driven program, they're just pitching a huge (or not so huge) library of exercises curated by a piece of software with a complex (or not so complex) set of rules for sequencing those exercises. There is nothing intelligent about it-- it is just as stupid as stupid can be, but implemented by a stupid machine that is swift and relentless. But that software-driven machine is the opposite of intelligence. It is the bureaucratic clerk who insists that you can't have the material signed out because you left one line on the 188R-23/Q form unfilled.
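
If you want to see just how un-magical that is, here is a hypothetical sketch in Python of roughly what such an "engine" amounts to -- the exercise names and score thresholds are made up for this post, not taken from any actual vendor:

    # A made-up exercise library, grouped by difficulty.
    EXERCISES = {
        "easy":   ["add single digits", "count by twos"],
        "medium": ["add two-digit numbers", "one-step word problem"],
        "hard":   ["multi-step word problem"],
    }

    # The entire "intelligence": bump the difficulty up or down on a threshold.
    def next_exercise(last_score, level):
        levels = ["easy", "medium", "hard"]
        i = levels.index(level)
        if last_score >= 80 and i < len(levels) - 1:
            i += 1
        elif last_score < 50 and i > 0:
            i -= 1
        return levels[i], EXERCISES[levels[i]][0]

    print(next_exercise(85, "easy"))   # ('medium', 'add two-digit numbers')

Real products dress this up with more exercises and more thresholds, but the underlying move is the same: follow whatever rules somebody wrote, swiftly and relentlessly.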

There are huge issues in directing the education of a tiny human; that is why, historically, we have been careful about who gets to do it. And the issues are not just those of intelligence, but of morals and ethics as well.

We can see these issues being played out on other AI fronts. One of the huge hurdles for self-driven cars is moral questions-- sooner or later a self-driven car is going to have to decide who lives and who dies. And as an AP story noted just last week, self-driven car software also struggles with how to interact with meat-based biological intelligence units. The car software wants a set of rules to follow all the time, every time, but meat units have their own sets of exceptions and rules for special occasions, etc. etc. etc. But to understand and measure and deal with and employ all those "rules," one has to have actual intelligence, not simply a slavish, tireless devotion to whatever rules someone programmed into you. And that remains a huge challenge for Artificial So-called-intelligence. Here are two quotes from the AP story:

"There's an endless list of these cases where we as humans know the context, we know when to bend the rules and when to break the rules," says Raj Rajkumar, a computer engineering professor at Carnegie Mellon University who leads the school's autonomous car research.

"Driverless cars are very rule-based, and they don't understand social graces," says Missy Cummings, director of Duke University's Humans and Autonomy Lab.

In other words, computers are stupid.

It makes sense that Personalized Learning mavens would champion the Artificial Stupidity approach to education, because what they call education is really training, and training of the simplest kind, in which a complicated task is broken down into a series of simpler tasks and then executed in order without any attention to what sort of whole they add up to. Software-directed education is simply that exact same principle applied to the "task" of teaching. And like the self-driven car fans who talk about how we need to change the roads and the markings and the other cars on the highways so that the self-driven car can work, software-driven education ends up being a "This will work well if you change the task to what we can do instead of what you want to do." You may think you can't build a house with this stapler-- but what if you built the house out of paper! Huh?! Don't tell me you're so stuck in a rut with the status quo that you can't see how awesome it would be!

So, they don't really understand learning, they don't really understand teaching, and they don't really understand what computers can and cannot do-- outside of that, AI-directed Personalized Learning Fans are totally on to something.

And still, nobody is answering the question-- what if the AI is wrong?

What if, as Qualls posits, an AI decides that this budding artist is really supposed to be a math whiz? What if the AI completely mistakes what this tiny human is interested in or motivated by? What if the AI doesn't understand enough about the tiny human's emotional state and psychological well-being to avoid assigning tasks that are damaging? What if the AI encounters a child who is a smarter and more divergent thinker than the meat widget who wrote the software in the first place? What if we decide that we want education to involve deeper understanding and more complicated tasks, but we're stuck with AI that is unable to assess or respond intelligently to any sort of written expression (because, despite corporate assurances to the contrary, the industry has not produced essay-assessment software that is worth a dime, because assessing writing is hard, and computers are stupid)?

And what if it turns out (and how else could it turn out) that the AI is unable to establish the kind of personal relationship with a student that is central to education, particularly the education of tiny humans?

And what if, as is no doubt the case with my Top Stories on Facebook, the AI is also tasked with following someone else's agenda, like an advertiser's or even a political leader's?

All around us there are examples, demonstrations from the internet to the interstate, of how hugely AI is not up to the task. True-believing technocrats keep insisting that any day now we will have the software that can accomplish all these magical things, and yet here I sit, still rebooting some piece of equipment in my house on an almost-daily basis because my computer and my router and my ISP and various other devices are all too stupid to talk to each other consistently. My students don't know programming or the intricacies of the software they use, but they all know that Step #1 with a computer problem is to reboot your device, because that is the one computer activity that they all practice on a very regular basis.

Maybe someday actual AI will be a Thing, and then we can have a whole other conversation about what the virtues of replacing meat-based biological intelligence with machine-based intelligence may or may not be. But we are almost there in the sense that the moon landings put us one step closer to visiting Alpha Centauri. In the meantime, beware of vendors bearing AI, because what they are selling is a stupid, swift, relentless worker who is really not up to the task.

7 comments:

  1. I hate any kind of auto-correct because I know what I want to say and my spelling is good. My son's isn't, but the computer program can't tell if he writes "expect" when he means "except".

    My son's Spanish course in college used an online "workbook" that was horrible. It would mark an answer wrong if you didn't put it in the exact format it wanted, and some of its answers were wrong anyway. The teacher hated it too, because it didn't save her any time; she had to go over everything anyway. And we all know that computer programs that grade compositions don't care if it's gobbledygook as long as it follows the right formula. And computer translators from one language to another aren't very good because they don't understand semantics or context.

    My new computer changes the wallpaper every so often and asks me if I like it, but it's been months and the algorithms are just beginning to get a little better at predicting what I like. The AI people just want CBE to use students as guinea pigs to try to perfect their software.

    The other thing is that there's a lot of evidence that the teach-to-the-test mentality is creating people who can't think for themselves and if something happens that they have no exact rules to follow to resolve it, they just shut down. We're going to dumb down people until they're all as stupid as computers!

    If I need to break material down into simpler skill chunks, I'm perfectly able to do it, but I know when it's necessary and when it isn't. The only kind of software I wish they'd work on for teachers is a kind where it's a tool to make it easier for me to make my own material.

  2. I am a teacher. I am NOT, nor will I EVER be, a "data scientist."

  3. Recommended read. What if the AI is wrong? I get that. The potential problem I see with it is that it's bound to narrow perception. Or maybe not. Who knows?

  4. Sure, but some of us who have spent a lot of time with meat-based teachers have met plenty that I'm not sure AI would be worse than. At least AI doesn't generally willfully emotionally abuse kids. And if I found a program that appeared to be operating with unreasonable biases, I could expect a reasonable possibility of getting the program fixed before the heat death of the universe. Try that with some entrenched teachers (I do, on a frequent basis, but with far less success than I'd like).

    Luckily, there ARE options that include human-based reasoning that isn't hopelessly either/or, where we might use computer-based teaching/learning for its strengths and meat-based teaching for its strengths. Keeping an open mind on such matters requires stepping outside the GERM/Ed Deform paradigm, watching out for where advocacy is more sales pitch than sense, but also being honest about teachers who are actually less able to build relationships with children than "dumb" computers.

    There's no official "Hippocratic Oath" that starts with, "First, do no harm" that teachers must take, but there sure as hell should be. Programmers, too, but then they mostly build what they're asked to build by people who may not be altruists or humanists or very decent, ethical people. This is also true, oddly enough, of teacher educators, state DOEs, etc.

    Replies
    1. I think everyone should follow the "First, do no harm" axiom. But of all the teachers I and all my kids had, only one was emotionally abusive with students, and as soon as the administration realized it, they got rid of him. It wasn't even willful on his part; he just had no people skills whatsoever and thought he was being funny. No one, including Peter, argues that computers and software don't have their uses in the classroom, but computers aren't capable of building any kind of "relationship" with students.

  5. Truly incompetent, "harmful" teachers are few and far between. A good building principal does not allow them to continue in the classroom. The existence of such teachers is a sign of inexcusably poor management. Just don't blame the teacher.

    AI, like virtually every reform initiative, is doomed to eventual failure. Ignorance of classroom dynamics is always their downfall.

  6. PhD, Art as ‘Artificial Stupidity’: http://sro.sussex.ac.uk/67604
