Friday, April 22, 2016

The Morality of Artificial Intelligence

Riding the rising tide of support for computer-centered competency-based education is the promise of a computer with artificial intelligence (AI), a computer smart enough to follow, understand and respond to the behavior and choices of the human students linked up to the system. But this presents problems.

Some are pretty obvious. Just a month ago, Microsoft hooked an AI-powered chatbot up to Twitter and watched in horror as it proceeded to tweet horrible racist comments. That was not the plan, but any AI development has to grapple with the problem of instilling human values in a machine.

How can an AI-driven system "teach" children if it can't be instilled with human values?


There's an interesting discussion of these issues in an article posted at Slate today. It has nothing at all to say about CBE or other computer-driven education systems-- at least not directly-- but it offers much to ponder about the business of creating a computer program that could handle the job.

There are scientists working on it; there have been since the days when Isaac Asimov devised the Three Laws of Robotics, meant to give robots something like a moral center. The article says that these folks want to achieve AI "provably aligned with human values." Which is a hugely reductive statement of the problem, because the first question we have to answer is, "Which human values?"

You may think that there are surely some clear, central core values shared by all humans, but the Slate article reaches back to work by Joseph Henrich that I've discussed here before, which suggests that most of what we think of as "normal" is really just the product of our own culture. This extends not just to silly, obvious examples like how to shake hands but also, as Henrich shows, to actual perception-- what is an optical illusion in some cultures is not one in others.

AI has depended on knowledge bases and outward behaviors, but that approach is hugely limited. As writer Adam Elkus says in the article's opening, "Computers are powerful but frustratingly dumb in their inability to grasp ambiguity and context."

That means that AI often falls back not so much on creating intelligence as on creating a complex of behaviors that simulate intelligence, while still being just the computer responding stupidly to a series of complicated instructions.
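
To make that concrete, here's a deliberately toy sketch of what "responding to a series of complicated instructions" looks like under the hood. Everything in it-- the grade-school question, the canned responses, the function name-- is invented for illustration; no real product works from code this simple, but the underlying idea is the same: rules written in advance, and a shrug for anything the rule-writers didn't anticipate.

```python
# Hypothetical sketch: a "tutor" that simulates understanding by matching
# a student's answer to the question "What is 2 + 2?" against canned rules.
def respond_to_student(answer: str) -> str:
    """Return pre-written feedback chosen by simple pattern matching."""
    answer = answer.strip().lower()
    if answer == "4":
        return "Correct! Moving on to the next item."
    if answer in ("2", "3", "5"):
        return "Close, but check your addition again."
    if answer == "":
        return "Please type an answer before continuing."
    # Anything the rule-writers didn't anticipate falls through to this.
    return "I don't understand that response."

# A perfectly correct answer, phrased in a way the rules never imagined:
print(respond_to_student("four"))  # -> "I don't understand that response."
```

The student who types "four" is right, and the machine has no idea. That's what "frustratingly dumb about ambiguity and context" means in practice.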



This obviously matters to more people than just those in the path of educational AI. One of the challenges for programmers trying to perfect computer-driven cars is the big question-- in an accident involving many people, which people should the AI car most try not to kill?  In such an accident, the decision of whose life to try to save will not be made by the car-- it will be made by the programmer who wrote the software that tells the car which individual to "value" most.
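
Here's a hypothetical sketch of that point-- the categories, the numbers, and the function name are all made up for illustration, and nothing here comes from any real vendor's code. The thing to notice is that the "decision" is just a lookup table somebody typed in long before the accident.

```python
# Invented priority weights: a programmer's value judgment, written in advance.
PROGRAMMER_ASSIGNED_VALUE = {
    "child": 3,
    "adult_pedestrian": 2,
    "occupant": 1,
}

def choose_whom_to_protect(people_at_risk):
    """Pick whoever has the highest hard-coded 'value' score."""
    return max(people_at_risk, key=lambda person: PROGRAMMER_ASSIGNED_VALUE.get(person, 0))

# The "choice" was made at a keyboard, not in the moment of the crash.
print(choose_whom_to_protect(["occupant", "adult_pedestrian", "child"]))  # -> "child"
```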

An AI teacherbot will implement a complex algorithm, a super-rubric, and those directions will come from programmers who will include their own values, their own beliefs about how that educational moment/issue/response/thingamabob ought to be handled. "Well," you may say, "So will a human teacher. A human teacher will bring biases and views to the classroom as well."

And that's true. But the programmed-in bias of computers is an issue because A) it will most likely be put there by people who are NOT trained, experienced classroom professionals and B) because, like a standardized test, the computer-centered CBE program will come wrapped in a mantle of objectivity, a crown of bias-free just-the-facts-ma'am, all of which will merely be an illusion. Furthermore, it will be an illusion that cannot be challenged or modified. As a live human, I can be challenged by my students on a point; they can even convince me to change my mind as we all wrestle with context and ambiguity.

Teaching is a moral act, an act that comes with a heavy moral and ethical context. AI does not currently have that capacity, and may very well never have it. Putting an educational program under the control and guidance of an AI-flavored computer program is putting a classroom in the hands of a sociopath who literally does not know right from wrong but knows only, at best, a list of rules given to it by someone else, rules that it now follows slavishly.

Well, what if we have those rules written by someone who we agree is a highly moral person? Would that satisfy you, you cranky old fart?

My answer is no. Moral and ethical behavior by its very nature must deal with ambiguity and context, and it must be able to change and grow in understanding. It requires wisdom, not just intelligence. When folks push AI computers as a solution for the classroom, they are pretending to have solutions to problems that the leading minds in the computer world have not solved, and even if those solutions existed, we would still have to argue about whether or not they made a teacherbot fit for the classroom.



2 comments:

  1. I recently had a student tell me that the class worked harder for me because they knew I loved them.
    Let them find an AI that can do that!

  2. Strange, isn't it, that when a technique studied by the artificial intelligence community actually becomes useful the term "artificial intelligence" is soon dropped. I am thinking in particular of "neural networks". The continued use of the AI term in machine teaching (edubots?) tells me that they have a very long way to go before creating "something that works".
