Wednesday, October 23, 2024

Computers And Defective Children

Kristen DiCerbo is the "chief learning officer" at Khan Academy, the ed tech nonprofit that is using computer programs to replicate some of the oldest problematic behavior in the educational universe.

"If bringing AI into the classroom is a marathon," asserts DiCerbo in a recent article, "we’re 250 yards into this." I would argue that's a generous assessment.

See, Khan is one of the outfits betting on AI tutoring. But we can already see the problems, all completely predictable, emerging.

Transcripts of student chats reveal some terrific tutoring interactions. But there are also many cases where students give one- and two-word responses or just type “idk,” which is short for “I don’t know”. They are not interacting with the AI in a meaningful way yet. There are two potential explanations for this: 1.) students are not good at formulating questions or articulating what they don’t understand or 2.) students are taking the easy way out and need more motivation to engage.

Oh, there are more than two possible explanations. Like 3) students aren't interested in interacting with computer software. Or 4) AI is incapable of interacting with students in a meaningful, human way that helps them deal with the material with which they struggle. 

I'm not going to expand on that point, because Benjamin Riley got there while I was still mulling this piece, and he's written a beautiful piece about what the profoundly human act of connecting with and teaching a struggling young human being requires. You should read that.

But I do want to focus on one other piece of this. Because it's the same old mistake, again, some more.

In talking to teachers about this, they suggest that both explanations are probably true. As a result, we launched a way of suggesting responses to students to model a good response. We find that some students love this and some do not. We need a more personal approach to support students in having better interactions, depending on their skills and motivation.

In other words, the students are doing it wrong and we need to train them so that the tech will work the way we imagined it would. 

It's actually two old mistakes. Mistake #1 is the more modern one, familiar to every teacher who has had hot new "game changing" ed tech thrown at them with some variation of That Pitch--the one that goes "This tech tool will have an awesome positive effect on your classroom just as long as you completely change the way you do the work." The unspoken part is "Because this was designed by folks who don't know much about your job, so it would help them if you'd just change to better resemble the teachers they imagined when they designed this product." Raise your hand, teachers, if you've ever heard some version of "This isn't working because of an implementation problem." (The unspoken part here is "Let me, a person who has never done your job, tell you how to do your job.")

Mistake #2 is the more pernicious one, committed by a broad range of people including actual classroom teachers. And we've been doing it forever (I just saw it happening 200 years ago in Adam Laats's book about Lancaster schools). It's the one where I say, "My program here is perfect. If a student isn't getting it, that must be because the student is defective." 

Nobody is ever going to know how many students have been incorrectly labeled "learning disabled" because they failed to fall in line with someone's perfect educational plan.

We also sought to find the right balance between asking students questions and giving them hints and support. Some students were frustrated that our AI tool kept asking questions they didn’t know. If AI is to meet the promise of personalization, the technology needs to be aware of what the student currently knows and what they are struggling with to adjust the amount and type of support it provides.

You just measure what is in the brain tank, and if the level is low, pour in more knowing stuff! If this is all AI thinks it needs for personalization, AI has a Dunning-Kruger problem. At a minimum, it seems to be stuck in the computational model, a notion that has been floating around since the 1940s: the brain is like a computer that just stores data, with images coded as data and experiences reduced to data. If you buy the brain-is-computer model, then sure, everything teachers do is just about storage and retrieval of data.

The brain-is-computer model has created a kind of paradox-- the idea that AI can replicate human thought is only plausible because so many people have been thinking that the brain is also a computer. In other words, some folks shrank the distance between computers and human thought by first moving the model of human thought closer to computers. If both our brains and our manufactured computers are just computers, well, then, we just make bigger and better computers and eventually they'll be like human brains.

Problem is, human brains are not computers (go ahead--just google "your brain is not a computer"), and a teacher's job is not managing storage and retrieval of data from a meat-based computer. 

Which means that if your AI tutor is set up to facilitate input-output from a meat computer, it suffers from a fundamental misconception of the task. 

This lack of humanity is tragic and disqualifying. We are only just learning how much can go wrong with these electro-mimics. There's a gut-wrenching piece in today's New York Times about a young boy who fell in love with a chatbot and committed suicide; reading the last conversation and the chatbot's last words to the child is absolutely chilling. 

AI is not human, and so many of its marketeers don't seem to have thought particularly hard about what it means to be human. If this is a marathon, then we aren't 250 yards in or even 250 feet in, and some of us aren't even running in the right direction.

3 comments:

  1. The brain-is-a-computer model is also suspiciously like the brain-is-a-bank model that Paulo Freire told us about long ago.

  2. Do you think AI is worthless for math drill? I'm not sure myself. There are right and wrong answers, but often several routes for getting to either.

  3. Well, it's not called "drill and kill" for nothing.
