Over on the dead bird app, you can find the AI Being Dumb account, an invaluable source. It recently highlighted the Andon Labs experiment in letting AI run a store. Two items of note. One is that the AI hired humans to do some of the work (though it didn't tell them when to show up). The other is that folks started using Google Reviews to try to get the store to stock products, like $260,000 worth of paper clips, tungsten metal cubes, barrels of oil, or 413,793 KitKats.
This highlights one of the assumptions of every discussion about AI tutors and AI paper graders and AIs in place of humans in education. The assumption is that once we replace the human actor with an AI agent, everyone else will keep interacting with the AI agent as if they were still dealing with the human.
That's a silly assumption, particularly in a school setting. Students do not even treat humans like other humans. Part of September is the annual Testing Of The Classroom Boundaries as well as the annual Mapping Of The Expectations. Students conduct these activities, sometimes augmented by the Existing Reputations of the adult humans, and use the collected data to make their choices for the remainder of the year. All of this testing and mapping is conducted within each student's personal rules for how one treats other human beings.
This is part of the rich web of human relationships that support and enrich education. The AI-in-education crowd seems to think that one can swap out any human node of that web and replace it with a bot and nothing important will change.
For the moment, I don't want to focus on the dehumanizing of a human activity and dynamic. I want to focus on this question-- how will young humans act when they find themselves educationally yoked to a robot instead of a human? Expect a couple of effects.
Erasing ethical boundaries. Most humans operate on the assumption that we owe other humans a good-faith attempt to communicate honestly. Yes, lots of people violate that assumption, but the fact that the boundary exists is why we have a whole language about lies and dishonesty that describes the transgressive nature of not making that good-faith, honest effort. But what do we owe a bot? Is there any reason to make a good-faith, honest human effort in responding to or interacting with a non-human bot?
This may seem like esoteric philosophical noodling that young humans would not waste a minute pondering, but I assure you they get it on some level. Why do schools spend so much time hooting and hollering at the onset of Big Standardized Test season, trying to connect the test to students' relationships with their teacher and schools? Because students on some level understand that they don't owe any good-faith honest effort to whatever faceless unknown bureaucrats are behind the BS Test, so schools figure they'd better activate students' connection to teachers and school. "I know you don't owe it to Pearson or education reformsters to give this an honest try, but how about doing it for Mrs. Swellclass and the East Egg Battling Chickens?"
Do you think a student will give the same size and shape of effort to a bot that they would give to their beloved human teacher, or even their sort-of-don't-mind human teacher? Some will decide to see how entertaining bad-faith efforts can be; what kind of baloney will the bot accept? All will figure out how to deal with the bot-generated pressure to create human-crafted AI slop. They may fight back, give in, try to outsmart the bot, but only a few will keep trying to do their best as if they were working for a human.
It is worst for any instruction or assessment that involves writing. Writing is impervious to objective evaluation; everyone who grades writing assignments does so with their own set of biases in place. Another AI falsehood applies here; decades of fiction and years of marketing have primed us to think of robot intelligence as perfectly objective, strictly factual and "true." It is not. It reflects whatever biases are programmed into it (and it has some, deliberately or not). You can barely swap out human for human without changing the definition of "good" writing; you certainly can't swap out human for bot without blowing up the definition entirely.
There are a hundred bad assumptions and built-in problems with AI in education. But we have to include the way proponents ignore the effect AI will have on how folks interact with the school. Parents will not treat your AI slop letter the same way they will treat a human note. Students will not complete assignments for the AI the same way they would for a human. Taking the human out of human interaction matters, and the people who don't admit it are just too busy trying to sell some education-flavored slop.