Thursday, December 12, 2024

AI in Ed: The Unanswered Question

It is just absolutely positively necessary to get AI into education. I know this because on social media and in my email, people tell me this dozens of times every day. 

Just two examples. UCLA is excited to announce that a comparative literature course next semester will be "built around" UCLA's Kudu artificial intelligence platform. Meanwhile, Philadelphia schools and the University of Pennsylvania are teaming up to make Philadelphia a national AI-in-education model. The AI-in-education list goes on and on, and there are soooo many questions. Ethical questions. Questions about the actual capabilities of AI. Questions about resource use.

But here's the question I wish more of these folks -- well, all of them, actually -- would ask.

What problem does it solve?

This is the oldest ed tech problem of them all, an issue that every teacher has encountered-- someone introduces a new piece of tech starting from the premise, "We must use this. Now let's figure out how." This often leads to the next step: "If you just change your whole conception of your job, then this tech will be really useful. Will it get the job done better? Hey, shut up."

This whole process is why so many, many, many, many pieces of ed tech ended up gathering dust, as well as birthing painfully useless sales pitchery masquerading as professional development. And when it comes to terrible PD, AI is right on top of things (see this excellent taxonomy of AI educourses, courtesy of Benjamin Riley).

So all AI adoption should start with that question.

What problem is this supposed to solve? 

Only after we answer that question can we ask the next important question: will it actually solve the problem? Followed closely by: what other problems will it create?

Sometimes there's a real answer. It turns out that once you dig through the inflated verbiage of the UCLA piece, what's really happening is that AI is whipping up a textbook for the course, using the professor's notes and materials from previous iterations of the course. So the problem being solved is "I wish I had a text for this course." Time will tell whether having to meticulously check all of the AI's work for accuracy is less time-consuming than just writing the text herself.

[Update: Nope, it's more than the text. It's also the assignments and the TA work. What problem can this possibly solve other than "The professor does not know how to do their job" or "The professor thinks work is way too hard"? Shame on UCLA.]

On the other hand, Philadelphia's AI solution seems to be aimed at no problem at all. Says Katharine O. Strunk, dean of Penn's education grad school:
Our goal is to leverage AI to foster creativity and critical thinking among students and develop policies to ensure this technology is used effectively and responsibly – while preparing both educators and students for a future where AI and technology will play increasingly central roles.

See, that's a pretty goal, but what's the problem we're solving here? Was it not possible to foster creativity and critical thinking prior to AI? Is the rest of the goal solving the problem of "We have a big fear of missing out"?

Assuaging FOMO is certainly one of the major problems that AI adoption is meant to address. The AI sector makes some huge and shiny predictions, including some that show a fundamental misunderstanding of how education works for real humans (looking at you, Sal Khan and your AI-simulated book characters). Some folks in education leadership are just deathly afraid of being left behind and so default to that old ed tech standard-- "Adopt it now and we'll figure out what we can do with it later."

So if someone in your organization is hollering that you need to pull in this AI large language model Right Now, keep asking that question--

What problem will it help solve?

Acceptable answers do not include: 

* Look at this thing an AI made! Isn't it cool? Shiny!

* I read about a school in West Egg that did some really cool AI thing.

* We could [insert things that you should already be doing].

* I figured once you got your hands on it, you could come up with some ideas.

* We're bringing in someone to do 90 minutes of training that will answer all your questions.

* Just shut up and do it.

The following answers are also not acceptable, but they probably won't be spoken aloud:

* We are going to replace humans and save money.

* It will make it easier to dump work on you that other people don't want to do.

Acceptable answers include:

* We could save time in Task X

* We could do a better job of teaching Content Q and/or Skill Y

Mind you, the proposed AI may still flunk when you move on to the "Can it actually do this, really?" question, but if you don't know what you want it to do, it's senseless to debate whether or not it can do it.

There's a debate currently raging in the world of AI stuff, and as usual Benjamin Riley has it laid out pretty clearly here. But much of it centers on the questions "Is AI fake?" and "Does AI suck?"-- and in the classroom, both of those questions are of secondary importance to "What problem is AI supposed to help solve here?" If the person pushing AI can't answer that question, there really isn't any reason to continue the conversation.


