Remember Diane Tavenner? The Bay Area edupreneur started the ill-fated Summit charter chain, got a whole bunch of money and tech from Mark Zuckerberg, watched a whole lot of students and their parents push back hard on her automated-education-in-a-box model, and spun it all off into a non-profit thingy.
That was back in 2018. Since then, she has been doing all the fun Silicon Valley stuff: writing a book (Prepared: What Kids Need for a Fulfilled Life), chairing the Pahara Institute, launching a find-your-career Life Navigation Platform (and app) in Mountain View, and starting a podcast, because of course she did. And it's on The74. And that's what we're looking at today.
Her co-host is Michael B. Horn, a speaker-author with a book blurbed by Reed Hastings. He's a co-founder of the Clayton Christensen Institute for Disruptive Innovation, and he writes blog posts with titles like "Why Tech Didn’t Fix Schools: Applying Innovation and Disrupting the Factory."
Their guest on the episode in question is John Bailey, the American Enterprise Institute's AI guy. He has worked under Governor Glenn Youngkin, done some White House stints, vp-ed at Jeb Bush's Foundation for Excellence in Education, and passed through the Aspen Global Leadership Network. You get the world these folks soak in.
The episode is called "How AI is Democratizing Access to Expertise in Education," so you know we're in for a good time. Let's dig into the transcript, and start by skipping the obligatory introductory schmoozing.
Bailey talks a little about how he ended up in this particular arena, coming from a background in ed tech already.
And if I have to admit, like, I’ve been part of a lot of the hype of, like, we really think technology can personalize learning. And often that promise was just unmet. And I think there was, like, potential there, but it was really hard to actualize that potential. And so I just want to admit up front, like, I was part of that cycle for a number of years. And. And then what happened was when ChatGPT came out in December of 2022, everyone had sort of like a moment of ChatGPT, and for me, it wasn’t getting it to write a song or, you know, a rap song or. Or a press release. It was. I was sitting next to someone with a venture team and I said, what is, like, what is an email you would ask an associate to do to write a draft term sheet? And she gave me three sentences. I put it in ChatGPT and it spit back something that she said was a good first draft, good enough for her that she would actually run with it and edit it.
Yes, ed tech has failed to live up to its hype before, but This Time It's Different (which, coincidentally, is a phrase that is always part of the hype). Bailey found ChatGPT fun to play with, and I agree-- I, too, played several rounds of Stump The Software, but only one of us was invited by corporate to come play with the toys inside. This is going to be "so transformative," says Bailey. "It just feels different."
So what are the rewards and risks here? Well, the internet "democratized" information access (it also democratized information creation, which has not turned out to be a great thing and has rather messed up the other thing).
What I think is different about this technology is that it’s access to expertise and it’s driving the cost of accessing expertise almost to zero. And the way to think about that is that these general purpose technologies, you can give them sort of a role, a Persona to adopt. So they could be a curriculum expert, they could be a lesson planning expert, they could be a tutoring, and that’s all done using natural language, English language. And that unlocks this expertise that can take this vast amounts of information that’s in its training set or whatever specific types of information you give it, and it can apply that expertise towards different, you know, Michael, in your case, jobs to be done.
Yikes. Bailey has lost me already. LLMs can pretend to be these things, and do it quickly, but "expert"? I don't think so. You aren't accessing expertise; you're accessing a parrot that has listened to a huge number of experts and also a huge number of dopes who know nothing, and the LLM is incapable of telling them apart. At the same time, it's not clear how using ChatGPT is any quicker or more efficient than just googling.
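For what it's worth, the "Persona" move is less mystical than it sounds. Mechanically, it's a few sentences of instruction text prepended to the conversation. Here's a minimal sketch, assuming the OpenAI Python client; the model name, prompt, and task are my own illustration, not anything from the podcast:

```python
# A minimal sketch of the "persona" move, assuming the OpenAI Python client.
# Model name, prompt text, and task are illustrative, not Bailey's examples.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The entire "curriculum expert" is this one string of text.
        {"role": "system", "content": "You are an expert K-8 curriculum designer."},
        {"role": "user", "content": "Draft a week-long unit introducing fractions."},
    ],
)
print(response.choices[0].message.content)
```

Note what isn't happening there: no credential check, no knowledge audit, no way to know whether the output channels the experts or the dopes in the training data.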
Bailey thinks it's going to be a great tutor. But no-- a great tutor needs to be able to "read" the student to suss out the exact areas the student is stumbling over, and do it in real time. Tutoring by algorithm has been the same forever-- give the student a task, check what the student got wrong, give the student a new task that focuses on what they got wrong. This is a slow, clunky, blunt-instrument approach to teaching. It's the same theory of action behind the earliest teaching machines, and it has the same problems: 1) the machine cannot read the student with any sort of precision, and 2) the student is asked to perform for a mechanical audience. At best, the AI might be helpful in generating a worksheet to specifications given by a human teacher. That's helpful. It's not transformational.
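In case that sounds too dismissive, the whole theory of action fits in a dozen lines. A hypothetical sketch (the item bank and the right/wrong check are invented, but this is the skeleton of every drill-and-branch tutor since the teaching machine):

```python
# A hypothetical sketch of tutoring-by-algorithm: give a task, check it,
# serve more of whatever was missed. The item bank is made up.
ITEM_BANK = {
    "fractions": [("1/2 + 1/4", "3/4"), ("2/3 - 1/3", "1/3")],
}

def tutor(skill, get_student_answer):
    """The entire 'adaptive' loop: re-serve misses until none remain."""
    queue = list(ITEM_BANK[skill])
    while queue:
        question, correct = queue.pop(0)
        # The student performs for a mechanical audience...
        answer = get_student_answer(question)
        # ...and the machine's whole "read" of the student is this check.
        if answer.strip() != correct:
            queue.append((question, correct))  # no why, just more of the same

# e.g. tutor("fractions", lambda q: input(q + " = "))
```

Swap the tiny item bank for a million items and the equality check for a statistical model and you have most "personalized learning" products of the last decade; the blunt instrument gets bigger, not sharper.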
I think it’s also going to be an amazing tutoring mechanism for a lot of students as well. Not just because they’ll be able to type to the student, but as we were just talking about, this advanced voice is very amazing in terms of the way it can be very empathetic and encouraging and sort of prompting and pushing students, it can analyze their voice.
I cannot say this hard enough-- the bot cannot be empathetic. It might simulate empathy. Do we expect students to be moved and motivated by a machine that can pretend to give a shit about them? And what, I ask, not for the first or last time, is the problem being solved here? Is there some reason it's better to have software that can mimic a human interaction than it is to have an actual human interaction with an actual human?
What will deployment in education look like? Bailey compares it to offices where AI is deployed in "back office functions," like, say, coding. He admits that a back office low risk function would be a better start than, say, having an AI do tutoring and "hallucinating," and I am reminded of the observation that AI is always hallucinating, but sometimes the hallucination accidentally matches reality.
What does Bailey think a low-risk back office education function might be? How about parent communications? And holy shneikies, how is that remotely low risk? In what world does a parent want to hear from their child's teacher's bot rather than the teacher?
How about using AI to do scoring and assessments? We've been doing that for ages, and mostly the result is designing the test so that it can be scored by a machine rather than designing it so it measures what we want to have measured. Computer-assessed writing? We've been pursuing that for ages and it still sucks and, like the robocaller on your phone, can only handle responses that fall within specific parameters.
Teacher productivity tools? Maybe, but people whose lives are outside the classroom seriously misjudge what "productivity" covers for a teacher. Teachers are not making toasters or cranking out footstools; creating lesson plans and assessing student work is not like working an assembly line.
What are the risks? Well, despite the calls to keep teachers in the loop, Bailey is concerned that tired and overworked teachers might jump straight to AI, much like they turn to Pinterest and Teachers Pay Teachers now (so I guess AI doesn't solve that problem), because an AI lesson plan for reading might not even be based on the science of reading or aligned to your curriculum. He thinks this is much like the concern about students just improving an essay with a button instead of doing the struggle that is how one learns. And maybe even talking to an empathy-faking AI will cause students to miss the friction of real human interaction, which would be bad. So for one whole long paragraph, Bailey made sense in a way that he hadn't up to this point-- because I'm pretty sure everything he just said is also the argument against AI tutoring students, and probably against having it do the back office stuff for teachers, too.
Tavenner is also concerned that the increased "efficiency" of AI will reinforce the current model instead of disrupting it all. I think by "efficiency" she really means "speed," which is not the same thing at all (I would rather have my surgeon be efficient than fast).
Bailey agrees that yes, as she has often said, the system and the institutions within it are "remarkably resistant to change," and that because of this, "technology doesn't change a system." I have a theory about this, but this post is already long, so let me just say that "change" is constantly happening in education, just not the kind of transformation that every person with a piece of ed tech to peddle envisions in their pitch. The key here is utility. Teachers adopt practices and technologies at the speed of light-- when they are useful. But ed tech vendors are forever showing up to the construction site with a case full of butter knives declaring, "This will be a huge help in building houses if you just change the way you build houses. At least, that's what our in-house testing projections say."
AI in education is still a solution in search of a problem. Bailey swings back around to the "access to expertise" idea, which is just-- I mean, he is clearly a smart and accomplished guy, but AI bots possess no "expertise" at all, and your best hope is that they can hallucinate their way to a passable imitation of it.
[I]f you’re a school principal, all of a sudden you have a parent communication marketing expert just by asking it to be that Persona and then giving it some tasks to do. And if you’re a teacher, it means all of a sudden every teacher in America can have a teaching assistant like a TA that is available to help on a variety of different tasks.
"Variety of different tasks " is doing so much work here, and I know this is a podcast and not a dissertation, but these are the specifics on which his whole idea hangs, and what he comes up with are the vague generalities and things like asking the AI TA , "I see like John and Michael really struggling in algebra what are some ways I could put them in a small group and give them an assignment that would resonate with both of their interests and help them scaffold into the next lesson? That was impossible to do before." Well, no, not really impossible; more like regular teaching. And the teacher would still have to feed the AI the boys' interests and the scope and sequence of the next lesson.
There's some chatter about pricing, which is as close as we get to asking whether AI in education would be worth the cost to money-strapped schools, and then Horn has a thought he wants to toss out. So you list bad things, he says, like losing the humanity in coaching and an easy button for writing that "jumps you ahead to the product, but not necessarily the learning and the struggle from it"-- but what if... and he takes me back to my college days with an analogy from Bror Saxberg, learning scientist, one that Saxberg attributed to Aristotle but that I'm pretty sure I learned about while studying how pre-literate cultures shifted to literacy.
The idea is this-- when cultures shift from oral tradition to the written word, certain skills get lost, like the ability to recall and recite Beowulf-sized chunks of poetry. "Kids these days," complain the elders. "Can't even remember fifteen minutes' worth of Bede. Just walk around staring at those funny marks on paper all day."
Horn seems to be suggesting that we're on the cusp of something like that. Here's a real quote:
of these things that might hurt, which are really going to, are they still going to matter in the future or are there going to be other things that we, you know, other behaviors or things that are more relevant in the future? And how do you think about sort of that substitution versus ease versus actually like really, you know, frankly, I think when you talk about social interaction that could be, forget about disruptive, that could be quite destructive.
Interesting, says Bailey. AI is chipping away at entry-level jobs, but that means that people are not acquiring entry-level job skills. His example: legislators don't need an intern to summarize legislation. AI can do it, but then the interns aren't learning to read legislation. So now the intern has to do higher-level cognitive functions, which tells me that students who coasted through high school letting ChatGPT do their homework or all the thinky parts of writing are going to be even LESS prepared for entry-level jobs that require MORE skills. Bailey observes, with some understatement, that this will put a huge strain on the education system, but then he ruins it by citing TIMSS and NAEP scores as if those tests provide any sort of measure of high-level thinking.
And he's back to the cheap expert again, offering that he can't do fancy Excel stuff himself, but now an AI can do it for him, so "now I could do it"-- except of course he still can't, and I have to wonder how much it matters that he still wouldn't understand what the AI had done to the spreadsheet.
Look, there's a whole continuum here. The tech trend is always toward needing less and less understanding from the user. The first people to own automobiles had to know how to fix every last nut and bolt; now you can drive in blissful ignorance--as long as nothing ever goes wrong.
So maybe you can just count on AI magic and not care what's happening. But I don't think so--particularly because AI can only deliver in certain sorts of situations.
Catch your breath, because there's more nightmare to come. Tavenner wants to talk about the intersection between AI and ed policy. Like, could you use AI to help you decide how to use your ESA voucher money? Bailey says that sounds cool, gives some examples, and seems stuck on how the AI could make the "friction" between families and education institutions go better with robot empathy simulations. Let the AI help you figure out what to do with your education, your career, your life. "We're very close to that," says Bailey, repeating the motto of every tech promise of the last decade (self-driving cars have been a year away for ten years). And speaking of old familiar songs--
I think that’s going to be powerful and it’s going to make policy easier. I’m still, until we create more flexible ways for teachers to teach, for students to learn and students to engage in different types of learning experiences, I just think we’re going to end up boxing and limiting a lot of this technology capabilities.
Once we change how we build houses, the power of this butter knife will be unlocked. Because the education system is there to help unlock technological potential, and not vice versa.
This is what's out there among the thought leaders and people who get excited by tech stuff and don't know much about classroom teaching of live humans. These are smart, accomplished folks. They even seem nice. But they are on some planet far, far away.
I can offer you one palate-cleansing chaser after all that-- two weeks later they did an interview with Ben Riley, who said a whole lot of things that need to be said. Go read, or listen to, that one.