Ben Riley has drawn a lot of attention lately for the story of his father, who turned to AI for advice on how to manage his cancer, and died because of it. Riley gets into the experience of being a New York Times story subject in a recent post, and looks into the reporter's idea to show oncologists the advice the AI was providing. Riley shares their responses, and even by AI standards, the advice is shockingly, horribly wrong.
A trained cancer doctor would recognize that it was nonsense. An amateur might be fooled by how AI manages to mimic the look and feel of a real medical report.
This points to a recurring theme in AI use. The "human in the loop" principle is all about including a human being who can actually understand-- and check-- the AI output. Or consider one of the more popular AI assignments for students-- have an LLM write about a topic you know well, and count up all the mistakes it makes. In other words, both depend on experts.
Large Language Models can perfectly mimic form and confidence. They have, literally, no shame-- less than even the most shameless bullshit artist who ever sold you some Florida real estate or a White House super-duper ballroom. They are elegantly mechanized Dunning-Kruger machines.
I recently sat down and talked with someone who works in the computer tech and coding world and describes himself as a power user of AI. AI does save him and his team time, but there are caveats. AI doesn't remember what it has done. "It's like talking to a smart person with Alzheimer's." And it is not trustworthy. The project has to be broken down into chunks, and then each chunk has to be run through testing designed by, or at least involving, a human coder, to determine whether the code actually does what it is supposed to do. The resulting process is still faster than the old all-human approach, but it requires the involvement of humans with expertise to check the work, go back, re-do, check again, and on and on. It is most definitely not "Press a button and an hour later a fully-completed project is ready to go."
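To make that concrete, here is a minimal sketch of what "human-designed testing for one chunk" might look like-- my own illustration in Python, not my interviewee's actual workflow, with a hypothetical function and made-up test cases:

# Hypothetical example: the bot was asked to write one small "chunk"--
# a function that averages scores while skipping missing entries.
# This AI-generated code is accepted only provisionally.
def average_score(scores):
    valid = [s for s in scores if s is not None]
    return sum(valid) / len(valid) if valid else 0.0

# The human-designed check: an expert decides what "correct" means here,
# including the edge cases a confident-sounding bot tends to fumble.
def test_average_score():
    assert average_score([80, 90, 100]) == 90.0    # ordinary case
    assert average_score([80, None, 100]) == 90.0  # missing entries skipped
    assert average_score([]) == 0.0                # empty roster
    assert average_score([None, None]) == 0.0      # nothing but gaps

if __name__ == "__main__":
    test_average_score()
    print("chunk passed its human-written tests")

The point is not the code itself but the division of labor: the bot produces the chunk, and a human expert decides what "passing" looks like before the chunk goes anywhere.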
The conversation raised lots of questions for me. If the AI is doing all the entry-level grunt work under the watchful expert eye of human accountability sinks, then where will the future expert eyes come from?
I'm also thinking of all those folks happily burbling "I use AI to write my journalism-flavored content" or "I use AI to write my lesson plans," and wondering if their process looks similar-- if they are walking the bot through building a lesson plan step by step, carefully examining each piece of output with their own expert eyes. Because I'm betting not.
Because while coding involves a lot of time-intensive grunt-work hours that AI can collapse, writing does not. Doing the thinking work (outlines, brainstorming, etc.) is how you get ready for the writing work, and that includes writing a lesson plan. If you have the AI write the outline, you still have to do the thinking part. In short, if you use the bot to write your lesson plan in a responsible, professional manner, I don't see it saving you any time at all.
In fact, if you really are an expert, I'm betting that lesson plans or other writing done by bot, if done well, will actually take more time than just doing it yourself. The people who find it easy to bot their way through the work are, just like the students using cheatbots, the folks least qualified to use the bot without producing junk.
The central irony of AI is that it's really only safe to use if you are already an expert in your field. And that's a terrifying thought when you consider that AI has the potential to completely gut the pipeline that would ordinarily produce those experts.
Mind you, expertise is no guarantee that the bots will be well used. AI repeatedly encourages users to trust its illusory expertise. Last week CNN reported that a top-ranked lawyer at "one of the most prestigious firms on the planet" became the latest in a long string of lawyers tripped up by AI error. He had to send a letter of apology to a judge after submitting a filing loaded with errors-- it took three pages to highlight and correct all of them. The mistakes were caught by opposing counsel.
All of this underlines one clear idea-- of all the people who shouldn't be using AI, students shouldn't be using it the most. Jessica Winter, in her recent New Yorker article, cites a host of experts who point out the many ways that AI is not a useful, appropriate, or even safe tech to include in education. But it is already pushed heavily in all manner of K-12 education.
The Chromebooks, which the students use in every class and for homework, came pre-installed with an all-ages version of Gemini, a suite of A.I. tools. When my daughter, who is in sixth grade, begins writing an essay, she gets a prompt: “Help me write.” If she is starting work on a slide-show presentation, the prompt is “Help me visualize.” She shoos away these interruptions, but they persist: “Help me edit.” “Beautify this slide.” The image generator is there, if she’d ever wish to pull the plug on her imagination.
There are so many reasons to keep AI away from students. At the very least, we should be replacing all the cute little "become an AI expert" lesson plans helpfully provided by AI corporations with lessons about what AI is not and cannot do, and why children should avoid it like they avoid strangers in vans offering them candy.
Winter asks what it will take to push AI out of schools, and the answer, I think, is a whole hell of a lot, because a lot of very powerful people have bet a very large amount of money that they can push AI everywhere, regardless of what harm it does. It is as if the wealthiest corporations in the world have bought a vast supply of very powerful crack and are now desperate to move it into any market they can think of.
AI is not for amateurs in any field, and I only grudgingly accept that in some forms, it may have some use for some experts. In education, I think it will be awesome for cranking out lesson plans that administrators demand but don't read and teachers generate but don't use. For anything else, educators had better be prepared to use it like grown-ass experts in their field and not like a 14-year-old trying to generate a term paper ten minutes before it is due. And if using it like an expert in your field turns out to create a process that is longer and less productive than the non-AI version, well, experts should know how to get the job done.