Monday, April 13, 2026

AFT Shares Bad AI Advice

AFT made plenty of folks sad when they decided to jump on the AI bandwagon last year (kind of reminds me of the days they were resolutely on the wrong side of Common Core). They haven't shown any signs of slowing their enthusiasm, with items like this puff piece from the AFT site about "Harnessing the best of AI," which is right up there with "Embracing the advantages of cholera." 

In this piece, three teachers share some of their "tips for saving time and boosting creativity" with AI, and oh boy. 

The three teachers have 23, 6 and 30 years of experience. They teach K, 4th and 5th grade special ed, and math. I'm not going to call them out by name because they are out there trying to do the work, and teachers take enough crap for doing that, anyway. But I do want to highlight some of these highly dubious ideas.

LV regularly uses "ChatGPT to create curriculum and lessons, as well as to differentiate lessons." And while "uses" can mean many things here, I cannot say this strongly enough: there is no chatbot that knows more about content or instruction than a teacher does. None. Because a chatbot doesn't "know" anything. It doesn't understand the content, doesn't know what would be the best instructional approach, doesn't know anything about bridging information and young human brains. Lesson planning is the perfect time to conceptualize chatbots like this-- if you ask one to write a lesson plan about the Civil War, it processes that as "what would a Civil War lesson plan look like?" It can create something that looks like a sort of average of every lesson plan that it has been trained on, but it knows nothing about good or bad instructional design, and it is making up everything it says about the content (some of it will be accurate, some of it won't, but all of it is made up). 

LV also uses Google's Gem tool, which lets you set up your own chatbot that accesses only what you have fed it. LV is the math teacher, and I think that matters a lot here. They say the Gem can be "limited" to provide hints for the next step rather than the answer, which strikes me as more functional for solving an equation than for exploring themes in Song of Solomon.

CS uses AI for small tasks, including differentiation, building rubrics, and "refin[ing] the wording" on IEPs. And making substitute plans. Differentiation comes up a lot, and I can see the appeal, but the time it takes to really, precisely prompt the bot strikes me as canceling out the time saved. And if you are letting the bot determine what the differentiation should be, that's malpractice.

EL is in a school district that adopted Copilot as its "AI platform," which is its own kind of dopey idea. But this teacher thinks it's cool for creating games and scavenger hunts. And this comment --

As a veteran teacher, it’s easy to teach the same thing over and over again; AI is helping me get outside of my comfort zone and do some different things with the kids.

If you only get outside your comfort zone because some bot has made it super easy, have you really gotten outside your comfort zone? I try not to be too judgy (just judgy enough), but I really do judge people who can't even work up the ambition to scroll down the search engine page, past the often-wrong AI results, to the actual search results. Is Googling for lesson ideas (which was never a great approach) really too hard for some folks now? 

But that's not as alarming as this quote about a colleague who is "not very good with technology" but just "bloomed" with ChatGPT:

He became interested in having it generate passages for his students to read and then started adding topics that they like, such as dinosaurs and Power Rangers. Now these AI passages are a reward in his classroom—when kids complete their work, they can ask for a personalized passage. One child asked for a story about playing soccer with the Argentine star Lionel Messi.

Just stop it. Stop. It. Is he checking every one of these for accuracy? Because I'm betting a teacher who doesn't have the time to hunt down real pieces of writing by live human authors also doesn't have the time to make sure some child isn't getting a "bonus" reading about how dinosaurs used to help cavemen work at Mr. Slate's gravel pit. There are so many bad messages here, leading with the devaluing of human writing. Just stop!

LV uses a Gem to help students edit an end-of-semester project, and that has led to a concern that students "may start to optimize their writing to please AI instead of writing for a human reader." You think? Of course they will. LV has a solution-- "I’m trying to instruct the Gems to give objective, rubric-based feedback without altering the students’ voice, tone, or style. I want AI to support their thinking and not reshape their writing." Good luck with that, given that the bot is not well-equipped to identify voice, tone, or style, let alone preserve it.

And if it seems as if writing the instructions for a Gem would be rather intricate and time-consuming, well, can you guess how LV solved that? By having ChatGPT do it. 

All three use AI to communicate with parents, and some of what they have to say is surreal, like EL explaining that when they're tired and frustrated, the bot "helps me send notes to families about behavior challenges that are clear and kind." I don't know what to do with the notion that a human needs a bot to help them be more human, but I do know how I would feel as a parent if I were getting notes from a bot instead of my child's actual teacher (the answer is "pissy"). EL also uses AI to generate activities for families to use at home. 

The "interviewer" does ask the three if they have concerns. EL is concerned that middle and high school students use it too much, to the detriment of their thinking, but she doesn't have those concerns as a kindergarten teacher, and I am wondering if her students see her having AI do parts of her job for her, because that might matter. And she's sure that AI won't replace her, because AI can't hug five-year-olds or meet their social and emotional needs. Sigh. First, if you think that's the only thing AI can't do, you need to rethink what you bring to the job. Second, if that's all you do that AI can't, you can in fact be replaced with an AI augmented by a minimum-wage aide who handles the hugging and social-emotional stuff.

LV correctly notes that AI is actually worse than plagiarism for students because AI can do the whole job without students even glancing at the work. AI can oversimplify and push formulaic patterns. "Students miss out not only on building knowledge but also on developing curiosity and their voice." LV makes students do handwritten assignments twice a month for an "authentic picture."

CS notes concerns about the environmental impact of AI and about its replacing sources of human joy, like art. CS calls these concerns "on a personal note," and I am wondering why that's not a professional note. Fears about student learning are valid, "But the more I use it, the more I realize that if educators don’t know how to use it, then we can’t help our students learn to use it responsibly." I am imagining a high school coach explaining, "Yes, I use steroids, because how else can I help my students learn how to use steroids responsibly?"

And language like this really concerns me:

My last thought for my fellow educators is that getting started with AI is a lot like having a conversation with a new colleague. You introduce yourself and your goals, and it provides suggestions—sometimes good, sometimes bad. But unlike a colleague, it has no feelings, so I can say plainly that I like one section of a lesson plan but not another. Plus, it works instantly; I can provide a critique and get a revision immediately. The key for me has been treating AI as a partner in the creative and planning process, not a replacement for my judgment.

No no no. AI is not a partner, not a colleague, not a thing you can have a conversation with. It's a tool.

Look, I get the need for finding more time. I really do. One of my most widely-read pieces was about exactly that. But I am suspicious of AI time savings; given the amount of time needed to craft a prompt and run the result through multiple revisions, I'm unconvinced. 

Nor am I convinced that getting into a do-as-I-say-not-as-I-do situation with students will end well. "Today, I had AI generate a lesson plan so that you can learn to not use AI to just do your work," is going to be a hard sell. And this line from CS--

As professionals, we use AI to save time and enhance our work—but we’re still doing the thinking and using a mix of resources. Too many students are using AI to think and do their work for them.

You can tell yourself that, but I'm not so sure. Yes, AI is worse for someone with no background knowledge at all, but how many teaching muscles are you not using when you use AI to take care of all these various functions? Maybe you're still doing some of the thinking, but you definitely aren't doing as much as when you hammered out lesson plans by yourself. 

There are some worthwhile cautions folded into this AFT puff piece. There are plenty of professional conversations to be had (with other humans) about these issues, but when they come wrapped in big-tech-financed AFT packaging, they aren't a conversation-- they're an advertisement, and an advertisement designed to swoop us right past the whole Should We Do This At All question. I expect better from the second-largest teachers union in the country. 

 
