
Sunday, July 13, 2025

ICYMI: Scopes Centennial Edition (7/13)

Last week we sailed past the 100-year anniversary of the Scopes trial, arguably the grand kick-off for a century of culture panic in this country. There were a couple of good pieces about the event (here and here) that, if nothing else, taught some folks that they could use one issue to harness Christianist discontent with many issues, all tied to a massive persecution complex. 

Still stuff to read this week. Let's see what we've got in the bin.

"South Carolina Partners with PragerU" (Updates)

Steve Nuzum has been following South Carolina's attempts to use PragerU, a fake university, as an education partner. Yuck.

NH's Latest School Funding Case

Andru Volinsky updates us on the latest chapter in the long-running attempt to get New Hampshire to fully and fairly fund its schools.

Read Receipt

Doing your brainstorming with a chatbot? Audrey Watters would rather read or, you know, think.

The ‘big, beautiful’ fight over school choice ends with escape clause for blue states

Lexi Lonas Coch at The Hill looks at the escape clause in the federal voucher bill. Will states avoid the whole business, or will it be hard to resist free federal money?

How the Supreme Court Is Making Public Education Itself Unconstitutional

At EdWeek, Johann Neem provides the most depressing take on the recent SCOTUS allowing parents a religious opt-out for any lessons they don't care for.

Survey: 60% of Teachers Used AI This Year and Saved up to 6 Hours of Work a Week

Speaking of lousy news, here are some depressing stats reported by The 74.

A District-by-District Accounting of the $6.2 Billion the U.S. Department of Education Has Held Back from Schools

I've linked to this piece from New America in two pieces this week, but I'm going to put the link here because it's an extraordinary resource for breaking down the damage from the regime's withholding of funding from schools across the country.


Thomas Ultican takes a look at Sacramento, where Kevin Johnson and Michelle Rhee have been busy folks.

The Real Reason Churches Advocate for Vouchers

Robert Repino writes at The Progressive about one of the big unanswered questions of vouchers. Churches want them and have pushed hard for them, but what do they do with the money?

Indiana Vouchers: Private School Coupons for Wealthy Families

Andy Spears breaks down yet another state voucher program that is all about taxpayers funding wealthy families and private schools.

Most U.S. adults say child care costs are a ‘major problem,’ a new AP-NORC poll finds

Yeah, you already knew this, but child care is crazy expensive-- so much so that folks aren't working because it would cost too much to have child care. 

MAGA’s Ugly Budget at Odds with Its Creepy Pronatalism

Jennifer Rubin joins the crowd pointing out that if the far right wants more (white) babies, maybe don't make life miserable for young parents.

La. Teachers: State Raise Funding Is on the 2026 Ballot

The indispensable Mercedes Schneider updates us with a picture of the kind of mess teachers have to go through in a state where the legislature decides if they can have a raise or not.

The resistance to “School choice” isn’t psychological—it’s principled.

Patty Levesque has enjoyed a full career as a serial reform grifter, and she recently published a piece arguing about the psychology of school choice resisters. Sue Kingery Woltanski explains why Levesque's argument is bunk.

When The U.S. Government Tried To Replace Migrant Farmworkers With High Schoolers

This is an NPR story from 2018 (reported by Gustavo Arellano). While we're hearing noise about making able-bodied people work in the fields to earn their Medicaid, it's worth looking back to 1965, when the feds decided high school jocks could replace those damned migrant farm workers. There's a reason that the program wasn't around in 1966. 

I include the music clips these days because when the news is lousy, it's good to remember what is beautiful about being human in the world. 



Sign up for my newsletter. It's free and you get all the stuff I crank out. 

Saturday, July 12, 2025

Sal Khan Flunks Lit Class

Sal Khan has established himself as one of the big names in the world of Tech Overlords Who Want To Reshape Education Even Though They Don't Know Jack About How It Works.  

These days Khan is pimping for AI, including publication of a terrible book about AI and education, and John Warner's review of that book ("An Unserious Book") pretty well captures the silly infomercial of that work. You should read the whole thing, but let me share this quick clip:

Khan is in the business of solving the problems he perceives rather than truly engaging with and collaborating with teachers on the actual work of teaching. He turns teaching into an abstract problem, one that just so happens to align with the capabilities of his Khanmigo tutor-bot.

More than fair. 

Khan's book touches on his love for Ender's Game, a book whose main point appears to have sailed far over Khan's head. The book series is about children who are tricked into running a genocidal space war by being hooked up to a gamified simulation. Khan thinks the book is about "how humans can transcend what we think of traditionally as being human."

That's not a one-off. Khan put his reading skills on display a few months ago in a Khan Academy blog post in which this "avid reader" offers five recommendations, complete with summaries, sort of.

Khan likes to say that Khan Academy was inspired by Isaac Asimov's Foundation series: "The concept of collecting and spreading knowledge for the benefit of humanity deeply resonated with me." Asimov's future history (now at about 18 books) is about many things, including human society being manipulated and directed by a robot with some mild psychic powers, but okay. Let's look at his five recommendations.

A Little History of the World

E. H. Gombrich covers history from cave dweller days to just after WWI. Khan appears to know what he's talking about here, saying that it "reads like a magical adventure that inspires true wonder as the reader journeys through our shared story on this planet." Though I'm not sure Khan caught the very humanist tones of the book. "In many ways, Gombrich has the same approach to education as Khan Academy does—showing that learning is best when paired with accessibility, joy, and wonder." Khan Academy videos are about joy and wonder? 

The Art of Living

Epictetus, a Greek Stoic philosopher, was a sort of classical Ben Franklin, and this book collects a whole bunch of his observations about Living a Good Life under headings like Your Will Is Always Within Your Power, Create Your Own Merit, and Events Are Impersonal and Indifferent. What Khan gets from it is some sweet, sweet marketing copy:

This quote resonates with me: “The key is to keep company only with people who uplift you, whose presence calls forth your best.” The sentence perfectly captures the spirit of Khan Academy. By surrounding ourselves with passionate, supportive learners like you, we can create an environment where everyone can thrive.

Three Body Problem

Cixin Liu's trilogy is a huge nut to crack, but Khan reads it as "a skilled blend of both scientific and philosophical speculation that challenges our assumptions about who we are and what our place is in the universe." And, okay--there's a lot to discuss and argue about the work, but our place in the universe appears to be painfully small, and the work is arguably a huge FAFO novel about humanity biting off way more than it can chew. Khan thinks it fits in an age of AI, when we should "double down on its positive uses while placing reasonable guardrails to mitigate the negative." I am pretty sure any number of SF novels could have been plugged in here.

Great Expectations

I taught this Charles Dickens classic innumerable times, and his summary would shame the dimmest freshman. 

The novel follows Pip, a young man whose life is shaped by opportunity, wealth, and societal expectations. Throughout history, these forces have dictated access to education and determined a person’s future. Pip’s journey highlights the inherent unfairness of this system.

Well, that's not what "expectations" means in this novel. And that's not exactly what shapes Pip's life. There's also sheer happenstance (because Dickens) and love and the social status strictures of Victorian England. Most of all, it's about Pip coming to terms with himself and his goals in life in a story of moral regeneration. I confess to loving the richness and depth of this novel, far deeper and more human than a complaint about fairness, and it is painful to see Khan reduce it to those few sentences.

A Connecticut Yankee In King Arthur's Court

Hoo boy, does Khan miss the boat on this one. 

In this book, Hank Morgan, a knowledgeable American engineer from the late 1800s, finds himself magically transported to King Arthur’s England in the 500s, a far more backward and ignorant time than the fanciful tales of legend. He also discovers that his knowledge of science and engineering is nothing short of magic to the people of Camelot. Through his experiences, he realizes that the best way to “liberate” people is to educate them in science, critical thinking, and humanist ideals.

Connecticut Yankee is one of Mark Twain's darkest works. It starts as a simple lampoon of the romanticized view of medieval times, but Morgan's "upgrades" to the past include the creation of firearms and other modern weaponry. Morgan wins a duel by shooting a bunch of knights with a pistol, and then in the climactic battle, uses modern technology to slaughter 30,000 cavalrymen (sent by the Catholic Church, which is a major antagonist in the novel). Thus, science "liberates" a whole bunch of people from breathing. If I wanted to pick a novel that demonstrates the corrupting dangers of technology, I could do worse than this one.

I would guess that Khan had ChatGPT write the list for him, except that I'm not sure that a bot wouldn't do a better job. I know it's just a little fluff piece for his company's blog, but damn-- someone who wants to commandeer the shape and direction of education ought to be better than this. This is a guy who sees what he wants to see and not what is actually there, a serious absence of critical thinking skills for someone working in education.

Sunday, July 6, 2025

ICYMI: Post-Independence Day Edition (7/5)

In our town, the annual fireworks display is set off pretty much across the river from my back yard. So every year we have a cookout, my brother and some friends come over, and after supper we play some traditional jazz in the backyard where anyone in the neighborhood can hear. Then the fireworks happen. There's no doubt that some years feel different than others, but our country has so many terrible chapters that it's impossible not to live through some of them. At the same time, our most immediate sphere of control involves watching out for the friends and family and community that are in our immediate vicinity. So we try to do that.

Meanwhile, I've got a reading list for you from the week. Remember to share.

South Georgia librarian is fired over LGBTQ children’s book included in summer reading display

Another one of these damned stories. She's got a lawyer; we'll see if that helps.

‘I Don’t Want Any Light Shining on Our District:’ Schools Serving Undocumented Kids Go Underground

The 74 was launched as a bad faith exercise in reformsterism and political hackery, but they still manage to put out valuable stories like this. Jo Napolitano looks at school districts that are trying to evade the long arm of the anti-diversity regime.

Cyber school facing wrongful death suit says it’s ‘unreasonable’ for teachers to see students weekly

I've written about Commonwealth Charter Academy many times, because they are a profiteering real estate-grubbing company disguised as a cyber school. Katie Meyer at Spotlight PA has this story about how CCA is resisting the state's mandate to make even a minimal effort to take care of its students.
 
Public Money, Private Control: Inside New Orleans’ Charter School Overhaul

Big Easy magazine does another post-mortem of the New Orleans charter experiment (which has now been running for twenty years) and finds, once again, that it's not as great a model as reformsters want to believe.

The Chan-Zuckerbergs stopped funding social causes. 400 kids lost their school.

From the Washington Post, one more example of why depending on flakey fauxlanthropists is not a great plan for schools.


Thomas Ultican looks at some of the forces trying to sell the Science of Reading

Making Sense of Trump's K-12 Budget Slashing

Jennifer Berkshire puts the regime budget slashing in the context of some broader, uglier ideologies at work.

Whatever Happened to Values Clarification

Oh, the misspent days of my youth, when Values Clarification was a thing. Larry Cuban takes us back to this little chapter of history.

Trump Administration Axes Funding for Key K-12 Education Programs on One Day’s Notice

Jan Resseger reports on the Trump initiative to just withhold funds from schools because, well, he feels like it.

Reading is the door to freedom

Jesse Turner on reading and his time spent teaching on the Tohono O'odham Reservation.

Fiscal Year Ends in Chaos for Florida Schools

Florida continues to set the standard for assaulting public education. Sue Kingery Woltanski reports on the latest budgetary shenanigans.

Firms belonging to wife of Rep. Donalds grabbed millions in charter school contracts

Speaking of Florida shenanigans, here's a piece from Florida Bulldog that looks at the many ways that Erika Donalds has enriched herself with education funds. You Florida fans will recognize many of the names in this piece by Will Bredderman.

Unconstitutional Voucher Program Can't Be Fixed Easily

Policy expert Stephen Dyer has been all over the recent successful challenge to an Ohio voucher program. Where do they go next? No place easy.

The Trump Administration is Ending Special Education!

Nancy Bailey explains how the new Trumpian budget slashing may well end special ed as we know it.

California colleges spend millions to catch plagiarism and AI. Is the faulty tech worth it?

Turnitin is now in the AI detection biz, and it's just as scammy as their old business model. Tara Garcia Mathewson at Cal Matters has the story.

The AI Backlash Keeps Growing Stronger

If you're thinking that maybe AI isn't all that awesome, you have plenty of company. Reece Rogers reports for Wired.

Make Fun Of Them

Ed Zitron points out that our tech overlords are mostly dopes, and we should make fun of them for it.

This week at Forbes.com I took a look at what the Senate's version of federal vouchers looks like. At the Bucks County Beacon, I broke down the Mahmoud v. Taylor decision.

Tuba Skinny is the band I'd like to play in when I grow up.




Subscribe to my newsletter and stay caught up on the Curmudgucation Institute output. 

Friday, July 4, 2025

What The Free Market Does For Education and Equality

"Unleash market forces" has been a rallying cry of both the right and some nominally on the left for the past twenty-some years. The free market and private operators do everything better! Competition drives improvement! 

It's an okay argument for toasters. It's a terrible argument for education.

The free market does not foster superior quality; the free market fosters superior marketing. And as we've learned in the more recent past, the free market also fosters enshittification-- the business of trying to make more money by actively making the product worse (see: Google, Facebook, and any new product that requires you to subscribe to get the use of basic features). 

We know what competition drives in an education market-- a competition to capture the students who give you the most marketable "success" for the lowest cost. The most successful school is not one that has some great new pedagogical miracle, but the one that does the best job of keeping high-testing students ("Look at our numbers! We must be great!") and getting rid of the high-cost, low-scoring students. Or, if that's your jam, the successful school is the one that keeps away all those terrible LGBTQ and heathen non-believer students. The kind of school that lets parents select a school in tune with their 19th century values.

The market, we are repeatedly told, distinguishes between good schools and bad ones. But what does the free market do really, really well?

The free market distinguishes between people who have money and people who don't.

This is what school choice is about, particularly the brand being pushed by the current regime.

"You know what I like about the free market," says Pat Gotbucks. "I can buy a Lexus. In fact, not only can I buy a Lexus, but if you can't, that's not my problem. I can buy really nice clothes, and if you can't, that's not my problem. Why can't everything work like that? Including health care and education?"

It's an ideology that believes in a layered society, in a world in which some people are better and some people are lesser. Betters are supposed to be in charge and enjoy wealth and the fruits of society's labor. Lessers are supposed to serve, make do with society's crumbs, and be happy about it. To try to mess with that by making the Betters give the Lessers help, by trying to elevate the Lessers with social safety nets or DEI programs-- that's an offense against God and man.

Why do so many voters ignore major issues in favor of tiny issues that barely affect anyone? Because the rich getting richer is part of the natural order of things, and trans girls playing girls sports is not.

What will the free market do for education? It will restore the natural order. It will mean that Pat Gotbucks can put their own kids in the very best schools and assert that what happens to poor kids or brown kids or Black kids or anybody else's kids is not Pat's problem. If Pat wants a benevolent tax dodge, Pat can contribute to a voucher program, confident that thanks to restrictive and discriminatory private school policies, Pat's dollars will not help educate Those People's Children. 

Pat's kids get to sit around a Harkness table at Phillips Exeter, and the children of meat widgets get a micro-school, or some half-baked AI tutor, and that's as it should be, because after all, it's their destiny to do society's grunt work and support their Betters. 

One of the huge challenges in this country has always been, since the first day a European set foot on the North American continent, that many folks simply don't believe that it is self-evident that all people are created equal. They believe that some people are better than others--more valuable, more important, more deserving of wealth, more entitled to rule. Consequently, they don't particularly believe in democracy, either (and if they do, it's in some modified form in which only certain Real Americans should have a vote).

The argument for the many layers of status may be "merit" or achievement or race or "culture" or, God help us, genetics. But the bottom line is that some folks really are better than others, that's an important and real part of life, and trying to fix it or compensate for it is just wrong. For these folks, an education system designed to elevate certain people is just wrong, and a system that gives lots of educational opportunities to people whose proper destiny is flipping burgers or tightening bolts is just wasteful. 

For these folks, what the free market in education means is that people get the kind of education that is appropriate for their place in life, and that the system should be a multi-tiered system in which families get the education appropriate to their status in society. And it is not an incidental feature of such a system that the wealthy do not have to help finance education for Other Peoples' Children.  

It's an ideology that exists in opposition to what we say we are about as a nation and in fact announces itself with convoluted attempts to explain away the foundational ideas of this country. Public education is just one piece of the foundation, but it's an important one. 

Monday, June 30, 2025

Lewis Black on AI in Education

Just in case you missed this bit from the Daily Show. As always with Black, language my mother would not appreciate. 


Sunday, June 29, 2025

ICYMI: Call Your Senator Edition (6/29)

The Board of Directors here at the Curmudgucation Institute is excited because tonight summer cross country sessions start up, and they would like very much to start running endlessly through rugged terrain again. Cross Country was their first (sort of) organized sport, and it was a hit. 

Meanwhile, however, the Senate GOP rolled out their new version of the Giant Bloodsucking Bill Friday after midnight and apparently plan to vote on it tomorrow, because when you're going to pass a bill that screws over everyone (including future national debt-bearing generations) except some rich guys, you don't want to do more in the light of day than you can avoid. 

Contact your senator today. I know it's unlikely to stem this wretched tide (hell, my GOP senator doesn't even live in my state), but if they are going to do this, they need to feel the heat. Put it on your to-do list for today.

Thanks, Supreme Court! It's now my right to prevent my kid from learning about Trump. 

I'm finishing up a piece about the Mahmoud court decision for the Bucks County Beacon, but this piece from Rex Huppke at USA Today nails it pretty well.

School choice, religious school tax carveouts run afoul of Senate’s Byrd rule

Federal vouchers are now out of the Giant Bloodsucking Bill. This piece from Juan Perez, Jr., explains why and how that happened (spoiler alert: not because Congress decided to make better choices).

Updated: Senate Parliamentarian Rejects School Vouchers in Big Beautiful Bill as Violation of Byrd Rule

Jan Resseger can take you through the federal voucher uproar in more detail here.

The Education Reform Zombie Loses (Again)

The school reform wing of the Democratic party has learned absolutely nothing over the years, and Jennifer Berkshire is tracking their latest attempt at a comeback.

Against Optimization

John Warner examines some of the strange assumptions our tech overlords make about an excellent life.

Schools Need to Prepare for Those Masked ICE Agents

The indispensable Mercedes Schneider addresses one of the great challenges of our day-- federal agent attacks on schools.

NC made vouchers open to any family, then many private schools raised tuition

Liz Schlemmer at WUNC reports on the completely unsurprising news that North Carolina schools taking taxpayer-funded vouchers are raising tuition.

Public Comment Opened on Bishop's Education Funding Ambush

Even in Alaska, there are legislators who would like to gut public education. Matthew Beck at Blue Alaskan looks at the latest play to gut funding.

Privatization Parallels for National Parks and Public Schools

Nancy Bailey on how school privatization is much like the attempts to undercut our national park system.

Bugs, Brains, and Book Pirates

Benjamin Riley with not one, but three stories from the AI skepticism beat. A naturalist group stands up to AI, that anti-AI study you keep reading about is bunk, and a court rules on stealing books for training.

Florida’s “School Choice” Boom? Most Families Still Choose Public Schools

No state has worked harder to kneecap public education than Florida. And yet, as Sue Kingery Woltanski reports, public schools are still the leading choice of Florida families. 

Voucher Judge Recognizes Reality

Policy expert Stephen Dyer has been all over the recent court victory over Ohio's EdChoice voucher program. He has several excellent posts on the subject, but this one is a fine place to start. Also, this one about voucher lies. 

Why Does Every Commercial for A.I. Think You’re a Moron?

This New York Times piece from Ismail Muhammad is pretty great. "Ads for consumer A.I. are struggling to imagine how the product could improve your day — unless you’re a barely functioning idiot."

ChatGPT Has Already Polluted the Internet So Badly That It's Hobbling Future AI Development

At Futurism, Frank Landymore considers the prospects of an endless AI slop loop.


Eryk Salvaggio at Tech Policy Press gets a little wonky about considering what is behind the curtain, and what is just the curtain itself.

This week at Forbes.com I wrapped up the Ohio voucher decision. 

There is a thing that happens with musicians when you've performed the same stuff a million times-- you can just add bits and pieces and stuff while preserving the main thread of the performance. And if you are comfortable with each other, it's extra cool. Louis Prima and Keely Smith and Sam Butera's band were the epitome of this; in live performance you get everything from the record, and so much more. 




Come join my newsletter on substack and get all my various stuffs for free in your email. 

Thursday, June 26, 2025

Mattel Promises AI Toys

Today in our latest episode of Things Nobody Asked For, we've got the announcement that Mattel has teamed up with the folks at OpenAI to bring you toys that absolutely nobody has asked for.

It's a "strategic collaboration," say the folks at Mattel corporate. The announcement comes with lots of corporate argle bargle bullshit:
Brad Lightcap, Chief Operating Officer at OpenAI, said: "We're pleased to work with Mattel as it moves to introduce thoughtful AI-powered experiences and products into its iconic brands, while also providing its employees the benefits of ChatGPT. With OpenAI, Mattel has access to an advanced set of AI capabilities alongside new tools to enable productivity, creativity, and company-wide transformation at scale." 
Josh Silverman, Chief Franchise Officer at Mattel, said: “Each of our products and experiences is designed to inspire fans, entertain audiences, and enrich lives through play. AI has the power to expand on that mission and broaden the reach of our brands in new and exciting ways. Our work with OpenAI will enable us to leverage new technologies to solidify our leadership in innovation and reimagine new forms of play.”

You'll note that the poor meat widgets who work for Mattel are going to have to deal with AI and the "new tools to enable productivity, creativity, and company-wide transformation at scale." 

As for play, well, who knows. Mattel's big sellers include Uno. If you don't have card-playing children in your home, you may be unaware that Uno now comes in roughly 647 different versions, including some that have new varieties of cards ("Draw 125, Esther!") and some that involve devices to augment game play, like a card cannon that fires cards at your face in an attempt to get you to drop out of the game before your face is sliced to ribbons. So maybe the AI will design new cards, or we'll have a new tower that requires you to eat a certain number of rocks based on whatever credit score it makes up for you.

Mattel is also the Hot Wheels company, so I suppose we could have chatting toy cars that trash talk each other. Maybe they could make the "bbbrrrrrrrrrrrrrooom" motor noises more quickly and efficiently, leaving children more free time to devote to other stuff. The AI could also design new cars; I'm holding out for the Datamobile that collects as much family surveillance data as possible and then drives itself to a Mattel station where it can download all that surveillance info to... well, whoever wants to pay for it.

But I think the real possibilities are with Mattel's big seller-- Barbie! Imagine a Barbie who can actually chat with little girls and have real simulated conversations so that the little girls don't have to have actual human friends. 

The possibilities of this going horribly wrong are as limitless as a teen's relationship questions. Which of course are being asked of chatbots, because they trained on the internet, and the internet is nothing if not loaded with sexual material. So yes, chatbots are sexting with teens. Just one of the many reasons that some authorities suggest that kids under 18 should not be messing with AI "companions" at all. 

Maybe Mattel isn't going to do anything so rash. Maybe Barbie will just have a more 21st century means of spitting out one of several pre-recorded messages ("Math is fun!"). Please, God, because an actual chatbot-powered Barbie would be deeply monstrous.

Scared yet? Just remember-- everything a bot "hears" and responds to it can also store, analyze, and hand off to whoever is interested. Don't think of it as giving every kid a "smart" toy-- think of it as giving every kid a monitoring device to carry and be surveilled by every minute of the day. And yes, a whole bunch of young humans are already mostly there thanks to smartphones, but this would expand the market. Maybe you are smart enough to avoid giving your six year old a smartphone, but gosh, a doll or a car that can talk with them, like a Teddy Ruxpin with less creep and more vocabulary-- wouldn't that be sweet.

It's not clear to me how much AI capability can be chipped into a child's toy (do we disguise it by giving Barbie an ankle bracelet?), especially if the toymakers don't figure out how to get Barbie or the Datamobile logged into the nearest wi-fi. Best case scenario is that this mostly results in shittier working conditions for people at Mattel and toys that disappoint children by being faux AI. Worst case is a bunch of AI-and-child horror stories, plus a monstrous expansion of the surveillance state (buy Big Brother Barbie today!). 

But I have a hard time imagining any universe in which we look back on this "team" and think, "Gosh, I'm really glad that happened."

Sunday, June 22, 2025

ICYMI: Pride Edition (6/22)

This weekend my little under-50K county hosted its second annual Pride in the Park event, and it was a lovely day for it. Plenty of friends, many fun booths, some good food, live music--everything necessary for a fun park festival. A really nice way to get the summer under way.

The Institute's mobile office (aka my aging laptop) self-obliterated about a week ago, so purchasing and setting up the replacement has been sucking up time here. You really forget just how many apps and passwords and bits and pieces you have loaded into a machine until you have to replace them all. Meanwhile, I am really trying to keep my resolve to prioritize writing the book over posting and other ancillary activities, but sometimes the world makes it really hard. 

 A reminder that if you are reading on the original mother ship, there's a whole list of links to excellent writing about education. Now here's the list for the week.

Broad network of anti-student-inclusion groups impacts public education

The Southern Poverty Law Center takes a look at the groups and tactics working against diversity and inclusion in education. Not encouraging, but informative.

Can AI identify safety threats in schools? One district wants to try.

Karina Elwood at the Washington Post reports on one more leap forward in the super-creepy surveillance state. Omnipresent cameras plus only-kind-of-reliable AI. What could possibly go wrong?

Abstinence, patriotism and monogamy all required curriculum under new Ohio bill

Ohio's legislature is working hard to become one of the worst in the nation, what with mandating their own social ideology for students. Report from Katie Milard at NBC4.

What’s better than DEI?

One of the big brains at the U of Arkansas's department of dismantling public ed has some thoughts about DEI. Nancy Flanagan explains just how full of it he is. 


A reality-impaired op-ed from two old-school reformsters sends Thomas Ultican on a trip down memory lane, with pit stops to look at some of the bunkum that has appeared along the way. When folks use Michelle Rhee as an example of awesomeness, you know you're in Bizarro World.

AI Is Not the Inevitable Answer to What Ails Us: We've Seen Artificial Solutions Before

John Robinson reminds us that we've seen this movie before, and the latest miracle cure is not inevitable.

It's Compassion That Gets Stuff Done

Teacher Tom explains that reason and logic aren't necessarily the tools that students need all the time.

Oak Ridge Schools Bows to Book Banning Legislation by the Tennessee Taliban

James Horn provides yet one more example of a gutless school district making absurd choices for books to ban from its libraries-- like medical texts and books about important artists like Donatello and Edward Hopper.

War Pigs

Audrey Watters offers a ton of great links this week, plus solid arguments against AI in education. You really should subscribe.

Trump’s ICE Raids Traumatize Children, Frighten Parents, Reduce School Attendance, and Undermine School Climate

Jan Resseger points out that maybe it's not great for schools to be repeatedly raided by the ICE thuggery patrol.

Code Red: How AI Is Set to Supercharge Racism, Rewrite History, and Hijack Learning

Apparently I'm reading a lot about AI these days. Here's a take from Julian Vasquez-Heilig to remind us that AI is not remotely objective.

Don't Buy the AI Hype

Have You Heard, the podcast from Jack Schneider and Jennifer Berkshire, hits its 200th episode with a stacked lineup of Audrey Watters, Ben Riley, and John Warner discussing AI hype (there's a transcript here, too, if you're one of those). 

Plato was an AI skeptic

Benjamin Riley addresses the argument that opposition to AI is just like when Plato opposed writing, and we know he was wrong about that, so...

AI in the Classroom with Brett Vogelsinger

Of all the AI non-skeptics out there, Brett Vogelsinger seems to have the most thoughtful views on how to incorporate it in the classroom. This interview with Marcus Luther gives you a sense of what he's talking about (again, transcript for those who'd rather read than listen).

School Choice without equity is cover for inequality in our public schools

Jesse Turner talks to Robert Cotto (Trinity College) about the equity issues of school choice. 

I Tried To Make Something In America (The Smarter Scrubber Experiment)

Not directly related to education, but I found this video fascinating. The guy at Smarter Every Day sets out to make a grill scrubber in America. The process shows some of the barriers, but it particularly illustrates the loss of tool and die workers and what that means to US industry. 

Is there a more extraordinary friendship than that between Lady Gaga and Tony Bennett in the end stretch of his career? Those final concerts, with 95-year-old Bennett in the grip of Alzheimer's, becoming himself again through the music, and Gaga supporting him through it-- I mean, damn. Sometimes we humans can be beautiful, and it's important to remember that. Here's a Cole Porter song from their last album together.


Join me on the newsletter. It's free and easy.

Friday, June 20, 2025

Should AI Make Students Care?

Over the years I have disagreed with pretty much everything that Thomas Arnett and the Christensen Institute have had to say about education (you can use the search function for the main blog to see), but Arnett's recent piece has some points worth thinking about. 

Arnett caught my attention with the headline-- "AI can personalize learning. It can’t make students care." He starts with David Yeager's book 10 to 25: The Science of Motivating Young People.

Yeager challenges the prevailing view that adolescents’ seemingly irrational choices—like taking risks, ignoring consequences, or prioritizing peer approval over academics—result from underdeveloped brains. Instead, he offers a more generous—and frankly more illuminating—framing: adolescents are evolutionarily wired to seek status and respect.

As someone who worked with teenagers for 39 years, the second half of Yeager's thesis feels true. I'd argue that both ideas can be true at once-- teens want status and respect and their underdeveloped brains lead them to seek those things in dopey ways. But Arnett uses the status and respect framing to lead us down an interesting path.

[T]he key to unlocking students’ motivation, especially in adolescence, is helping them see that they have value—that they are valued by the people they care about and that they are meaningful contributors to the groups where they seek belonging. That realization has implications not just for how we understand student engagement, but for how we design schools…and why AI alone can’t get us where we need to go.

This leads to a couple of other points worth looking at.

"Motivation is social, not just internal." In other words, grit and growth mindset and positive self-image all matter, but teens are particularly motivated by how they are seen by others, particularly peers. Likewise, Arnett argues that it's a myth that self-directed learning is just for a handful of smarty-pants auto-didacts. He uses Bill Gates and Mark Zuckerberg as examples, which is interesting as they are both excellent examples of really dumb smart people, so maybe autodidacting isn't all it's cracked up to be. But his point is that most students are autodidacts-- just about things like anime and Taylor Swift. And boy does that resonate (I have a couple of self-taught Pokemon scholars right here). I'll note that all these examples point to auto-didactation that results in a fairly narrow band of learning, but let's let that go for now.

Arnett follows this path to an observation about why schools are often motivational dead zones:

The problem is that school content often lacks any social payoff. It doesn’t help them feel valued or earn respect in the social contexts they care about. And so, understandably, they disengage.

And this

Schools typically offer only a few narrow paths to earn status and respect: academics, athletics, and sometimes leadership roles like Associated Student Body (ASB) or student council. If you happen to be good at one of those, great—you’re in the game. But if you’re not? You’re mainly on the sidelines.

Students want to be seen, and based on my years in the classroom, I would underline that a zillion times. 

The AI crew's fantasy is that students sitting in front of a screen will be motivated because A) the adaptive technology will hit them with exactly the right material for the student and B) shiny! Arnett explains that any dreams of AI-aided motivation are doomed to failure. 

AI won't fix this

Arnett's explanation is not exactly where I expected we were headed. Human respect is scarce, he argues, because humans only have so much time and attention to parcel out, and so it's valuable. AI has infinite attention resources and can be programmed to be always there and always supportive. Arnett argues that makes its feedback worthless in terms of status and respect. 

I'm not sure we have to think that hard about it. Teens want status and respect, especially from their peers. The bot running their screen is neither a peer nor even an actual human. It cannot confer status or respect on the student, nor is it part of the larger social network of peers. 

Arnett argues that this might explain the 5% problem-- software that works for only a few students, in part because 95% of students do not use the software as recommended. Because why would they? The novelty wears off quickly, and truly, entertainment apps don't do much better. I don't know what the industry figures say, but my anecdotal observation was that a new app went from "Have you seen this cool thing!" to "That old thing? I haven't used it in a while" in less than a month, tops. 

What keeps students coming back, I believe, isn’t just better software. It’s the social context around the learning. If students saw working hard in these programs as something that earned them status and respect—something that made them matter in the eyes of their peers, teachers, and parents—I think we’d see far more students using the software at levels that accelerate their achievement. Yet I suspect many teachers are disinclined to make software usage a major mechanism for conferring status and respect in their classrooms because encouraging more screen time doesn’t feel like real teaching.

From there, Arnett is back to the kind of baloney that I've criticized for years. He argues that increasing student motivation is super-important, and, okay, I expect the sun to rise in the East tomorrow. But he points to MacKenzie Price's Alpha School, the Texas-based scam that promises two-hour learning, and Khan Academy as examples of super-duper motivation, using their own company's highly inflated results as proof. And he compares software to "high dosage tutoring," which isn't really a thing.

Arnett has always been an edtech booster, and he's working hard here to get the end of a fairly logical path to somehow provide hope for the AI edtech market. 

But I think much of what he says here is valuable and valid-- that AI faces a major hurdle in classrooms because it offers no social relationship component, little opportunity to provide students with status or respect. Will folks come up with ways to use AI tools that have those dimensions? No doubt. But the heart of Arnett's argument is an explanation of one more reason that sitting a student in front of an AI-run screen is not a viable future for education. 


Wednesday, June 18, 2025

AI, Facing the Dark, and Human Sparknotes

The New York Times unleashed a feature section about AI, and it is just a big fat festival of awful.

There's a conversation between Kevin Roose and Casey Newton, hosts of the podcast Hard Fork, named, perhaps, after the object I want to drive into my own brain while reading this conversation. 

These days I read this kind of stuff for the same reason that I leave many far right voices unblocked on my social media-- because if you're going to face reality, you have to face the dark parts where people believe awful stuff. It's ugly, but it won't go away just because you ignore it.

So here's Roose saying that AI has replaced Google to answer questions like "What setting do I put this toaster oven on to make a turkey melt?" Or his friend who now gets through the morning commute by putting ChatGPT on voice mode and asking it to teach them about modern art or whatever. And "another person I know just started using ChatGPT as her therapist after her regular human therapist doubled her rates." 

The piece is loaded with quotable foolishness, like this:

But I confess that I am not as worried about hallucinations as a lot of people — and, in fact, I think they are basically a skill issue that can be overcome by spending more time with the models. Especially if you use A.I. for work, I think part of your job is developing an intuition about where these tools are useful and not treating them as infallible. If you’re the first lawyer who cites a nonexistent case because of ChatGPT, that’s on ChatGPT. If you’re the 100th, that’s on you.

Intuition? I suppose if you lack actual knowledge, then intuition will have to do. But this will be a recurring theme-- AI's lack of expertise in a field can be compensated for by a human with expertise in that field. How does that shake out down the road when people don't have expertise because they have leaned on AI their whole lives? Hush, you crazy Luddite.

Newton says he uses LLMs to check for spelling, grammatical, and factual errors, and of course the first two aren't really AI jobs, but these days we just slap an AI label on everything a computer can do. Factual errors? Yikes. Roose says he likes AI for tasks where there's no right or wrong answer. They both like it for brainstorming. Also for searching documents, because AI is easier than Control F? Mistakes? Well, you know, humans aren't perfect, either.  

Roose notes that skeptics say the bots are just predicting the next word in a sentence, that they aren't capable of creative thinking or reasoning, just a fancy autocomplete, and that all of that will turn this into a flash in the pan. Roose has neatly welded together two separate arguments-- A) bots aren't actually thinking, just running word token prediction models, and B) AI will wash out soon. Those are not related. In fact, I think I'm not unusual in thinking that A is true, and B is to be hoped for, but unlikely. Anyway, Roose asks Newton to respond, and the response is basically, "Well, a lot of people are making a lot of money." 

Roose and Newton are not complete dopey fanboys, and at one point Roose says something I sort of agree with:

I think there are real harms these systems are capable of and much bigger harms they will be capable of in the future. But I think addressing those harms requires having a clear view of the technology and what it can and can’t do. Sometimes when I hear people arguing about how A.I. systems are stupid and useless, it’s almost as if you had an antinuclear movement that didn’t admit fission was real — like, looking at a mushroom cloud over Los Alamos, and saying, “They’re just raising money, this is all hype.” Instead of, “Oh, my God, this thing could blow up the world.”

"Clear view of the technology" and "hype" are doing a lot of work here, and Roose and Newton fall into the mistake of straw-manning AI skeptics by conflating skeptics with deniers (a mistake Newton has made before and to which Ben Riley responded well). 

The other widely quoted chunk of the discussion is this one from Roose:

The mental model I sometimes have of these chatbots is as a very smart assistant who has a dozen Ph.D.s but is also high on ketamine like 30 percent of the time. But also, the bar of 100 percent reliability is not the right one to aim for here: The base rate that we should be comparing with is not complete factuality but the comparable smart human given the same task.

But the bots don't have Ph.D.s, and I don't want to work with someone juiced up on ketamine, and if bots aren't any better than humans, why am I using them? 

The article is entitled "Everyone Is Using AI for Everything," which at least captures the concerning state of affairs. 

Take the re-emergence of disgraced author and professional asshat James Frey (the guy who was shamed by Oprah for his fake memoir) who just put an AI-created book on the Book of the Month list. If that seems like a problem, Frey explained why he was happy to let AI do most of his work back in 2023.

I have asked the AI to mimic my writing style so you, the reader, will not be able to tell what was written by me and what was generated by the AI. I am also not going to tell you or make any indication of what was written by me and what was generated by the AI. It was I, the writer, who decided what words were put on to the pages of this book, so despite the contributions of the AI, I still consider every word of this book to be mine. And I don’t care if you don’t.

And there's the other article in the NYT section, a piece about using NotebookLM, a bot designed to help writers. "AI Is Poised To Rewrite History," says editorial director Steve Wasik. He talks about how author Steven Johnson used the bot (which he had helped build) to sift through the research and generate story ideas. Muses Wasik:

Like most people who work with words for a living, I’ve watched the rise of large-language models with a combination of fascination and horror, and it makes my skin crawl to imagine one of them writing on my behalf. But there is, I confess, something seductive about the idea of letting A.I. read for me — considering how cruelly the internet-era explosion of digitized text now mocks nonfiction writers with access to more voluminous sources on any given subject than we can possibly process. This is true not just of present-day subjects but past ones as well: Any history buff knows that a few hours of searching online, amid the tens of millions of books digitized by Google, the endless trove of academic papers available on JSTOR, the newspaper databases that let you keyword-search hundreds of publications on any given day in history, can cough up months’ or even years’ worth of reading material. It’s impossible to read it all, but once you know it exists, it feels irresponsible not to read it.

What if you could entrust most of that reading to someone else … or something else?

On one level, I get it. I do a ton of reading. Did a ton of reading when I was teaching so that I could better represent the material. I do a ton of reading for the writing I do, and yes-- sometimes you tug on a string and a mountain falls in your lap and you despair of reading enough of it to get a picture of what's going on.

But, you know, working out is sweaty and painful. What if I could entrust most of that exercising to someone or something else? Keeping in touch with the far-flung members of my family is really hard and time consuming. What if I could entrust most of that work to someone or something else? Preparing and eating food is time consuming and not always fun. What if I could entrust most of that work to someone or something else? 

Humaning is hard. Maybe I could just get some tech to human for me.

Any day now

I know. It's not a simple issue. I wear glasses and, in fact, have plastic lenses inserted in my human eyeballs. I drive a car. I enjoy a variety of technological aids that help me do my humaning both personally and professionally. But there's a line somewhere, and some of these folks have uncritically sailed past it, cheerfully pursuing a future in which they can hand off so many tasks to the AI that they can... what? Settle down to a happy life as a compact, warm ball of flesh in a comfortable plasticine nest, lacking both cares and autonomy?

At what point do folks say, "No, you can't have that. That business belongs to me, a human."

But back to the specifics at hand.

I don't know how one separates the various parts of writing into categories like Okay If AI Cuts This Corner For Me and This Part Really Matters So That I Should Do It Myself (or, like Frey, simply decide that none of it is important except the part where you get to sign checks). Brainstorming, topic generation, research-- these are often targeted for techification, but why? I am often asked how I am able to write so much and so quickly, and part of my answer has always been "low standards," but it is also that I read so much that I have a ton of stuff constantly being churned over in my brain and my writing is just the result of a compulsion to process all that stuff into a written form.

That points to a major issue that Roose and Newton and Wasik all miss. Using the bot as a research assistant or first reader or brainstormer can only hope to be useful to a human who is already an expert. Steven Johnson can only use what his AI research bot hands him because he is expert enough to understand it. The notion that a human can use intuition to check the AI's work is a dodge-- what the human needs is actual expertise.

That may be fine for the moment, but what happens when first hand experience and expertise are replaced by "I read an AI summary of some of that stuff"?

At least one of Wasik's subjects wrestles with the hypocrisy problem of an educator who tells students to avoid the plagiarism machine and then employs the same bots to help with scholarship. But I wish more were wrestling with the basic questions of what parts of writing and reading shouldn't be handed over to someone or something else. 

In some ways, this is an old argument. I talked to my students about Cliff notes and, later, Sparknotes, and I always made two points. First, what you imagine as an objective judgment is not, and by using their work instead of your own brain, you are substituting their judgment for your own. Not only substituting the final project, but skipping your own mental muscle-building exercise. Second, you are cheating yourself of the experience of reading the work. It's like kissing your partner at the end of an excellent date-- if it's worth doing, it's worth doing yourself. 

No doubt there are some experiences that aren't necessarily worth having (e.g. spending ten years scanning data about certain kinds of tumors). But I'd appreciate a little more thoughtfulness before we sign everyone up to use sparknotes for humaning. 

Sunday, June 15, 2025

ICYMI: Kingless Edition (6/15)

I hope your day yesterday was a good one, regardless of what you did with it. What times we live in. 

I'll remind you this week that everyone can amplify. If you read it and think it's important, share it. Also, subscribe to the blog, newsletter, or whatever. Bigger numbers mean greater visibility. And it doesn't hurt to throw in a little money for those who depend on their writing to help put bread on the table. Clicking and liking and sharing are not quite up there with getting actively involved, but they can provide the information and motivation that get folks out there. 

So here's what we've got this week.

New data confirms NC school voucher expansion disproportionately benefits wealthy private school families

Gosh, what a surprise. North Carolina school vouchers are not a rescue for the poor, but a handout for the wealthy. Kris Nordstrom explains the findings.

12News I-Team finds Arizona's $1 billion voucher experiment hurting high-performing public districts and charter schools

A news team discovers that besides subsidizing wealthy private school patrons, Arizona's voucher program helped students "escape" top-rated public schools.

Trump and Republicans Want Taxpayers to Fund Their Pet Project: Private Schools

Jeff Bryant reports for Our Schools on the GOP goal of taxpayer-funded private schools.

What a Difference Teachers Could Make With $45 Million!

Nancy Bailey points out that $45 million could buy many things more desirable than a military parade for Dear Leader.

Teach Your Children Well

Nancy Flanagan reflects on the No Kings protests and our responsibilities to each other.

Bird of Pray

Audrey Watters hits it again.
We have bent education – its budgets, its practices – to meet the demands of an industry, one that has neatly profited from the neoliberal push to diminish and now utterly dismantle public funding.

Some Thoughts about Science Education Reforms in the Past Century

Larry Cuban looks into the conflicts involved in teaching science. What are we trying to teach, and how are we trying to teach it?

Trump’s Policies Would Undermine Public School Equity and Launch Costly Federal School Vouchers

Jan Resseger looks at the threats to public education in the Trump budget ideas. 

The Myths of GPA in College Admissions Explained

Akil Bello, testing and college admissions guru, explains that your GPA isn't what you-- or the college you're applying to-- think it is.

The Lunch Ladies are Not Smiling

Thank goodness that lawyers like Andru Volinsky exist to plough through the legal esoterica of legislative attempts to avoid funding schools for Those Peoples' Children. New Hampshire school tax law takes an odd turn with the latest court decision.

Brief FEFP Budget Update

And speaking of funding esoterica, Sue Kingery Woltanski tries to keep tabs on the funding shenanigans in Florida, the laboratory where the nation's worst education policies are nursed to life.

Fuel of delusions

Benjamin Riley with a heck of a personal story about AI and delusions and what we really need to know.

At Forbes.com, I looked at the latest attempt to fix Pennsylvania's cyber charter funding, and a little-noted Supreme Court case that could have major effects for schools across the country. 

Here's a little David Byrne. Many is the time I have pulled this song out to give me a little boost. 



You can sign up for my newsletter that will keep you up to date with whatever I'm putting out into the world. And it's free, now and forever.


Monday, June 9, 2025

Another Bad AI Classroom Guide

We have to keep looking at these damned things because they share so many characteristics that we need to learn to recognize them when we see them again and react properly, i.e. by throwing moldy cabbage at them. I read this one so you don't have to.

And this one will turn up lots of places, because it's from the Southern Regional Education Board

SREB was formed in 1948 by governors and legislators; it now involves 16 states and is based in Atlanta. Although it involves legislators from each of the states, some appointed by the governor, it is a non-partisan, nonprofit organization. In 2019 they handled about $18 million in revenue. In 2021, they received a $410K grant from the Gates Foundation. Back in 2022, SREB was a cheerful sock puppet for folks who really wanted to torpedo tenure and teacher pay in North Carolina. 

But hey-- they're all about "helping states advance student achievement." 

SREB's "Guidance for the Use of AI in the K-12 Classroom" has a big fat red flag right off the top-- it lists no authors. In this golden age of bullshit and slop, anything that doesn't have an actual human name attached is immediately suspect.

But we can deduce who was more or less behind this-- the SREB Commission on Artificial Intelligence in Education. Sixteen states are represented by sixty policymakers, so we can't know whose hands actually touched this thing, but a few names jump out.

The chair is South Carolina Governor Henry McMaster, and his co-chair is Brad D. Smith, president of Marshall University in West Virginia and former Intuit CEO. As of 2023, he passed Jim Justice as the richest guy in WV. And he serves on lots of boards, like Amazon and JPMorgan Chase. Some states (like Oklahoma) sent mostly legislators, while some sent college or high school computer instructors. There are also some additional members including Youngjun Choi (UPS Robotics AI Lab), Kim Majerus (VP US Public Sector Education for Amazon Web Services) and some other corporate folks.

The guide is brief (18 pages). Its basic pitch is, "AI is going to be part of the working world these students enter, so we need schools to train these future meat widgets so we don't have to." The introductory page (which is certainly bland, vague, and voiceless enough to be a word string generated by AI) offers seven paragraphs that show us where we're headed. I'll paraphrase.

#1: Internet and smartphones mean students don't have to know facts. They can just skip to the deep thinking part. But they need critical thinking skills to sort out online sources. How are they supposed to deep and critically think when they don't have a foundation of content knowledge? The guide hasn't thought about that. AI "adds another layer" by doing all the work for them so now they have to be good prompt designers. Which, again, would be hard if you didn't know anything and had never thought about the subject.

#2: Jobs will need AI. AI must be seen as a tool. It will do routine tasks, and students will get to engage in "rich and intellectually demanding" assignments. Collaborative creativity! 

#3: It's inevitable. It is a challenge to navigate. Stakeholders need guidance to know how to "incorporate AI tools while addressing potential ethical, pedagogical, and practical concerns." I'd say "potential" is holding the weight of the world on its shoulders. "Let's talk about the potential ethical concerns of sticking cocaine in Grandma's morning coffee." Potential.

#4: This document serves as a resource. "It highlights how AI can enhance personalized learning, improve data-driven decision-making, and free up teachers’ time for more meaningful student interactions." Because it's going to go ahead and assume that AI can, in fact, do any of that. Also, "it addresses the potential risks, such as data privacy issues, algorithmic biases, and the importance of maintaining the human element in teaching." See what they did there? The good stuff is a given certainty, but the bad stuff is just a "potential" down side.

#5: There's a "skills and attributes" list in the Appendix.

#6: This is mostly for teachers and admins, but lawmakers could totally use it to write laws, and tech companies could develop tech, and researchers could use it, too! Multitalented document here.

#7: This guide is to make sure that "thoughtful and responsible" AI use makes classrooms hunky and dory.

And with that, we launch into The Four Pillars of AI Use in the Classroom, followed with uses and cautions.

Pillar #1
Use AI-infused tools to develop more cognitively demanding tasks that increase student engagement with creative problem-solving and innovative thinking.

"To best prepare students for an ever-evolving workforce..." 

"However, tasks that students will face in their careers will require them..."

That's the pitch. Students will need to be able to think "critically and creatively." So they'll need really challenging and "cognitively demanding" assignments. "Now more than ever, students need to be creators rather than mere purveyors of knowledge."

Okay-- so what does AI have to do with this?
AI draws on a broad spectrum of knowledge and has the power to analyze a wide range of resources not typically available in classrooms.

This is some fine-tuned bullshit here, counting on the reader to imagine that they heard something that nobody actually said. AI "draws on" a bunch of "knowledge" in the sense that it sucks up a bunch of strings of words that, to a human, communicate knowledge. But AI doesn't "know" or "understand" any of it. Does it "analyze" the material? Well, in the sense that it breaks the words into tokens and performs complex maths on them, there is a sort of analysis. But AI boosters really, really want you to anthropomorphize AI, to think about it as human-like in nature and not alien and kind of stupid.

"While AI should not be the final step in the creative process, it can effectively serve in the early stages." Really? What is it about the early stages that makes them AI-OK? I get it--up to a point. I've told students that they can lift an idea from somewhere else as long as they make it their own. But is the choice of what to lift any less personal or creative than what one does with it? Sure, Shakespeare borrowed the ideas behind many of his plays, but that decision about what to borrow was part of his process. I'd just like to hear from any of the many people who think AI in beginning stages is okay why exactly they believe that the early stages are somehow less personal or creative or critical thinky than the other stages. What kind of weird value judgment is being made about the various stages of creation?

Use AI to "streamline" lesson planning. Teach critical thinking skills by, and I'm only sort of paraphrasing here, training students to spot the places where AI just gets stuff wrong. 

Use AI to create "interactive simulations." No, don't. Get that AI simulation of an historical figure right out of your classroom. It's creepy, and like much AI, it projects a certainty in its made-up results that it does not deserve. 

Use AI to create a counter-perspective. Or just use other humans.

Cautions? Everyone has to learn to be a good prompt engineer. In other words, humans must adjust themselves to the tool. Let the AI train you. 

Recognize AI bias, or at least recognize it exists. Students must learn to rewrite AI slop so that it sounds like the student and not the AI, although how students develop a voice when they aren't doing all the writing is rather a huge challenge as well. 

Also, when lesson planning, don't forget that AI doesn't know about your state standards. And if you are afraid that AI will replace actual student thinking, make sure your students have thought about stuff before they use the AI. Because the assumption under everything in this guide is that the AI must be used, all the time.

Pillar #2
Use AI to streamline teacher administrative and planning work.

The guide leads with an excuse-- "teachers' jobs have become increasingly more complex." Have they? Compared to when? The guide lists the usual features of teaching-- same ones that were there when I entered the classroom in 1979. I call bullshit. 

But use AI as your "planning partner." I am sad that teachers are out there doing this. It's not a great idea, but it makes a grim kind of sense for a generation that entered the profession thinking that teacher autonomy was one of those old-timey things, as relevant as those penny-farthings that grampa goes on about. And these suggestions for use? Yikes.

Lesson planning! Brainstorming partner! And, without a trace of irony, a suggestion that you can get more personalized lessons from an impersonal non-living piece of software.

Let it improve and enhance a current assignment. Meh. Maybe, though I don't think it would save you a second of time (unless you didn't check whether AI was making shit up again). 

But "Help with Providing Feedback on and Grading Student Work?" Absolutely not. Never, ever. It cannot assess writing quality, it cannot do plagiarism detection, it cannot reduce grading bias (just replace it). If you think it even "reads" the work, check out this post. Beyond the various ways in which AI is not up to the task, it comes down to this-- why would your students write a work that no other human being was going to read?

Under "others," the guide offers things like drafting parent letters and writing letters of recommendation, and again, for the love of God, do not do this! Use it for translating materials for ESL students? I'm betting translation software would be more reliable. Inventory of supplies? Sure, I'm sure it wouldn't take more than twice as much time as just doing it by eyeball and paper. 

Oh, and maybe someday AI will be able to monitor student behavior and engagement. Yeah, that's not creepy (and improbable) at all.

Cautions include a reminder of AI bias, data privacy concerns, and overreliance on AI tools and decisions, and I'm thinking "cautions" is underselling the issues here. 

Pillar #3
Use AI to support personalized learning.

The guide starts by pointing out that personalized learning is important because students learn differently. Just in case you hadn't heard. That is followed by the same old pitch about dynamically adaptive instruction based on data collected from prior performance, only with "AI" thrown in. Real time! Engagement! Adaptive!

AI can provide special adaptations for students with special needs. Like text-to-speech (is that AI now?). Also, intelligent tutoring systems that "can mimic human tutors by offering personalized hints, encouragement and feedback based on each student's unique needs." So, an imitation of what humans can do better. 

Automated feedback. Predictive analytics to spot when a student is in trouble. AI can pick student teams for you (nope). More of the same.

Cautions? There's a pattern developing. Data privacy and security. AI bias. Overreliance on tech. Too much screen time. Digital divide. Why those last two didn't turn up in the other pillars I don't know. 

Pillar #4
Develop students as ethical and proficient AI users.

I have a question-- is it possible to find ethical ways to use unethical tools? Is there an ethical way to rob a bank? What does ethical totalitarianism look like?

Because AI, particularly Large Language Models, is based on massive theft of other people's work. And that's before we get to the massive power and water resources being sucked up by AI. 

But we'll notice another point here-- the problems of ethical AI are all the responsibility of the student users. "Teaching students to use AI ethically is crucial for shaping a future where technology serves humanity’s best interests." You might think that an ethical future for AI might also involve the companies producing it and the lawmakers legislating rules around it, but no-- this is all on students (and remember-- students were not the only audience the guide listed) and by extension, their teachers. 

Uses? Well, the guide is back on the beginning stages of writing:
AI can also help organize thoughts and ideas into a coherent outline. AI can recommend logical sequences and suggest sections or headings to include by analyzing the key points a student wants to cover. AI can also offer templates, making it easier for students to create well-structured and focused outlines.

These are all things the writer should be doing. Why the guide thinks using AI to skip the "planning stages" is ethical, but using it in any other stages is not, is a mystery to me.

Students also need to develop "critical media literacy" because the AI is going to crank out well-polished turds, and it's the student's job to spot them. "Our product helps dress you, but sometimes it will punch you in the face. We are not going to fix it. It is your job to learn how to duck."

Cross-disciplinary learning-- use the AI in every class, for different stuff! Also, form a student-led AI ethics committee to help address concerns about students substituting AI for their own thinking. 

Concerns? Bias, again. Data security--which is, incidentally, also the teacher's responsibility. AI research might have ethical implications. Students also might be tempted to cheat--the solution is for teachers to emphasize integrity. You know, just in case the subject of cheating and integrity has never ever come up in your classroom before. Deepfakes and hallucinations damage the trustworthiness of information, and that's why we are calling for safeguards, restrictions, and solutions from the industry. Ha! Just kidding. Teachers should emphasize that these are bad, and students should watch out for them.

Appendix

A couple of charts showing aptitudes and knowledge needed by teachers and admins. I'm not going to go through all of this. A typical example would be the "knowledge" item: "Understand AI's potential and what it is and is not." The "is and is not" part is absolutely important, and the guide absolutely avoids actually addressing what AI is and is not. That is a basic feature of this guide--it's not just that it doesn't give useful answers, but that it fails to ask useful questions. 

It wraps up with the Hess Cognitive Rigor Matrix. Whoopee. It's all just one more example of bad guidance for teachers, but good marketing for the techbros. 



Sunday, June 8, 2025

ICYMI: Birthday Board Edition (6/8)

This week the Board of Directors here at the Institute celebrated their birthday. This involved some extended book store time and a day at Waldameer Park in Erie, an old amusement park that the Chief Marital Officer and I had not visited in many years. The board was both delighted and exhausted, and I got enough steps in that I believe I can just sit for the upcoming week. That's how that works, right?

Have some reading.

Diabolus Ex Machina

Amanda Guinzburg tries some new games with AI and ends up providing yet another demonstration of how terrible chatbots are at doing the most simple reading assignments.

Texas Schools to Get a Bit More Cash and a Lot More Christian Nationalism

Just how bad for public education was this last session of the Texas legislature? Brant Bingamon breaks it down for the Austin Chronicle.

How Educators Can Escape Toxic Productivity

Peter DeWitt and Michael Nelson at Ed Week address one of the oldest problems in education--the expectation that a good, productive teacher will just beat the living crap out of herself to do the job.

Big Changes and Controversy in Oakland

Why do I often include highly specific and local pieces, like this one from Thomas Ultican? Because what is happening elsewhere often illuminates what is about to happen in your neck of the woods. Including twisty board vs. superintendent politics.

Kids: 1, ICE: 0

ICE grabbed a high school kid on his way to volleyball practice, and a whole community rose up to protest. Jennifer Berkshire with an encouraging story from her neck of the woods.

Book-banning, Book-burning, Book reading—and Truth

It is disheartening when a community you love has important institutions commandeered by the anti-book crowd. Nancy Flanagan tells her own story of a small Michigan community.

Hard Times

Audrey Watters finds a connection between Charles Dickens and the modern day "just teach facts" crowd and bad tech, plus a load of excellent links. 

Do "pronatalists" like Musk care about children and babies?

Okay, not a hard question to answer. But Steve Nuzum digs deep into the natalism crowd's issues, and it's not pretty.

Chall’s Missing STAGES OF READING DEVELOPMENT in the Science of Reading

Nancy Bailey points out some critical info that the Science of Reading crowd misses.

Larry Cuban is not excited about the idea of robots providing human care.

They Want Missouri Education Policies to go Nationwide

Just how bad has it gotten in Missouri? Jess Piper, noted activist, paints the broad picture.

Ohio Senate Budget Plan Released on Tuesday Bodes Ill for Ohio Public School Funding

Jan Resseger breaks down the details in Ohio's newest attempt to become the Florida of the Midwest.

Rain, Meet Piss: How Ohio Keeps Screwing Over Public School Kids

Yeah, Stephen Dyer has some thoughts about that budget as well. 

Nary a Deviation From The Playbook

TC Weber continues to chronicle Penny Schwinn's rise from Tennessee embarrassment to national embarrassment. He actually followed her confirmation hearing, and has some notes.

Exposed: University of Michigan Hired Undercover Spies to Target Students

Julian Vasquez Heilig reacts to reporting that his alma mater has hired goons to spy on students.

From Policy to Prosecution: Florida Raises The Stakes for School Boards

In Florida, right wingers continue to use manufactured outrage over naughty books to attack public schools, and they've decided to throw in threats of criminal prosecution. Sue Kingery Woltanski reports.

A little Gilbert and Sullivan today, with Kevin Kline working really hard!