
Monday, April 14, 2025

Predicting AI Armageddon For Universities

Once again, the Chronicle of Higher Education is hosting some top-notch chicken-littling about the coming of our robot overlords. This time it's "Are You Ready for the AI University," from Scott Latham, and it is some first-class hand waving.

Latham is a professor at the Manning School of Business at the University of Massachusetts, with a background in tech and business, which certainly fits the pitch he's making here. It's worth looking at because it leans hard on every marketing note we encounter in the current full-court AI press.

The hyperbole here is huge. AI will be "forever altering the relationship between students and professors." Latham waves away mundane cheating concerns-- the "tired debate about academic ethics"-- because students have always cheated and always will, so, I guess, never mind that ethics baloney.
An AI arms race is under way. In a board room at every major college in America there is a consultant touting AI’s potential to lower costs, create new markets, and deliver more value to students.

Latham is certain not only of the inevitability of AI but of its dominance. And the FOMO is strong with this one. Here's just one of his broad, sweeping portraits of the future.

Across the country some institutions are already piloting fully AI-instructed courses and utilizing AI to enable higher yields and improve retention, graduation rates, and job placement. Over the course of the next 10 years, AI-powered institutions will rise in the rankings. US News & World Report will factor a college’s AI capabilities into its calculations. Accrediting agencies will assess the degree of AI integration into pedagogy, research, and student life. Corporations will want to partner with universities that have demonstrated AI prowess. In short, we will see the emergence of the AI haves and have-nots. Sadly, institutions that need AI the most, such as community colleges and regional public universities, will be the last to get it. Prepare for an ever-widening chasm between resource-rich, technologically advanced colleges and those that are cash-starved and slow to adapt to the age of AI.

Yes, I am sure that wealthy, elite parents will send their children off to the ivies along with a note to the college saying, "Now don't try to stick my child with one of those dumb old human professors. I want that kid hooked up to an AI-driven computer."  

Latham seems to think so, asserting that 

Colleges that extol their AI capabilities will be signaling that they offer a personalized, responsive education, and cutting-edge research that will solve the world’s largest problems. Prospective students will ask, “Does your campus offer AI-taught courses?” Parents will ask: “Does your institution have AI advisers and tutors to help my child?”

I am the non-elite parent of two potential future college students, and this sounds like an education hellscape to me.

But Latham says this is all just "creative destruction," like when digital photography killed off film photography. He seriously mischaracterizes film photography to make his point, but there's no question that cheap and easy digital photography kneecapped the film variety. 

Latham argues that the market will force this, that the children of the Amazon, Netflix and Google generation want "a speedy, on-demand, and low-friction experience." Of course, they may also have learned that increasingly enshittified tech platforms are the enemy that provides whole new versions of friction. Latham also argues that these students see college as a transaction, a bit of advanced job training, a commodity to be purchased in hopes of an acceptable Return On Investment, and while I'd like to say he's wrong, he probably has a point here because A) that's what some folks have been telling them their whole lives and B) we are in an increasingly scary country where a safe economic future is hard to come by. Still, his belief in consumer short-sightedness is a bit much.

So they regard college much like any other consumer product, and like those other products, they expect it to be delivered how they want, when they want. Why wouldn’t they?

Maybe because somewhere along the way they learned that they aren't the center of the universe? 

Latham is sure that AI is an "existential threat" to the livelihood of professors. Faculty costs are a third of an institution's cost structure, he tells us, and AI "can deliver more value at lower cost." One might be inclined to ask what, exactly, is the value that AI is delivering more of, but Latham isn't going to answer that. I guess "education" is just a generic substance squeezed out of universities like tofu out of a pasta press.

If Latham hasn't pissed you off yet, this should do it:

Professors need to dispense with the delusional belief that AI can’t do their job. Faculty members often claim that AI can’t do the advising, mentoring, and life coaching that humans offer, and that’s just not true. They incorrectly equate AI with a next-generation learning-management system, such as Blackboard or Canvas, or they point out AI’s current deficiencies. They’re living in a fantasy. AI is being used to design cars and discover drugs: Do professors really think it can’t narrate and flip through PowerPoints as well as a human instructor?

And here is why colleges and universities are going to be the first to be put through the AI wringer-- there is a lot of really shitty teaching going on in colleges and universities. I would love to say that this comes down to Latham getting the professorial function wrong, that no good professor simply narrates through a PowerPoint deck, and I'd be correct. But do some actual professors just drone and flip? Yeah, I'm pretty sure they do.

In the end, Latham's argument is that shitty AI can replace a sub-optimal human instructor. That may be true, but it's beside the point. Can AI provide bad advising, bad mentoring, and bad life coaching? Probably. But who the heck wants that? Can AI do those jobs well? No, it can't. Because it cannot create a human connection, nor can it figure out what a human has going on in their head. 

Latham is sure, however, that it's coming. By the end of the decade, there will be avatars, and Latham says to think about how your iPhone can recognize your face. Well, 

Now imagine AI avatars that will be able to sense subtle facial expressions and interpret their meaning. If during a personalized lecture an avatar senses on a student’s face, in real time, that they’re frustrated with a specific concept, the avatar will shift the instructional mode to get the student back on track.

"Imagine" is doing a lot of work here, but even if I imagine it, can I imagine a reason that this is better done by AI instead of by an actual human instructor.

Beyond the hopeful expectation of technical capabilities, Latham makes one of the more common-yet-unremarked mistakes here, which is to assume that students will interact with the AI exactly as they would with human beings and not as they would with, say, a soulless lifeless hunk of machinery. 

Never mind. Latham is still flying his fancy to a magical future where all your education is on a "portable, scalable blockchain" that includes every last thing you ever experienced. It does not seem to occur to him that he is describing a horrifyingly intrusive mechanized Big Brother, a level of surveillance beyond anything ever conceived. 

Latham has news for the other functions of higher ed. AI can replace the registrar. AI will manage those blockchain records that "will be owned by the student and empower the student" because universities won't be able to stand in the way of students sharing records. 

AI will create perfect marketing for student recruitment, targeted to individual students. AI will handle filtering admissions as well "by attributes that play to an institution's strength." Because AI magic! Magicky magic. 

This is such bullshit, the worst kind of AI fetishization that imagines capabilities for AI that it will not have. AI is good at finding patterns by sifting through data; it does what a human could do if that human had infinite patience and time. Could a human being with infinite time and patience look at an individual 18-year-old and predict what the future holds for them? No. And neither can AI.

AI is going to take over career services, which I suppose could happen if we reach the point that the college AI reaches out to an AI contact it has in a particular business. And if you think students want to deal with human career-services professionals, Latham has a simple answer-- "No, they don't. Human interaction is not as important to today's students." I guess that settles that. It's gonna suck for students who want to go into human-facing professions (like, say, teaching) when they finally have to deal with human beings.

AI will handle accreditation, too! Witness the hellscape Latham describes:

In our unquestioning march to assessment that is driven by standardized processes and outcomes, we have laid the groundwork for AI’s ascendancy. Did the student learn? Did the student have a favorable post-graduation path, i.e., graduate school or employment? Accreditors will have no choice but to offer a stamp of approval even when AI is doing all the work. In the past decade, we have shifted from emphasizing the process of education to measuring the outcome of education when determining institutional effectiveness. We have standardized pedagogy, standardized student assessments, standardized teaching evaluations, and standardized accreditation. Accreditation by its nature is standardized, and we won’t need vice provosts to do that job much longer.

Administration will also be assimilated (I guess the AI can go ahead and shmooze wealthy alumni for contributions). Admins will deal with political pressure by asking, “Did you run this through AI?” or “Did the AI engine arrive at a similar decision?” Because if there's anything that can deal with something like the politics of the Trump regime, it's an AI.

He's not done yet. This is all so far just how AI will commandeer the existing university structure. 

But that is only step one of a broader transition. Imagine a university employing only a handful of humans, run entirely by AI: a true AI university. In the next few years, it’s likely that a group of investors in conjunction with a major tech company like X, Google, Amazon, or Meta will launch an AI university with no campus and very few human instructors. By the year 2030, there will be standalone, autonomous AI universities.

Yes, because our tech overlords have always had a keen hand on how education works. Like that time the tech geniuses promised that Massive Open Online Courses would replace universities by, well, now. Or that time that Bill Gates failed to be right about education for decades. What a bold, baseless, inevitably wrong prediction for Latham to make--but he's not done.

AI U will have a small, tight leadership team who will select a "tight set of academic disciplines that lend themselves to the early-stage capabilities of artificial intelligence, such as accounting or history." Good God-- is there any discipline that lends itself to automation less than history? History only lends itself to this if you are one of those ahistorical illiterates who believes that history is just learning a bunch of dates and names because all history is known and set in stone. It is not, and this one sentence may be the most disqualifying sentence in the whole article.

Will AI U succeed? Latham allows that a vast majority will fail (like the dot-com bubble era) but dozens will survive and prosper, because this will work for non-traditional students (you know--like those predatory for-profit colleges did) who aren't served by the "one size fits all" model currently available, because I guess Latham figures that whether you go to Harvard or Hillsdale or The College of the Atlantic or Poor State U or your local Community College, you're getting pretty much the same thing. Says the guy who earlier asserted that AI would help select students based on how they played to the individual strengths of particular institutions. AI will target the folks who started a degree but never finished it. Sure.

AI U's secret strength will be that it will be cheapo. No campus and stuff. Traditional universities offering "an old-fashioned college experience complete with dorm rooms, a football stadium, and world-class dining" will continue, though they'll be using AI, too. 

Winding down, Latham allows that predicting the carnage is easy, but "making people realize the inevitable" is hard (perhaps because it skips right over what reasons there are to think that this time, time #12,889,342, the tech world's prediction of the inevitable should be believed). "Predicting" is always easy when it's mostly just wishful guessing.

Students will benefit "tremendously" and some professors will remain. Jobs will be lost. Some disciplines will benefit, like the science-and-mathy ones. Latham sees a "silver lining" for the humanities-- "as AI fully assimilates itself into society, the ethical, moral, and legal questions will bring the humanities to the forefront." To put it another way, since the AI revolution will be run by people lacking moral and ethical grounding in the humanities, the humanities will have to step up to save society. 

I have to stipulate that there is no doubt that Professor Latham is more accomplished and successful than I am. Probably smarter, and for all I know, a wonderful human being who is kind to his mother. But this sure seems like a lot of bunk. Here he has captured most of the features of AI sales. A lack of clarity about what teachers, ideally, actually do (they do not simply pour information into student brains to be recalled later). A lack of clarity about what AI actually does, and what capabilities it does and does not have. A faith that a whole lot of things can be determined with data and objectivity (spoiler alert: AI is not actually all that objective). Complete glossing over the scariest aspects of collecting every single detail of your life digitally, to be sorted through by future employers or hostile American governments (like the one we have right now, which is trying to amalgamate all the data the feds have so that they can sift through it to find the people they want to attack).

Is AI going to have some kind of effect on universities? Sure. Are those effects inevitable? Not at all. Will the AI revolution resemble many other "transformational" education revolutions of the past, and how they failed? You betcha-- especially MOOCs. Are people going to find ways to use AI to cut some corners and make their lives easier, even if it means sacrificing quality? Yeah, probably. Is all of this going to get way more expensive once AI companies decide it's time to make some of their money back? Positively. 

Would we benefit from navigating all of this with realistic discussions based on something other than hyperbolic marketing copy? Please, God. The smoke is supposed to stay inside the crystal ball. 


Tuesday, January 27, 2026

Paper: AI Destroys Institutions

From its title-- "How AI Destroys Institutions"-- this draft essay pulls no punches. It's heavily researched (166 footnotes) and plain in its language. I'm going to hit the highlights here, but I hope you'll be motivated to go read the entire work yourself.

The essay is from two Boston University law professors. Woodrow Hartzog focuses on privacy and technology law; Jessica Silbey teaches and writes about intellectual property and technology law (she also has a PhD in comparative literature--yay, humanities). Their forty-page draft essay breaks down neatly into sections. Let's go.

Institutions are society's superheroes

When we use the term “institutions,” we mean the commonly circulating norms and values covering a recognizable field of human action, such as medicine or education. Institutions form the invisible but essential backbone of social life through their familiar yet iterative and adaptable routines across wide populations in space and time.

These are really important because these "bundles of normative commitments and conventions" help to reduce "uncertainty while promoting human cooperation and efficacy of mission." In other words, they keep things flowing smoothly, particularly for people involved in moving a certain mission forward. 

However, they note, "People both inside and outside an institution must believe in its mission and competency for it to remain durable and sustain legitimacy." Institutions also rely on expertise which helps because it "values and promotes competence, innovativeness, and trustworthiness."

So, institutions really matter, and they depend on certain factors. And here our trouble begins.

The destructive affordances of AI

Hartzog and Silbey explain that they'll be using "AI" to mean generative AI systems (chatbots), predictive AI (facial recognition), and automated-decision AI (content moderation). These systems tempt institution folks by promising to be both fast and correct.

So surface-level use cases for AI in institutions exist. But digging deeper, things quickly fall apart. We are a long way from the ideal conditions to implement accountability guardrails for AI. Even well-intentioned information technology rules and protective frameworks are often watered down, corrupted, and distorted in environments where people face powerful incentives to make money or simply get the job done as fast as possible.

Perhaps if human nature were a little less vulnerable to the siren’s call of shortcuts, then AI could achieve the potential its creators envisioned for it. But that is not the world we live in. Short-term political and financial incentives amplify the worst aspects of AI systems, including domination of human will, abrogation of accountability, delegation of responsibility, and obfuscation of knowledge and control.

But despite the seductive lure of AI, the authors point out that it "requires the pillaging of personal data and expression, and facilitates the displacement of mental and physical labor." But mostly it reproduces existing patterns, amplifies biases, and just generally pumps harmful slop into the information ecosystem, all while pretending to be both authoritative and objective.

And its faux-conscious, declarative, and confident prose hides normative judgments behind a Wizard-of-Oz-esque curtain that masks engineered calculations, all the while accelerating the reduction of the human experience to what can be quantified or expressed in a function statement.

What we end up with is the "outsourcing of human thought and relationships to algorithmic outputs." And that means that AI does some serious damage in three main ways.

First, AI undermines expertise

First, AI systems undermine and degrade institutional expertise. Because AI gives the illusion of accuracy and reliability, it encourages cognitive offloading and skill atrophy, and frustrates back-end labor required to repair AI’s mistakes and “hallucinations.”

This doesn't just substitute unreliable bot answers for the work of human experts; it also "denies the displaced person the ability to hone and refine their skills." We get this in education; if you have someone or something do your assignment for you, you don't develop the skills that would have come from doing the work yourself. Same thing in the workplace. Would you rather have a nurse who can say "I have seen this kind of problem a hundred times" or one who can say "I have referred this kind of problem to a medibot a hundred times"?

Hartzog and Silbey also remind us that AI can only look backwards; these systems are bound by pre-existing information. As Arvind Narayanan and Sayash Kapoor point out in AI Snake Oil, predictive AI won't work because the only way it can make good predictions is if nothing else changes. AI is your mother explaining to you how to get a job in today's market based on how she got her job thirty years ago, as if conditions have not changed since then.

AI may appear "hyper-competent," but the authors correctly point out that hallucinations are not a bug, but an inevitable feature of how these systems are designed. Remember, the "stochastic" in "stochastic parrot" means "randomly determined," a guess. When the guesses are correct, the humans in the institution lose skill and value; when the guess is wrong, the institution has to compensate for that failure.
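Since so much rides on that word, here is a minimal sketch (in Python, with toy numbers I made up-- not from any real model) of what "stochastic" generation amounts to at bottom: a weighted random draw over candidate next words.

    import random

    # Toy next-token probabilities for completing "The scan shows..."
    # (illustrative numbers only -- not from any actual model)
    candidates = ["no abnormality", "a fracture", "a shadow", "an artifact"]
    weights = [0.55, 0.25, 0.12, 0.08]

    # "Stochastic" means each run is a weighted random draw, so the same
    # prompt can yield different answers on different runs.
    for _ in range(3):
        print(random.choices(candidates, weights=weights, k=1)[0])

Scale the candidate list up to a vocabulary of tens of thousands of tokens and the weights up to a trained network, and the underlying move is the same: a weighted guess.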

AI short-circuits decisionmaking

Important moral decisions get sloughed off to AI, justified by the notion that AI systems are somehow objective and efficient and therefore not involved in making any moral choices.

To start, the decision to implement an AI system in an institution in any significant way is not just about efficiency. Technologies have a way of obscuring the fact that moral choices that should be made by humans have been outsourced to machines.

When your insurance company uses AI to approve or deny your claim, it is making a moral choice, and furthermore, it's making that choice based on rules that are hidden inside the black box of AI. Then, the authors note, "When AI systems obscure the rules of institutions, the legitimacy of those rules degrades." 

The authors further argue that AI is incapable of "a willingness to learn, engage, critique, and express yourself even though you are vulnerable or might be wrong." Humans can stretch beyond what is known, make big jumps or wide connections. Those kinds of creative leaps are beyond AI, which gives us more of what is already out there. 

The authors also argue that AI cannot challenge the status quo "because its voice has no weight." In other words, humans might speak up, confront management, or even resign loudly in protest, creating pressure for the institution to be better. Raise your hand if you think that this is exactly why some leaders think AI employees are an awesome idea. But the authors argue that "moral courage and insight" are "necessary for institutions to adapt and survive." One would hope.

AI isolates humans

Finally, AI systems isolate people by displacing opportunities for human connection and interpersonal growth. This deprives institutions of the necessary solidarity and space required for good faith debate and adaptability in light of constantly changing circumstances. AI displaces and degrades human-to-human relationships and—through its individualized engagement and sycophancy—erodes our capacity for reflection about and empathy towards other and different humans.

If an institution isn't continually working out its roles and the rules that guide those roles, the rules that make the institution function start to waste away. Then "there is only institutional chaos or the rule of the powerful."

This strikes me as a drawback that people are really blind to. Every single plan to have students taught by an AI bot rests on the assumption that those students will react to the bot as they would to a human teacher, that they will behave as if a real live teacher is in the room, and not, instead, simply throw out the rules about what it means to be a student in a classroom.

The institutions on AI's death row

Hartzog and Silbey offer DOGE as a prime example of an institution that rotted from AI dependence, but they see many areas that are susceptible.

For instance, if the rule of law is handed to AI, we've got trouble. The idea of enforcing rules is that enforcement makes the rules visible and therefore easier for everyone to follow. But when the rules are obscured or unclear or simply hidden in the black box of AI, nobody knows what the rules are or what we are supposed to do.

Imagine, they suggest, you get a notice that the IRS AI has determined that you owe $100,000 in back taxes. Nobody can tell you why, exactly, but they assume that the efficient and unbiased AI must have it right. Or a judge who hits you with a fine far above the recommended range, based on AI recommendation. Again, without explanation, but with the assumption of accuracy.

I'm imagining an AI that grades your student essay, but can't answer any of your questions about why you got that particular grade. 

It's all much like having someone in charge of government who sets rules based on his own personal whims and quirks from day to day and offers no explanation except that it's what he wants and he will use power to force compliance. Imagine how much that would suck. AI is also an authoritarian bully, except that its mechanized nature allows folks to pretend that its rule is unbiased and accurate. 

Hartzog and Silbey unsurprisingly also see trouble for higher education. AI taking over the cognitive load needed for learning. AI producing mediocre and homogenized content. AI shifting the questions researchers ask "from qualitative mysteries to quantifiable puzzles." If your main tool is an AI hammer, you are going to look only for nails that it will work on.

And then there's trust, emerging more and more as an AI issue in education. Can you trust your students' work? Can they trust yours as a teacher? And what does all this do to the human connections needed for education to work? More distrust means more vulnerability to outside authorities trying to control the institution.

Then there's journalism...

As AI slop, the cheap, automatic, and thoughtless content made possible by AI, contaminates our public discourse and companies jam AI features into all possible screens, few institutions are more vital to preserve than the free press.

Too much slop and junk, particularly when it devalues expertise and knowledge, leads to a "scarcity of attention" and a lessened ability to respond to misinformation and disinformation. Everyone trying to do journalism of any sort knows the problem-- how do you get anyone to actually pay attention to what you have to say? We suffer from a collective thirteenth clown problem-- if there are twelve clowns on stage frolicking about, you can jump on stage and start reciting Shakespeare, but to the audience, you'll just be the thirteenth clown.

Plus, as AI generates mountains of slop, it also ends up feeding on slop, and slop made out of slop is--well, not good.

Journalism is defined by its adaptive, responsive dialogue in the face of shifting social, political, and economic events and by its sensitivity to power. But AI systems are not adaptive in a way that is responsive to human complexity, and they are agnostic to power. AI systems are pattern matchers; they cannot discern or produce “news.” 

Democracy and civic life 

Hartzog and Silbey pull out Robert Putnam's Bowling Alone, a standard on my list of books everyone should read. 

One key concept necessary for a society to function is the idea of “generalized reciprocity: I’ll do this for you without expecting anything specific back from you, in the confident expectation that someone else will do something for me down the road.” Putnam wrote, “[a] society characterized by generalized reciprocity is more efficient than a distrustful society. . . . Trustworthiness lubricates social life.” As people become isolated and withdraw from public life, trust disappears, and social capital along with it. 

If we continue to embrace AI unabated, social capital and norms of reciprocity will abate, and our center—democracy and civil life—will not hold. Because AI systems undermine expertise, short-circuit decision-making, and isolate humans, they are the perfect machines to destroy social capital.

There is an irony in the AI industry's attempt to solve the "loneliness crisis" by offering chatbot companions-- which is looking more and more like a very bad idea. Nor does it seem helpful for society if everyone sits at home and has AI agents handle everything from shopping to email correspondence. Working stuff out with other humans requires social capital, and your handy AI agent cannot do that for you. And again-- every scenario in which an AI agent replaces a human assumes that the transaction will go on as if it still involved a human. You'll use AI to answer emails and, the assumption goes, people will respond to those emails as they would had you written them yourself and not, say, dismiss and ignore them because they did not come from a human. Meanwhile, how does one build empathy and reciprocity when two AIs are talking back and forth on your behalf?

The section ends with a report on the techbro dream of a world in which AI runs everything (and they run AI), a new brand of technofascism. They quote Jill Lepore's NYT story from last fall:

More recently, Mr. Altman, for his part, pondered the idea of replacing a human president of the United States with an A.I. president. “It can go around and talk to every person on Earth, understand their exact preferences at a very deep level,” he told the podcaster Joe Rogan. “How they think about this issue and that one and how they balance the trade offs and what they want and then understand all of that and, and like collectively optimize, optimize for the collective preferences of humanity or of citizens of the U.S. That’s awesome.” Is that awesome? Replacing democratic elections with machines owned by corporations that operate by rules over which the people have no say? Isn’t that, in fact, tyranny?

Well, it's not tyranny from Altman's point of view. It's just him living with absolute freedom from anything that would impede his will or that would involve him actually dealing with meat widgets. Meanwhile, Oracle is shopping around AI to help run your local municipal government.

So, this paper

It's not a pretty or encouraging picture, but it is a thorough one and a compelling articulation of the argument against indiscriminate AI use in our institutions. I'm not sure how many people are really listening, but I recommend the essay as a worthwhile read. You can get to it here. 

 



Monday, June 9, 2025

Another Bad AI Classroom Guide

We have to keep looking at these damned things because they share so many characteristics that we need to learn to recognize, so we can spot them when they turn up again and react properly, i.e., by throwing moldy cabbage at them. I read this one so you don't have to.

And this one will turn up lots of places, because it's from the Southern Regional Education Board.

SREB was formed in 1948 by governors and legislators; it now involves 16 states and is based in Atlanta. Although its membership includes legislators from each of the states, some appointed by their governors, it is a non-partisan, nonprofit organization. In 2019 they handled about $18 million in revenue. In 2021, they received a $410K grant from the Gates Foundation. Back in 2022, SREB was a cheerful sock puppet for folks who really wanted to torpedo tenure and teacher pay in North Carolina.

But hey-- they're all about "helping states advance student achievement." 

SREB's "Guidance for the Use of AI in the K-12 Classroom" has big fat red flag right off the top-- it lists no authors. In this golden age of bullshit and slop, anything that doesn't have an actual human name attached is immediately suspect.

But we can deduce who was more or less behind this-- the SREB Commission on Artificial Intelligence in Education. Sixteen states are represented by sixty policymakers, so we can't know whose hands actually touched this thing, but a few names jump out.

The chair is South Carolina Governor Henry McMaster, and his co-chair is Brad D. Smith, president of Marshall University in West Virginia and former Intuit CEO. As of 2023, he passed Jim Justice as the richest guy in WV. And he serves on lots of boards, like Amazon and JPMorgan Chase. Some states (like Oklahoma) sent mostly legislators, while some sent college or high school computer instructors. There are also additional members, including Youngjun Choi (UPS Robotics AI Lab), Kim Majerus (VP of US Public Sector Education for Amazon Web Services), and some other corporate folks.

The guide is brief (18 pages). Its basic pitch is, "AI is going to be part of the working world these students enter, so we need schools to train these future meat widgets so we don't have to." The introductory page (which is certainly bland, vague, and voiceless enough to be a word string generated by AI) offers seven paragraphs that show us where we're headed. I'll paraphrase.

#1: The internet and smartphones mean students don't have to know facts. They can just skip to the deep thinking part. But they need critical thinking skills to sort out online sources. How are they supposed to think deeply and critically when they don't have a foundation of content knowledge? The guide hasn't thought about that. AI "adds another layer" by doing all the work for them, so now they have to be good prompt designers. Which, again, would be hard if you didn't know anything and had never thought about the subject.

#2: Jobs will need AI. AI must be seen as a tool. It will do routine tasks, and students will get to engage in "rich and intellectually demanding" assignments. Collaborative creativity! 

#3: It's inevitable. It is a challenge to navigate. Stakeholders need guidance to know how to "incorporate AI tools while addressing potential ethical, pedagogical, and practical concerns." I'd say "potential" is holding the weight of the world on its shoulders. "Let's talk about the potential ethical concerns of sticking cocaine in Grandma's morning coffee." Potential.

#4: This document serves as a resource. "It highlights how AI can enhance personalized learning, improve data-driven decision-making, and free up teachers’ time for more meaningful student interactions." Because it's going to go ahead and assume that AI can, in fact, do any of that. Also, "it addresses the potential risks, such as data privacy issues, algorithmic biases, and the importance of maintaining the human element in teaching." See what they did there? The good stuff is a given certainty, but the bad stuff is just a "potential" down side.

#5: There's a "skills and attributes" list in the Appendix.

#6: This is mostly for teachers and admins, but lawmakers could totally use it to write laws, and tech companies could develop tech, and researchers could use it, too! Multitalented document here.

#7: This guide is to make sure that "thoughtful and responsible" AI use makes classrooms hunky-dory.

And with that, we launch into The Four Pillars of AI Use in the Classroom, each followed by uses and cautions.

Pillar #1
Use AI-infused tools to develop more cognitively demanding tasks that increase student engagement with creative problem-solving and innovative thinking.

"To best prepare students for an ever-evolving workforce..." 

"However, tasks that students will face in their careers will require them..."

That's the pitch. Students will need to be able to think "critically and creatively." So they'll need really challenging and "cognitively demanding" assignments. As the guide puts it, "Now more than ever, students need to be creators rather than mere purveyors of knowledge."

Okay-- so what does AI have to do with this?

AI draws on a broad spectrum of knowledge and has the power to analyze a wide range of resources not typically available in classrooms.

This is some fine-tuned bullshit here, counting on the reader to imagine that they heard something that nobody actually said. AI "draws on" a bunch of "knowledge" in the sense that it sucks up a bunch of strings of words that, to a human, communicate knowledge. But AI doesn't "know" or "understand" any of it. Does it "analyze" the material? Well, in the sense that it breaks the words into tokens and performs complex math on them, there is a sort of analysis. But AI boosters really, really want you to anthropomorphize AI, to think of it as human-like in nature and not alien and kind of stupid.
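For the curious, here is roughly what that token business looks like in practice-- a minimal sketch using OpenAI's tiktoken tokenizer (my choice for illustration; the guide names no tools). The model's raw material is integer IDs, not knowledge:

    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    text = "Students need critical thinking skills."
    token_ids = enc.encode(text)

    # What the model actually "sees": a list of integers, not facts.
    print(token_ids)

    # The mapping is reversible, but at no point was anything "known."
    print(enc.decode(token_ids))

Everything downstream-- the "analysis," the "knowledge"-- is arithmetic on lists like that one.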

"While AI should not be the final step in the creative process, it can effectively serve in the early stages." Really? What is it about the early stages that makes them AI-OK? I get it--up to a point. I've told students that they can lift an idea from somewhere else as long as they make it their own. But is the choice of what to lift any less personal or creative than what one does with it? Sure, Shakespeare borrowed the ideas behind many of his plays, but that decision about what to borrow was part of his process. I'd just like to hear from any of the many people who think AI in beginning stages is okay why exactly they believe that the early stages are somehow less personal or creative or critical thinky than the other stages. What kind of weird value judgment is being made about the various stages of creation?

Use AI to "streamline" lesson planning. Teach critical thinking skills by, and I'm only sort of paraphrasing here, training students to spot the places where AI just gets stuff wrong. 

Use AI to create "interactive simulations." No, don't. Get that AI simulation of an historical figure right out of your classroom. It's creepy, and like much AI, it projects a certainty in its made-up results that it does not deserve. 

Use AI to create a counter-perspective. Or just use other humans.

Cautions? Everyone has to learn to be a good prompt engineer. In other words, humans must adjust themselves to the tool. Let the AI train you. 

Recognize AI bias, or at least recognize it exists. Students must learn to rewrite AI slop so that it sounds like the student and not the AI, although how students develop a voice when they aren't doing all the writing is rather a huge challenge as well. 

Also, when lesson planning, don't forget that AI doesn't know about your state standards. And if you are afraid that AI will replace actual student thinking, make sure your students have thought about stuff before they use the AI. Because the assumption under everything in this guide is that the AI must be used, all the time.

Pillar #2
Use AI to streamline teacher administrative and planning work.

The guide leads with an excuse-- "teachers' jobs have become increasingly more complex." Have they? Compared to when? The guide lists the usual features of teaching-- same ones that were there when I entered the classroom in 1979. I call bullshit. 

But use AI as your "planning partner." I am sad that teachers are out there doing this. It's not a great idea, but it's perhaps unsurprising for a generation that entered the profession thinking that teacher autonomy was one of those old-timey things, about as relevant as those penny-farthings that grampa goes on about. And these suggestions for use. Yikes.

Lesson planning! Brainstorming partner! And, without a trace of irony, a suggestion that you can get more personalized lessons from an impersonal non-living piece of software.

Let it improve and enhance a current assignment. Meh. Maybe, though I don't think it would save you a second of time (unless you didn't check whether AI was making shit up again). 

But "Help with Providing Feedback on and Grading Student Work?" Absolutely not. Never, ever. It cannot assess writing quality, it cannot do plagiarism detection, it cannot reduce grading bias (just replace it). If you think it even "reads" the work, check out this post. Beyond the various ways in which AI is not up to the task, it comes down to this-- why would your students write a work that no other human being was going to read?

Under "others," the guide offers things like drafting parent letters and writing letters of recommendation, and again, for the love of God, do not do this! Use it for translating materials for ESL students? I'm betting translation software would be more reliable. Inventory of supplies? Sure, I'm sure it wouldn't take more than twice as much time as just doing it by eyeball and paper. 

Oh, and maybe someday AI will be able to monitor student behavior and engagement. Yeah, that's not creepy (and improbable) at all.

Cautions include a reminder of AI bias, data privacy concerns, and overreliance on AI tools and decisions, and I'm thinking "cautions" is underselling the issues here. 

Pillar #3
Use AI to support personalized learning.

The guide starts by pointing out that personalized learning is important because students learn differently. Just in case you hadn't heard. That is followed by the same old pitch about dynamically adaptive instruction based on data collected from prior performance, only with "AI" thrown in. Real time! Engagement! Adaptive!

AI can provide special adaptations for students with special needs. Like text-to-speech (is that AI now?). Also, intelligent tutoring systems that "can mimic human tutors by offering personalized hints, encouragement and feedback based on each student’s unique needs." So, an imitation of what humans can do better.

Automated feedback. Predictive analytics to spot when a student is in trouble. AI can pick student teams for you (nope). More of the same.

Cautions? There's a pattern developing. Data privacy and security. AI bias. Overreliance on tech. Too much screen time. Digital divide. Why those last two didn't turn up in the other pillars I don't know. 

Pillar #4
Develop students as ethical and proficient AI users.

I have a question-- is it possible to find ethical ways to use unethical tools? Is there an ethical way to rob a bank? What does ethical totalitarianism look like?

Because AI, particularly Large Language Models, is based on massive theft of other people's work. And that's before we get to the massive power and water resources being sucked up by AI.

But we'll notice another point here-- the problems of ethical AI are all the responsibility of the student users. "Teaching students to use AI ethically is crucial for shaping a future where technology serves humanity’s best interests." You might think that an ethical future for AI might also involve the companies producing it and the lawmakers legislating rules around it, but no-- this is all on students (and remember-- students were not the only audience the guide listed) and by extension, their teachers. 

Uses? Well, the guide is back on the beginning stages of writing:

AI can also help organize thoughts and ideas into a coherent outline. AI can recommend logical sequences and suggest sections or headings to include by analyzing the key points a student wants to cover. AI can also offer templates, making it easier for students to create well-structured and focused outlines.

These are all things the writer should be doing. Why the guide thinks using AI to skip the "planning stages" is ethical, but using it in any other stages is not, is a mystery to me.

Students also need to develop "critical media literacy" because the AI is going to crank out well-polished turds, and it's the student's job to spot them. "Our product helps dress you, but sometimes it will punch you in the face. We are not going to fix it. It is your job to learn how to duck."

Cross-disciplinary learning-- use the AI in every class, for different stuff! Also, form a student-led AI ethics committee to help address concerns about students substituting AI for their own thinking. 

Concerns? Bias, again. Data security-- which is, incidentally, also the teacher's responsibility. AI research might have ethical implications. Students also might be tempted to cheat-- the solution is for teachers to emphasize integrity. You know, just in case the subject of cheating and integrity has never ever come up in your classroom before. Deepfakes and hallucinations damage the trustworthiness of information, and that's why we are calling for safeguards, restrictions, and solutions from the industry. Ha! Just kidding. Teachers should emphasize that these are bad, and students should watch out for them.

Appendix

A couple of charts showing aptitudes and knowledge needed by teachers and admins. I'm not going to go through all of this. A typical example would be the "knowledge" item-- "Understand AI's potential and what it is and is not." The "is and is not" part is absolutely important, and the guide absolutely avoids actually addressing it. That is a basic feature of this guide-- it's not just that it doesn't give useful answers; it fails to ask useful questions.

It wraps up with the Hess Cognitive Rigor Matrix. Whoopee. It's all just one more example of bad guidance for teachers, but good marketing for the techbros. 



Saturday, December 6, 2025

Reverse Centaurs, AI, and the Classroom

Cory Doctorow gave us "enshittification" to explain much of what has gone wrong, and he is already moving on to explain much of what we suspect is wrong with the push for AI. There's a book coming, but he has already laid out the basic themes in a presentation that he shared with his on-line audience. It doesn't address teaching and education directly, but the implications are unmistakable.

We start with the automation theory term "centaur." A centaur is a human being assisted by a machine. Doctorow cites as an example driving a car, or using autocomplete. "You're a human head carried around on a tireless robot body." 

A "reverse centaur" is a machine head on a human body, "a person who is serving as a squishy meat appendage for an uncaring machine." Here's his example, in all its painful clarity:
Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras, that monitor the driver's eyes and take points off if the driver looks in a proscribed direction, and monitors the driver's mouth because singing isn't allowed on the job, and rats the driver out to the boss if they don't make quota.

The driver is in that van because the van can't drive itself and can't get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn't just use the driver. The van uses the driver up.

Doctorow explains that tech companies are highly motivated to appear to be growth industries, and then shows how they're selling AI as a growth story, and not a pretty one. AI is going to disrupt labor.

The promise of AI – the promise AI companies make to investors – is that there will be AIs that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself, and give the other half to the AI company.

The thing is-- AI can't do your job. So the radiology department can't fire all the radiologists and replace them with AI to read scans-- they have to hire someone to sit and check the AI's work, to be the "human in the loop" whose job is to catch the rare-but-disastrous case where the AI screws up. 

That last radiologist is a reverse-centaur, and Doctorow cites Dan Davies's coinage for the specific type-- the Last Radiologist is an "accountability sink." Says Doctorow, "The radiologist's job isn't really to oversee the AI's work, it's to take the blame for the AI's mistakes."

In education, there is potential for AI to create centaurs and reverse centaurs, and I think the distinction is useful for parsing just how horrible a particular AI application can be. 

The most extreme version of a reverse centaur is any of the bullshit AI-driven charter or mini-schools, like the absurd Alpha school chain that promises two hours on a screen will give your child all the education they need. Just let the AI teach your child! All of these models offer a "school" that doesn't need teachers at all--just a "guide" or a "coach" there to make sure nothing goes wrong, like an AI that offers instruction on white racial superiority or students who zone out entirely. The guide is a reverse centaur, an accountability sink whose function is to be responsible for everything the AI screws up, while allowing the investors in these businesses (and they are always businesses, usually run by business people and not educators) to save all sorts of costs on high-priced teachers by hiring a few low-cost guides.

For teachers, AI promises to make you a high-powered centaur. Let the AI write your lessons, correct your papers, design your teaching materials. Except that AI can't do any of those things very reliably, so the teacher ends up checking all of the AI's work to make sure it's accurate. Or at least they should, providing the human in the loop. So the teacher ends up as either a reverse centaur or, I suppose, a really incompetent reverse centaur who just passes along whatever mistakes the AI makes. 

Almost nobody is sales-arguing that AI can make teaching better, that an AI can reach students better than another human; virtually all arguments center on speed, efficiency, and time-saving. That appeals to teachers, who never have enough time for the work, but it appeals even more to management, because to them speed and efficiency mean fewer meat widgets to hire, and in a field where the main expense is personnel, that matters.

Public schools don't have investors to make money from cutting teachers (though private and charter schools sure do), but AI businesses (like all the other ed tech businesses before them) cannot help but salivate at just how huge the education market could be, a $6 billion mountain just waiting to be chewed up. So education gets an endless barrage of encouragements to join the AI revolution. Don't miss out! It's inevitable! It's shiny! To teachers, the promise that it will convert them into powerful cybernetic centaurs. To managers, the promise that it will convert teachers into more compliant and manageable reverse centaurs, controlled by a panel on the screen in your office.

And both are snookered, because an AI can't do a teacher's job. "Don't worry," the boosters say. "There will always be a human in the loop." Of course there will be--because AI can't do a teacher's job. The important question is whether the AI will serve the teachers or be served by them. As a teacher in the classroom being pushed to incorporate AI ("C'mon! It's so shiny!!"), you should be asking whether the tech will be empowering you and giving you new teacher arms of steel, or converting you into some fleshy support for a piece of tech.

Right now, far more pressure is being put on the Be A Fleshy Appendage side of the discussion. Here's hoping teachers find the strength to stand up to that pressure.

Oh, and a side point that I learned in Doctorow's article that's worth remembering the next time a company wants to offer AI-generated materials--  the courts have repeatedly ruled that AI-generated materials cannot be copyrighted (because they aren't human-made). 



Tuesday, January 28, 2025

AI Is For The Ignorant

Well, here's a fun piece of research about AI and who is inclined to use it.

The title for this article in the Journal of Marketing-- "Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity"-- gives away the game, and the abstract tells us more than enough about what the research found.

You may think that familiarity with technology leads to more willingness to use it, but AI runs in the opposite direction.

Contrary to expectations revealed in four surveys, cross country data and six additional studies find that people with lower AI literacy are typically more receptive to AI.

That linkage is explained simply enough. People who don't really understand what AI is or what it actually does "are more likely to perceive AI as magical and experience feelings of awe in the face of AI’s execution of tasks that seem to require uniquely human attributes." 

The researchers-- Stephanie Tully (USC Marshall School of Business), Chiara Longoni (Bocconi University), and Gil Appel (GW School of Business)-- are all academics in the world of business and marketing, and while I wish they were using their power for Good here, that's not entirely the case.

Having determined that people with "lower AI literacy" are more likely to fork over money for AI products, they reach this conclusion:

These findings suggest that companies may benefit from shifting their marketing efforts and product development towards consumers with lower AI literacy. Additionally, efforts to demystify AI may inadvertently reduce its appeal, indicating that maintaining an aura of magic around AI could be beneficial for adoption.

To sell more of this non-magical product, make sure not to actually educate consumers. Emphasize the magic, and go after the low-information folks. Well, why not. It's a marketing approach that has worked in certain other areas of American life. In a piece about their own research, the authors suggest a tiny bit of nuance, but the idea is the same. If you show AI doing stuff that "only humans can do" without explaining too clearly how the illusion is created, you can successfully "develop and deploy" new AI-based products "without causing a loss of the awe that inspires many people to embrace this new technology." Gotta keep the customers just ignorant enough to make the sale.

And lord knows lots of AI fans are already on the case. We have been subjected to an unending parade of lazy journalism of the "Wow! This computer can totally write limericks like a human" variety. For a recent example, Reid Hoffman, co-founder of LinkedIn, Microsoft board member, and early funder of OpenAI, unleashed a warm, fuzzy, magical woo-woo invocation of AI in the New York Times that is all magic and zero information.

Hoffman opens with an anecdote about someone asking ChatGPT "based on everything you know about me, draw a picture of what you think my current life looks like." This is Grade A magical AI puffery; ChatGPT does not "know" anything about you, nor does it have thoughts or an imagination to be used to create a visual image of your life. "Like any capable carnival mind reader," continues Hoffman, comparing computer software not just to a person, but to a magical person. And when ChatGPT gets something wrong, like putting a head of broccoli on your desk, Hoffman paints that "quirky charm" as a chance for the human to reflect and achieve a flash of epiphany. 

But what Hoffman envisions is way more magical than that-- a world in which the AI knows you better than you know yourself, that could record the details of your life and analyze them for you. 

Decades from now, as you try to remember exactly what sequence of events and life circumstances made you finally decide to go all-in on Bitcoin, your A.I. could develop an informed hypothesis based on a detailed record of your status updates, invites, DMs, and other potentially enduring ephemera that we’re often barely aware of as we create them, much less days, months or years after the fact.

When you’re trying to decide if it’s time to move to a new city, your A.I. will help you understand how your feelings about home have evolved through thousands of small moments — everything from frustrated tweets about your commute to subtle shifts in how often you’ve started clicking on job listings 100 miles away from your current residence.

The research trio suggested that the more AI imitates humanity, the better it sells to those low-information humans. Hoffman suggests that the AI can be more human than the user. But with science!

Do we lose something of our essential human nature if we start basing our decisions less on hunches, gut reactions, emotional immediacy, faulty mental shortcuts, fate, faith and mysticism? Or do we risk something even more fundamental by constraining or even dismissing our instinctive appetite for rationalism and enlightenment?

 Software will make us more human than humans?

So imagine a world in which an A.I. knows your stress levels tend to drop more after playing World of Warcraft than after a walk in nature. Imagine a world in which an A.I. can analyze your reading patterns and alert you that you’re about to buy a book where there’s only a 10 percent chance you’ll get past Page 6.

Instead of functioning as a means of top-down compliance and control, A.I. can help us understand ourselves, act on our preferences and realize our aspirations.

I am reminded of Knewton, a big ed tech ball of whiz-bangery that was predicting it would collect so much information about students that it would be able to tell students what they should eat for breakfast on test day. It did not do that; instead it went out of business. Even though it did its very best to market itself via magic.

If I pretend that I think Hoffman's magical AI will ever exist, I still have other questions, not the least of which is why would someone listen to an AI saying "You should go play World of Warcraft" or "You won't be able to finish Ulysses" when people tend to ignore other actual humans with similar advice. And where do we land if Being Human is best demonstrated by software rather than actual humans? What would it do to humans to offload the business of managing and understanding their own lives? 

We have a hint. Michael Gerlich (head of the Center for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School) has published "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking,"* and while there's a lot of scholaring going on here, the result is actually unsurprising.

Let's say you were really tired of walking everywhere, so you outsourced the walking to someone else, and you sat on the couch every waking hour. Can we predict what would happen to the muscles in your legs? Sure--when someone else bears the load, your own load-bearing members get weaker.

Gerlich finds the same holds true for outsourcing your thinking to AI: "The correlation between AI tool usage and critical thinking was found to be strongly negative." There are data and charts and academic talk, but the bottom line is that "cognitive offloading" damages critical thinking. That makes sense several ways. Critical thinking is not a free-floating skill; you have to think about something, so content knowledge is necessary, and if you are using AI to know things and store your knowledge for you, your thinking isn't in play. Nor is it in play when the AI writes your topic sentences and spits out other work for you.

In the end, it's just like your high school English teacher told you-- if someone else does your homework for you, you won't learn anything.

You can sell the magic and try to preserve the mystery and maybe move a few more units of whatever AI widget you're marketing this week, but if you're selling something that people have to be ignorant to want, something whose whole purpose is to offload some essentially human activity, then what are you doing? Freeing up more time for World of Warcraft? 

If AI is going to be any use at all, it will not be because it hid behind a mask of faux-human magical baloney or used an imitation of magic to capitalize on the ignorance of consumers, but because it can do something useful and be clear and honest about what it is actually, really doing. 


*I found this article thanks to Audrey Watters.


Sunday, March 30, 2025

Ready For An AI Dean?

From the very first sentence, it's clear that this recent Inside Higher Ed post suffers from one more bad case of AI fabulism. 

In the era of artificial intelligence, one in which algorithms are rapidly guiding decisions from stock trading to medical diagnoses, it is time to entertain the possibility that one of the last bastions of human leadership—academic deanship—could be next for a digital overhaul.

AI fabulism and some precious notions about the place of deans in the universe of human leadership.

The author is Birce Tanriguden, a music education professor at the Hartt School at the University of Hartford, and this inquiry into what "AI could bring to the table that a human dean can't" is not her only foray into this topic. This month she also published in Women in Higher Education a piece entitled "The Artificially Intelligent Dean: Empowering Women and Dismantling Academic Sexism-- One Byte at a Time."

The WHE piece is academic-ish, complete with footnotes (though mostly about the sexism part). In that piece, Tanriguden sets out her possible solution:

AI holds the potential to be a transformative ally in promoting women into academic leadership roles. By analyzing career trajectories and institutional biases, our AI dean could become the ultimate career counselor, spotting those invisible banana peels of bias that often trip up women's progress, effectively countering the "accumulation of advantage" that so generously favors men.

Tanriguden notes the need to balance efficiency with empathy:

Despite the promise of AI, it's crucial to remember that an AI dean might excel in compiling tenure-track spreadsheets but could hardly inspire a faculty member with a heartfelt, "I believe in you." Academic leadership demands more than algorithmic precision; it requires a human touch that AI, with all its efficiency, simply cannot emulate.

I commend the author's turns of phrase, but I'm not sure about her grasp of AI. In fact, I'm not sure that current Large Language Models aren't actually better at faking a human touch than they are at arriving at efficient, trustworthy, data-based decisions.  

Back to the IHE piece, in which she lays out what she thinks AI brings to the deanship. Deaning, she argues, involves balancing all sorts of competing priorities while "mediating, apologizing and navigating red tape and political minefields."

The problem is that human deans are, well, human. As much as they may strive for balance, the delicate act of satisfying all parties often results in missteps. So why not replace them with an entity capable of making precise decisions, an entity unfazed by the endless barrage of emails, faculty complaints and budget crises?

The promise of AI lies in its ability to process vast amounts of data and reach quick conclusions based on evidence. 

Well, no. First, nothing described here sounds like AI; this is just plain old programming, a "Dean In A Box" app. That means it will process vast amounts of data and reach conclusions based on whatever the program tells it to do with that data, which in turn is based on whatever the programmer wrote. Suppose the programmer writes the program so that complaints from male faculty members are weighted twice as heavily as those from female faculty. So much for AI Dean's "lack of personal bias." 
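
To make that concrete, here's a minimal sketch of what such a "Dean In A Box" rule might look like. Every name, field, weight, and record here is invented for illustration; this is nobody's actual product.

    # Hypothetical "Dean In A Box" complaint-ranking rule.
    # All field names, weights, and records are invented for illustration.

    def complaint_priority(complaint):
        """Score a faculty complaint for the dean's attention queue."""
        weight = 1.0
        # The "impartial" machine faithfully applies whatever the
        # programmer decided--including this quietly baked-in bias:
        if complaint["faculty_gender"] == "male":
            weight = 2.0  # male complaints count double
        return weight * complaint["severity"]

    complaints = [
        {"id": "A", "faculty_gender": "female", "severity": 8},
        {"id": "B", "faculty_gender": "male", "severity": 5},
    ]

    for c in sorted(complaints, key=complaint_priority, reverse=True):
        print(c["id"], complaint_priority(c))
    # Prints B (5 x 2 = 10) ahead of the more severe A (8 x 1 = 8).

The program never has a biased day; it just executes the bias it was handed, forever, at scale.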

But suppose she really means AI in the sense of software that uses a form of machine learning to analyze and pull out patterns in its training data. AI "learns" to trade stocks by being trained on a gazillion previous stock trades and situations, thereby allowing it to suss out patterns for when to buy or sell. Medical diagnostic AI is trained on a gazillion examples of patient medical histories, allowing it to recognize how a new entry from a new patient fits into all those patterns. Chatbots like ChatGPT do words by "learning" from vast (stolen) samples of word use, which yield a mountain of word-pattern "rules" that let them determine which words are likely to come next.

All of these AI are trained on huge data sets of examples from the past.
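
If you strip away the scale and the marketing, the basic pattern-learning trick looks something like this toy sketch, where a made-up fourteen-word "corpus" stands in for the training data (nothing here is any vendor's actual code):

    # A toy next-word "model": count which word follows which
    # in the training examples, then predict from those counts.
    from collections import Counter, defaultdict

    corpus = "the dean said no . the dean said maybe . the provost said no".split()

    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    # The likeliest word after "dean," according to the past examples:
    print(following["dean"].most_common(1))  # [('said', 2)]

The toy "knows" that "said" follows "dean" only because the past examples said so. Scale that up a few billion times and you get a chatbot; at no scale do you get a thing that knows what a dean is.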

What would you use to train AI Dean? What giant database would you use--what collection of info about the past behavior of various faculty and students and administrators and colleges and universities? More importantly, who would label the data sets as "successful" or "failed"? Medical data sets come with simple metrics like "patient died from this" or "the patient lived fifty more years with no issues." Stock markets come with their own built-in measure of success. Who is going to determine which parts of the Dean Training Dataset were successes and which were failures?
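
The sticking point is the outcome label--the column that supervised training requires. A toy comparison, with invented records, shows why medicine and stocks come with that column and deaning doesn't:

    # Invented records illustrating the labeling problem.
    medical_case  = {"age": 54, "treatment": "X", "outcome": "recovered"}  # easy label
    stock_trade   = {"ticker": "ABC", "action": "buy", "outcome": +0.12}   # profit = success
    dean_decision = {
        "situation": "two departments fighting over one tenure line",
        "action": "split the line; anger both chairs",
        "outcome": None,  # success? failure? who says, and by what measure?
    }

    def usable_for_training(example):
        return example["outcome"] is not None

    for name, ex in [("medical", medical_case), ("stocks", stock_trade), ("dean", dean_decision)]:
        print(name, usable_for_training(ex))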

This is one of the problems with chatbots. They have a whole lot of data about how language has been used, but no metadata to cover things like "this is horrifying racist Nazi stuff and is not a desirable use of language," and so we get the multiple examples of chatbots going off the rails.

Tanriguden tries to address some of this under the heading of how AI Dean would evaluate faculty:

With the ability to assess everything from research output to student evaluations in real time, AI could determine promotions, tenure decisions and budget allocations with a cold, calculated rationality. AI could evaluate a faculty member’s publication record by considering the quantity of peer-reviewed articles and the impact factor of the journals in which they are published.

That's followed by some more details about those measures, which raises another question. A human could do this-- if they wanted to. But if they don't want to, why would they want a computer program to do it?

The other point here is that once again, the person deciding what the algorithm is going to measure is the person whose biases are embedded in the system. 

Tanriguden also presents "constant availability, zero fatigue" as a selling point. She says deans have to do a lot of meetings, but (her real example) when, at 2 AM, the department chair needs a decision on a new course offering, AI Dean can provide an answer "devoid of any influence of sleep deprivation or emotional exhaustion." 

First, is that really a thing that happens? Because I'm just a K-12 guy, so maybe I just don't know. But that seems to me like something that would happen in an organization that has way bigger problems than any AI can solve. But second, once again, who decided what AI Dean's answer will be based upon? And if it's such a clear criterion that it can be codified in software, why can't even a sleepy human dean apply it?

Finally, she goes with "fairness and impartiality," dreaming of how AI Dean would apply rules "without regard to the political dynamics of a faculty meeting." Impartial? Sure (though we could argue about how desirable that is, really). Fair? Only as fair as it was written to be, which starts with the programmer's definition of "fair."

Tanriguden wraps up the IHE piece by once again acknowledging that leadership needs more than data, as well as "the issue of the academic heart." 

It is about understanding faculty’s nuanced human experiences, recognizing the emotional labor involved in teaching and responding to the unspoken concerns that shape institutional culture. Can an AI ever understand the deep-seated anxieties of a faculty member facing the pressure of publishing or perishing? Can it recognize when a colleague is silently struggling with mental health challenges that data points will never reveal?

In her conclusion she arrives at Hybrid Dean as an answer:

While the advantages of AI—efficiency, impartiality and data-driven decision-making—are tantalizing, they cannot fully replace the empathy, strategic insight and mentorship that human deans provide. The true challenge may lie not in replacing human deans but in reimagining their roles so that they can coexist with AI systems. Perhaps the future of academia involves a hybrid approach: an AI dean that handles (or at least guides) the operational decisions, leaving human deans to focus on the art of leadership and faculty development.

We're seeing this sort of knuckling under from lots of education folks who seem resigned to the predicted inevitability of AI (as always in ed tech, predicted by people who have a stake in the biz). But the important part here is that I don't believe AI can hold up its half of the bargain. In a job that involves managing humans and education and interpersonal stuff in an ever-changing environment, I don't believe AI can bring any of the contributions she expects from it. 

Wednesday, February 18, 2026

The AI Task Force and Moms For Liberty: It's Complicated

Moms for Liberty has staked out some positions on AI in education, and it may be a preview of the policy challenge facing conservatives in the area. 

Last April, Dear Leader issued an AI in Education edict in which somebody wrote:
By fostering AI competency, we will equip our students with the foundational knowledge and skills necessary to adapt to and thrive in an increasingly digital society. Early learning and exposure to AI concepts not only demystifies this powerful technology but also sparks curiosity and creativity, preparing students to become active and responsible participants in the workforce of the future and nurturing the next generation of American AI innovators to propel our Nation to new heights of scientific and economic achievement.

The edict established the Artificial Intelligence Education Task Force, five words that, when crammed together by this administration, create some sort of field that overloads and destroys any irony in the vicinity. The federal AI Initiative offers a page of "resources" that looks much like a "list of folks hoping to make money from AI." That goes with the part calling for public-private partnerships.

A bunch of organizations and businesses (and also more businesses) have signed the presidential Pledge To America's Youth, in which [Your Name Here] pledges to provide resources that "foster early interest in AI technology, promote AI proficiency, and enable comprehensive AI training for parents and educators," all of which sounds much nicer than "We promise to hook customers as soon as they are born and do whatever we can to saturate the market. Ka-ching."

Specifically, over the next 4 years, we pledge to make available resources for youth, parents and teachers through funding and grants, educational materials and curricula, technology and tools, teacher professional development programs, workforce development resources, and/or technical expertise and mentorship.

Well, of course. Hey, did you hear about the unsurprising discovery, via internal documents, that Google is using its education products to turn schools into a "pipeline of future users"? Is it any wonder that Dear Leader, our Grifter In Chief, wants to keep an eye on this new, promising money tree?

The initiative and task force are headed up by Michael Kratsios, whose previous gigs include chief of staff to Peter Thiel. He served in the first Trump administration in the Department of Defense, spent his interregnum as managing director of Scale AI, and is now the director of the White House Office of Science and Technology Policy. In his current gig, he's calling to "demystify these amazing technologies" and figure out what AI is and is not good for, so that American families, students, and educators "can fully take advantage of AI applications with confidence and responsibility." Perhaps he's unfamiliar with the research showing that the more people know about AI, the less inclined they are to use it. 

The task force has been meeting with folks to "discuss AI's impact in the classroom," which of course means everyone except people who actually work in classrooms. At their December confab, they heard from Chris Woolard of the Ohio Department of [Privatizing] Education, Adeel Khan of Magic School, and Tina Descovich, co-founder and current Big Cheese of Moms for Liberty. 

M4L has some thoughts about AI in education. And, well, they aren't entirely terrible. 

Along with tech companies acting responsibly, policymakers must do everything possible to make sure parents have full transparency into how AI systems operate, what data they collect, and how decisions or recommendations are made

By acting below, together we can ensure parents, not algorithms or activists, shape how AI is used in the education of our children.

Of course, they leave teachers out of the equation, perhaps because they can't quite figure out how to work "we think teachers are sometimes okay, but we hate their evil unions" into the mix. But their slogan for AI-- "Demand transparency, accountability, and boundaries"-- is not bad. And they do better by teachers elsewhere; we'll get to that.

They've got a pledge to sign, and it hits all the usual M4L notes.

It's the usual "parents' fundamental right etc" song and dance, but that song and dance in the face of a plagiarism-driven data-mining monster makes some sense. It also suggests that M4L and its ilk are not quite ready to jump on the White House's grifty AI bandwagon. The M4L pledge certainly strikes a different tone than the White House's AI Pledge to America's Youth

M4L also has a model school board policy and a model bill for legislatures. The school board policy lists four purposes:

1. Protect parental rights and student privacy;
2. Preserve the central role of teachers in instruction;
3. Maintain academic integrity; and
4. Ensure transparency and accountability in the use of emerging technologies.

The policy calls for no AI tools to be used without prior parental consent, and for schools to provide annual written notice of all AI tools approved for use.

There's a whole section on "instructional safeguards" that states as its first point:

Artificial intelligence shall not replace a certified teacher in providing core academic instruction or assigning final grades.

That doesn't go quite far enough (AI should assign no grades at all), but it is still a more blunt defense of actual human teaching than anything the administration has offered. 

M4L also seems to understand the AI threat to all manner of data that can be collected from young humans far better than plenty of other folks (for God's sake, stop inviting ChatGPT to scan all your social media content so it can make you a cute cartoon of yourself). 

The M4L model legislation is much of the same stuff with more expansive lawmakery language, but again, they seem to understand the issues here:

While artificial intelligence may offer instructional benefits, its use also presents risks, including data privacy violations, diminished academic integrity, ideological bias, and inappropriate replacement of human educators.

Well, yeah. 

It's an unusual day when we don't find M4L falling right in behind Dear Leader and nodding along with whatever his crew has to say, and I would love to think that this shows a bit of a fissure between pro-any-corporate-entity-that-might-enrich-me MAGA and right-wing-conspiracy-crew MAGA. It almost smells a bit like that time a whole lot of Very Conservative Folks went rogue over Common Core.

But if the Moms want to join in the resistance to throwing AI into classrooms Right Away because if we don't OMG students won't be ready for the jobs of tomorrow because AI is inevitable and awesome and so much better than all those troublesome human meat widgets-- anyway, if the Moms want to stand up to all of that, I'm happy to see it. I am definitely staying tuned. Can AI make popcorn?

Tuesday, May 13, 2025

GOP Proposes Unregulated AI

The current regime may not have a clue what AI actually is, but they are determined to get out in front of it.

First we had Dear Leader's bonkers executive order back in April, setting up an AI task force that would create an AI challenge to boost the use of AI in education. Plus "improving education through artificial intelligence" (an especially crazypants turn of phrase) that would 
seek to establish public-private partnerships with leading AI industry organizations, academic institutions, nonprofit entities, and other organizations with expertise in AI and computer science education to collaboratively develop online resources focused on teaching K-12 students foundational AI literacy and critical thinking skills.

Does the person who whipped this together think AI and critical thinking are a package, or does this construction acknowledge that AI and critical thinking are two separate things? The eo promises all sorts of federal funding to back all this vague partnering, and it also contains this sad line:

the Secretary of Education shall identify and implement ways to utilize existing research programs to assist State and local efforts to use AI for improved student achievement, attainment, and mobility.

"Existing research programs"? Are there some? And "achievement, attainment, and mobility" mean what? 

The eo also touts using Title II funds to boost AI training for teachers--training in reducing "time-intensive administrative tasks" and in how to "effectively integrate AI-based tools and modalities in classrooms."

Bureaucratic bloviating. Fine. Whatever. But House Republicans decided to take their game up a notch this week by adding this tasty piece of baloney. Budget reconciliation now includes this chunk of billage. The first part has to do with selling off some pieces of the broadcast spectrum, but the second part--

no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10- year period beginning on the date of the enactment of this Act.

There are exceptions, mostly of the "anything that helps AI companies expand or make money is okay" variety.

A ban on AI regulation is dumb, particularly given that folks are still trying to figure out what it can or can't do. 

But a ban on regulation for the next decade??!! Who knew that the GOP would be involved in launching Skynet? 

"Sir, it looks like Skynet is about to send something called a terminator to kill us all. Should we take action to prevent it?"

"Stand down, kid. The Republican party has forbidden us to take action. Kiss your children goodbye."

Seriously, we can already see that AI is taking us to some undesirable places, and God only knows what might develop over the next decade. To tie our regulatory hands, to unilaterally disarm and give up any ability to put restraints on the cyber-bull in our cultural china shop is just foolish.

Of course, what the proposed anti-regulation and the eo have in common is that they prioritize the chance for corporations to profit from AI. That's common to many actions of the regime, all based on the notion that there is nothing so precious in our country or culture that it should be protected from the impulse to make a buck. What the GOP proposes is a "drill, baby, drill" for AI, with the nation's youths, education system, and culture playing the part of the great outdoors.

Anti-regulation for AI is worse than the other brands of deregulation being pushed, because while we have some idea what deforesting a national park might look like, we have no way of imagining what may appear under the banner of AI in the next ten years. New ways to steal content for training? Out-of-control faux humans who intrude in scary and dangerous ways? Whole new versions of identity theft? There are so many terrible AI ideas out there (international diplomacy by AI, anyone?) and so many more to come--even as AI may actually be getting worse at doing its thing. Not all of them need to be regulated, but to pre-emptively deregulate the industry, dark future unseen, in the hopes of cashing in-- that's venal, careless stupidity of the highest order. 

Thursday, December 12, 2024

AI in Ed: The Unanswered Question

It is just absolutely positively necessary to get AI into education. I know this because on social media and in my email, people tell me this dozens of times every day. 

Just two examples. UCLA is excited to announce that a comparative literature course next semester will be "built around" UCLA's Kudu artificial intelligence platform. Meanwhile, Philadelphia schools and the University of Pennsylvania are teaming up to make Philadelphia a national AI-in-education model. The list goes on and on, and there are soooo many questions. Ethical questions. Questions about the actual capabilities of AI. Questions of resource use.

But here's the question I wish more --well, all, actually-- of these folks would ask.

What problem does it solve?

This is the oldest ed tech problem of them all, an issue that every teacher has encountered-- someone introduces a new piece of tech starting from the premise, "We must use this. Now let's figure out how." This often leads to the next step of, "If you just change your whole conception of your job, then this tech will be really useful. Will it get the job done better? Hey, shut up." 

This whole process is why so many, many, many, many pieces of ed tech ended up gathering dust, as well as birthing painfully useless sales pitchery masquerading as professional development. And when it comes to terrible PD, AI is right on top of things (see this excellent taxonomy of AI educourses, courtesy of Benjamin Riley).

So all AI adoption should start with that question.

What problem is this supposed to solve? 

Only after we answer that question can we ask the next important question, which is, will it actually solve the problem? Followed closely by asking what other problems it will create.

Sometimes there's a real answer. It turns out that once you dig through the inflated verbiage of the UCLA piece, what's really happening is that AI is whipping up a textbook for the course, using the professor's notes and materials from previous iterations of the course. So the problem being solved is "I wish I had a text for this course." Time will tell whether meticulously checking all of the AI's work for accuracy is less time-consuming than just writing the text herself.

[Update: Nope, it's more than the text. It's also the assignments and the TA work. What problem can this possibly solve other than "the professor does not know how to do their job" or "the professor thinks work is way too hard"? Shame on UCLA.]

On the other hand, Philadelphia's AI solution seems to be aimed at no problem at all. Says the dean of Penn's education grad school, Katherine O. Strunk:
Our goal is to leverage AI to foster creativity and critical thinking among students and develop policies to ensure this technology is used effectively and responsibly – while preparing both educators and students for a future where AI and technology will play increasingly central roles.

See, that's a pretty goal, but what's the problem we're solving here? Was it not possible to foster creativity and critical thinking prior to AI? Is the rest of the goal solving the problem of "we have a big fear of missing out"?

Assuaging FOMO is certainly one of the major problems that AI adoption is meant to address. The AI sector makes some huge and shiny predictions, including some that show a fundamental misunderstanding of how education works for real humans (looking at you, Sal Khan and your AI-simulated book characters). Some folks in education leadership are just deathly afraid of being left behind and so default to that old ed tech standard-- "Adopt it now and we'll figure out what we can do with it later."

So if someone in your organization is hollering that you need to pull in this AI large language model Right Now, keep asking that question--

What problem will it help solve?

Acceptable answers do not include: 

* Look at this thing an AI made! Isn't it cool! Shiny!

* I read about a school in West Egg that did some really cool AI thing.

* We could [insert things that you should already be doing].

* I figured once you got your hands on it, you could come up with some ideas.

* We're bringing in someone to do 90 minutes of training that will answer all your questions.

* Just shut up and do it.

The following answers are also not acceptable, but they probably won't be spoken aloud:

* We are going to replace humans and save money.

* It will make it easier to dump work on you that other people don't want to do.

Acceptable answers include:

* We could save time in Task X.

* We could do a better job of teaching Content Q and/or Skill Y.

Mind you, the proposed AI may still flunk when you move on to "Can it actually do this, really?"-- but if you don't know what you want it to do, it's senseless to debate whether it can do it.

There's a debate currently raging in the world of AI stuff, and as usual Benjamin Riley has it laid out pretty clearly here. But much of it is set around the questions "Is AI fake?" and "Does AI suck?", and in the classroom, both of those questions are of secondary importance to "What problem is AI supposed to help solve here?" If the person pushing AI can't answer that question, there really isn't any reason to continue the conversation.