Sunday, March 30, 2025
Ready For An AI Dean?
From the very first sentence, it's clear that this recent Inside Higher Ed post suffers from one more bad case of AI fabulism.
In the era of artificial intelligence, one in which algorithms are rapidly guiding decisions from stock trading to medical diagnoses, it is time to entertain the possibility that one of the last bastions of human leadership—academic deanship—could be next for a digital overhaul.
AI fabulism and some precious notions about the place of deans in the universe of human leadership.
The author is Birce Tanriguden, a music education professor at the Hartt School at the University of Hartford, and this inquiry into what "AI could bring to the table that a human dean can't" is not her only foray into this topic. This month she also published in Women in Higher Education a piece entitled "The Artificially Intelligent Dean: Empowering Women and Dismantling Academic Sexism-- One Byte at a Time."
The WHE piece is academic-ish, complete with footnotes (though mostly about the sexism part). In that piece, Tanriguden sets out her possible solution:
AI holds the potential to be a transformative ally in promoting women into academic leadership roles. By analyzing career trajectories and institutional biases, our AI dean could become the ultimate career counselor, spotting those invisible banana peels of bias that often trip up women's progress, effectively countering the "accumulation of advantage" that so generously favors men.
Tanriguden notes the need to balance efficiency with empathy:
Despite the promise of AI, it's crucial to remember that an AI dean might excel in compiling tenure-track spreadsheets but could hardly inspire a faculty member with a heartfelt, "I believe in you." Academic leadership demands more than algorithmic precision; it requires a human touch that AI, with all its efficiency, simply cannot emulate.
I commend the author's turns of phrase, but I'm not sure about her grasp of AI. In fact, I'm not sure that current Large Language Models aren't actually better at faking a human touch than they are at arriving at efficient, trustworthy, data-based decisions.
Back to the IHE piece, in which she lays out what she thinks AI brings to the deanship. Deaning, she argues, involves balancing all sorts of competing priorities while "mediating, apologizing and navigating red tape and political minefields."
The problem is that human deans are, well, human. As much as they may strive for balance, the delicate act of satisfying all parties often results in missteps. So why not replace them with an entity capable of making precise decisions, an entity unfazed by the endless barrage of emails, faculty complaints and budget crises?
The promise of AI lies in its ability to process vast amounts of data and reach quick conclusions based on evidence.
Well, no. First, nothing being described here sounds like AI; this is just plain old programming, a "Dean In A Box" app. Which means it will process vast amounts of data and reach conclusions based on whatever the program tells it to do with that data, and that will be based on whatever the programmer wrote. Suppose the programmer writes the program so that complaints from male faculty members are weighted twice as much as those from female faculty. So much for AI dean's "lack of personal bias."
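To make that concrete, here is a minimal, entirely hypothetical sketch of the kind of rule a "Dean In A Box" app might run on. Every function name, field, and weight below is invented for illustration; the point is only that the "objective decision" is whatever the programmer wrote.

```python
# Hypothetical "Dean In A Box" complaint-triage rule -- nobody's real product.
# The "objective decision" is just whatever weights the programmer chose.

def complaint_priority(complaint: dict) -> float:
    """Score a faculty complaint for the dean's attention queue."""
    score = float(complaint["severity"])      # 1-10, as reported
    if complaint["faculty_gender"] == "male":
        score *= 2.0                          # the programmer's bias, now "policy"
    return score

complaints = [
    {"faculty_gender": "male", "severity": 3},
    {"faculty_gender": "female", "severity": 5},
]

# The milder complaint from the male faculty member (3 -> 6.0) now outranks
# the more severe one from the female faculty member (5.0).
for c in sorted(complaints, key=complaint_priority, reverse=True):
    print(c, complaint_priority(c))
```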
But suppose she really means AI in the sense of software that uses a form of machine learning to analyze and pull out patterns in its training data. AI "learns" to trade stocks by being trained with a gazillion previous stock trades and situations, thereby allowing it to suss out patterns for when to buy or sell. Medical diagnostic AI is trained with a gazillion examples of patient medical histories, allowing it to recognize how a new entry from a new patient fits into all those patterns. Chatbots like ChatGPT do words by "learning" from vast (stolen) samples of word use, which yields a mountain of word pattern "rules" that allow them to determine which words are likely to come next.
All of these AI are trained on huge data sets of examples from the past.
What would you use to train AI Dean? What giant database, what collection of info about the behavior of various faculty and students and administrators and colleges and universities in the past? More importantly, who would label the data sets as "successful" or "failed"? Medical data sets come with simple metrics like "the patient died from this" or "the patient lived fifty more years with no issues." Stock markets come with their own built-in measure of success. Who is going to determine which parts of the Dean Training Dataset are successful or not?
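To see the sticking point, consider a toy supervised-learning setup; this is a sketch only, and the features, labels, and numbers are all made up. Whatever a human puts in the outcome column is the entire definition of "successful deaning" the model can ever learn.

```python
# A toy supervised setup. The model can only learn whatever notion of
# "successful deaning" a human already encoded in the outcome labels.
from sklearn.linear_model import LogisticRegression

# Hypothetical features of past dean decisions:
# [budget cut fraction, complaints filed, enrollment change]
X = [
    [0.05, 12, -0.02],
    [0.00,  3,  0.04],
    [0.10, 20, -0.05],
    [0.02,  5,  0.01],
]

# The labels: 1 = "successful decision", 0 = "failed decision".
# There is no built-in metric here the way there is for stock returns or
# patient outcomes -- a person had to assign these, and that person's
# judgment (and biases) become the ground truth.
y = [0, 1, 0, 1]

model = LogisticRegression().fit(X, y)
print(model.predict([[0.03, 8, 0.00]]))   # "AI Dean's" verdict on a new situation
```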
This is one of the problems with chatbots. They have a whole lot of data about how language has been used, but no meta-data to cover things like "This is horrifying racist nazi stuff and is not a desirable use of language" and so we get the multiple examples of chatbots going off the rails.
Tanriguden tries to address some of this, under the heading of how AI Dean would evaluate faculty:
With the ability to assess everything from research output to student evaluations in real time, AI could determine promotions, tenure decisions and budget allocations with a cold, calculated rationality. AI could evaluate a faculty member’s publication record by considering the quantity of peer-reviewed articles and the impact factor of the journals in which they are published.
Followed by some more details about those measures. Which raises another question. A human could do this-- if they wanted to. But if they don't want to, why would they want a computer program to do it?
The other point here is that once again, the person deciding what the algorithm is going to measure is the person whose biases are embedded in the system.
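Here is a small sketch of that kind of "cold, calculated rationality," with weights I made up on the spot. Change one constant and the same "objective" algorithm ranks the same two candidates in the opposite order.

```python
# Hypothetical tenure-scoring rule. The "rationality" lives entirely in these
# weights, which someone had to pick -- and could have picked differently.

PUB_WEIGHT = 1.0        # per peer-reviewed article
IMPACT_WEIGHT = 2.0     # per point of average journal impact factor
TEACHING_WEIGHT = 0.5   # per point of student-evaluation average

def tenure_score(pubs: int, avg_impact: float, teaching_avg: float) -> float:
    return (PUB_WEIGHT * pubs
            + IMPACT_WEIGHT * avg_impact
            + TEACHING_WEIGHT * teaching_avg)

# Two invented candidates: a heavy publisher and a strong teacher.
print(tenure_score(pubs=10, avg_impact=1.5, teaching_avg=2.5))  # 14.25
print(tenure_score(pubs=3,  avg_impact=1.0, teaching_avg=4.5))  #  7.25

# Raise TEACHING_WEIGHT to 5.0 and the ranking flips (25.5 vs. 27.5) --
# same data, same "objective" algorithm, different embedded values.
```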
Tanriguden also presents "constant availability, zero fatigue" as a selling point. She says deans have to do a lot of meetings, but (her real example) when, at 2 AM, the department chair needs a decision on a new course offering, AI Dean can provide an answer "devoid of any influence of sleep deprivation or emotional exhaustion."
First, is that really a thing that happens? I'm just a K-12 guy, so maybe I just don't know. But that seems to me like something that would happen in an organization with way bigger problems than any AI can solve. Second, once again, who decided what AI Dean's answer will be based upon? And if the criterion is so clear that it can be codified in software, why can't even a sleepy human dean apply it?
Finally, she goes with "fairness and impartiality," dreaming of how AI Dean would apply rules "without regard to the political dynamics of a faculty meeting." Impartial? Sure (though we could argue about how desirable that is, really). Fair? Only as fair as it was written to be, which starts with the programmer's definition of "fair."
Tanriguden wraps up the IHE piece by once again acknowledging that leadership needs more than data, as well as "the issue of the academic heart."
It is about understanding faculty’s nuanced human experiences, recognizing the emotional labor involved in teaching and responding to the unspoken concerns that shape institutional culture. Can an AI ever understand the deep-seated anxieties of a faculty member facing the pressure of publishing or perishing? Can it recognize when a colleague is silently struggling with mental health challenges that data points will never reveal?
In her conclusion she arrives at Hybrid Dean as an answer:
While the advantages of AI—efficiency, impartiality and data-driven decision-making—are tantalizing, they cannot fully replace the empathy, strategic insight and mentorship that human deans provide. The true challenge may lie not in replacing human deans but in reimagining their roles so that they can coexist with AI systems. Perhaps the future of academia involves a hybrid approach: an AI dean that handles (or at least guides) the operational decisions, leaving human deans to focus on the art of leadership and faculty development.
We're seeing this sort of knuckling under from lots of education folks who seem resigned to the predicted inevitability of AI (as always in ed tech, predicted by people who have a stake in the biz). But the important part here is that I don't believe AI can hold up its half of the bargain. In a job that involves managing humans and education and interpersonal stuff in an ever-changing environment, I don't believe AI can bring any of the contributions she expects from it.
AR: Attempting To Make Non-conforming Haircuts Illegal
The Arkansas state legislature is deeply worried about trans persons. Rep. Mary Bentley (R-73rd Dist) has been trying to make trans kids go away for years, as with her 2021 bill to protect teachers who used students' dead names or misgendered them (that's the same year she pushed a bill to require the teaching of creationism in schools).
In 2023, Bentley successfully sponsored a bill that authorizes malpractice lawsuits against doctors who provide gender-affirming care for transgender youth. Now Bentley has proposed HB 1668, "The Vulnerable Youth Protection Act" which takes things a step or two further.
The bill authorizes lawsuits, and the language around the actual suing and collecting money part is long and complex-- complex enough to suspect that Bentley, whose work experience is running tableware manufacturer Bentley Plastics, might have had some help "writing" the bill. The part where it lists the forbidden activities is short, but it raises eyebrows.
The bill holds anyone who "knowingly causes or contributes to the social transitioning of a minor or the castration, sterilization, or mutilation of a minor" liable to the minor or their parents. The surgical part is no shocker-- I'm not sure you could find many doctors who would perform that surgery without parental consent, and certainly not in Arkansas (see 2023 law). But social transitioning? How does the bill define that?
"Social transitioning" means any act by which a minor adopts or espouses a gender identity that differs from the minor’s biological sex as determined by the sex organs, chromosomes, and endogenous profiles of the minor, including without limitation changes in clothing, pronouns, hairstyle, and name.
So a girl who wears "boy" jeans? A boy who wears his hair long? Is there an article of clothing that is so "male" that it's notably unusual to see a girl wearing it? I suppose that matters less because trans panic is more heavily weighted against male-to-female transition. But boy would I love to see a school's rules on what hair styles qualify as male or female.
Also, parental consent doesn't make any difference. Rep. Nicole Clowney keyed on that, as reported by the Arkansas Times:
“Is there anything in the bill that addresses the parental consent piece?” Clowney asked. “Even if a parent says, ‘Please call my child by this pronoun or this name,’ it appears to me that anybody who follows the wishes of that parent … that they would be subject to the civil liability you propose here. Is that correct?”
“That is correct,” Bentley said. “I think that we’re just stating that social transitioning is excessively harmful to children and we want to change that in our state. We want to make sure that our children are no longer exposed to that danger.”
In other words, this is not a "parental rights" issue, but a "let's not have any Trans Stuff in our state" issue.
In the hearing, an attorney from the Arkansas Attorney General's office observed that this was pretty much an indefensible violation of students' First Amendment rights, and the AG's office wouldn't be able to defend it. According to the Times, Bentley agreed to tweak the bill a bit, but we can already see where she wants to go with this.
A teacher who used the wrong pronoun or congratulated a student on their haircut could be on the hook for $10 million or more, and the person filing the suit has 20 years to do it.
I'm never going to pretend that these issues are simple or easy, that it's not tricky for a school to look out for the interests and rights of both parents and students when those parents and students are in conflict. But I would suggest remembering two things-- trans persons are human beings and they are not disappearing. They have always existed, they will always exist, and, to repeat, they are actual human persons.
I was in school with trans persons in the early seventies. I have had trans students in my classroom. They are human beings, deserving of the same decency and humanity as any other human. I know there are folks among us who insist on arguing from the premise that some people aren't really people and decency and humanity are not for everyone (and empathy is a weakness). I don't get why some people on the right, particularly many who call themselves Christians, are so desperately frightened/angry about trans persons, but I do know that no human problems are solved by treating some human beings as less-than-human. And when your fear leads to policing children's haircuts to fit your meager, narrow, brittle, fragile view of how humans should be, you are a menace to everyone around you. You have lost the plot. Arkansas, be better.