Sunday, March 30, 2025

Ready For An AI Dean?

From the very first sentence, it's clear that this recent Inside Higher Ed post suffers from one more bad case of AI fabulism. 

In the era of artificial intelligence, one in which algorithms are rapidly guiding decisions from stock trading to medical diagnoses, it is time to entertain the possibility that one of the last bastions of human leadership—academic deanship—could be next for a digital overhaul.

AI fabulism and some precious notions about the place of deans in the universe of human leadership.

The author is Birce Tanriguden, a music education professor at the Hartt School at the University of Hartford, and this inquiry into what "AI could bring to the table that a human dean can't" is not her only foray into this topic. This month she also published in Women in Higher Education a piece entitled "The Artificially Intelligent Dean: Empowering Women and Dismantling Academic Sexism-- One Byte at a Time."

The WHE piece is academic-ish, complete with footnotes (though mostly about the sexism part). In that piece, Tanriguden sets out her possible solution:

AI holds the potential to be a transformative ally in promoting women into academic leadership roles. By analyzing career trajectories and institutional biases, our AI dean could become the ultimate career counselor, spotting those invisible banana peels of bias that often trip up women's progress, effectively countering the "accumulation of advantage" that so generously favors men.

Tanriguden notes the need to balance efficiency with empathy:

Despite the promise of AI, it's crucial to remember that an AI dean might excel in compiling tenure-track spreadsheets but could hardly inspire a faculty member with a heartfelt, "I believe in you." Academic leadership demands more than algorithmic precision; it requires a human touch that AI, with all its efficiency, simply cannot emulate.

I commend the author's turns of phrase, but I'm not sure about her grasp of AI. In fact, I'm not sure that current Large Language Models aren't actually better at faking a human touch than they are at arriving at efficient, trustworthy, data-based decisions.  

Back to the IHE piece, in which she lays out what she thinks AI brings to the deanship. Deaning, she argues, involves balancing all sorts of competing priorities while "mediating, apologizing and navigating red tape and political minefields."

The problem is that human deans are, well, human. As much as they may strive for balance, the delicate act of satisfying all parties often results in missteps. So why not replace them with an entity capable of making precise decisions, an entity unfazed by the endless barrage of emails, faculty complaints and budget crises?

The promise of AI lies in its ability to process vast amounts of data and reach quick conclusions based on evidence. 

Well, no. First, nothing being described here sounds like AI; this is just plain old programming, a "Dean In A Box" app. Which means it will process vast amounts of data and reach conclusions based on whatever the program tells it to do with that data, which is to say, whatever the programmer wrote. Suppose the programmer writes the program so that complaints from male faculty members are weighted twice as much as those from female faculty. So much for AI Dean's "lack of personal bias."
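To make that concrete, here's a minimal sketch of the idea. Everything here is invented for illustration (the function, the weights, the "urgency" stand-in come from me, not from Tanriguden's piece), but it shows how a programmer's choice becomes the machine's "judgment":

```python
# A hypothetical "Dean In A Box" complaint-triage routine.
# Nothing here learns anything; it just does what it was told.

COMPLAINT_WEIGHTS = {
    "male": 2.0,    # the programmer's choice, not the machine's
    "female": 1.0,
}

def complaint_priority(complaint_text: str, faculty_gender: str) -> float:
    """Score a complaint for the dean's queue."""
    base_urgency = len(complaint_text) / 100  # crude stand-in for any real urgency metric
    return base_urgency * COMPLAINT_WEIGHTS.get(faculty_gender, 1.0)

# Two identical complaints, two different priorities:
print(complaint_priority("The copier budget was cut again.", "male"))    # 0.64
print(complaint_priority("The copier budget was cut again.", "female"))  # 0.32
```

The output is perfectly consistent and perfectly "rational," and the bias is right there in a constant somebody typed.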

But suppose she really means AI in the sense of software that uses a form of machine learning to analyze and pull out patterns in its training data. AI "learns" to trade stocks by being trained with a gazillion previous stock trades and situations, thereby allowing it to suss out patterns for when to buy or sell. Medical diagnostic AI is trained with a gazillion examples of patients' medical histories, allowing it to recognize how a new entry from a new patient fits into all those patterns. Chatbots like ChatGPT do words by "learning" from vast (stolen) samples of word use, building a mountain of word pattern "rules" that allow them to determine which words are likely to come next.

All of these AIs are trained on huge data sets of examples from the past.

What would you use to train AI Dean? What giant database, what collection of info about the past behavior of various faculty and students and administrators and colleges and universities? More importantly, who would label the data sets as "successful" or "failed"? Medical data sets come with simple metrics like "the patient died from this" or "the patient lived fifty more years with no issues." Stock markets come with their own built-in measure of success. Who is going to determine which parts of the Dean Training Dataset are successful or not?
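To see why the labeling question matters, here's a toy supervised-learning sketch (invented data, assuming scikit-learn is available; the features and labels are hypothetical, not anything from the piece). The model can only learn whatever the label column says, and a human wrote the label column:

```python
from sklearn.linear_model import LogisticRegression

# Invented "past deaning decisions." The features are whatever the
# dataset builder thought mattered; the label is whatever the dataset
# builder decided to call a success.
past_cases = [
    # [faculty_complaints, budget_overrun_pct], label: 1 = "successful"
    ([12, 5.0], 1),
    ([30, 2.0], 0),
    ([4, 9.0], 1),
    ([25, 8.0], 0),
]

X = [features for features, _ in past_cases]
y = [label for _, label in past_cases]  # the whole argument hides in this column

model = LogisticRegression().fit(X, y)
print(model.predict([[15, 4.0]]))  # echoes back whatever the labeler believed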

This is one of the problems with chatbots. They have a whole lot of data about how language has been used, but no metadata to cover things like "this is horrifying racist Nazi stuff and is not a desirable use of language," and so we get the multiple examples of chatbots going off the rails.

Tanriguden tries to address some of this under the heading of how AI Dean would evaluate faculty:

With the ability to assess everything from research output to student evaluations in real time, AI could determine promotions, tenure decisions and budget allocations with a cold, calculated rationality. AI could evaluate a faculty member’s publication record by considering the quantity of peer-reviewed articles and the impact factor of the journals in which they are published.

Followed by some more details about those measures. Which raises another question. A human could do this-- if they wanted to. But if they don't want to, why would they want a computer program to do it?

The other point here is that once again, the person deciding what the algorithm is going to measure is the person whose biases are embedded in the system. 
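Here's a hedged sketch of what that looks like in practice (all metric names and weights invented for illustration; nothing here comes from the IHE piece):

```python
# A hypothetical tenure-scoring function. Every number below is a
# value judgment made by whoever wrote it, not a fact about merit.

METRIC_WEIGHTS = {
    "article_count": 0.5,       # why 0.5? because the programmer said so
    "mean_impact_factor": 2.0,  # why does impact factor count four times as much?
    "student_eval_avg": 1.0,    # and why do teaching scores land in the middle?
}

def tenure_score(record: dict) -> float:
    return sum(record[metric] * weight for metric, weight in METRIC_WEIGHTS.items())

candidate = {"article_count": 8, "mean_impact_factor": 3.1, "student_eval_avg": 4.4}
print(tenure_score(candidate))  # 4.0 + 6.2 + 4.4 = 14.6
```

The "cold, calculated rationality" is just the weights table, and somebody chose the weights.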

Tanriguden also presents "constant availability, zero fatigue" as a selling point. She says deans have to do a lot of meetings, but (her real example) when, at 2 AM, the department chair needs a decision on a new course offering, AI Dean can provide an answer "devoid of any influence of sleep deprivation or emotional exhaustion." 

First, is that really a thing that happens? Because I'm just a K-12 guy, so maybe I just don't know. But that seems to me like something that would happen in an organization that has way bigger problems than any AI can solve. But second, once again, who decided what AI Dean's answer will be based upon? And if it's such a clear criterion that it can be codified in software, why can't even a sleepy human dean apply it?
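And if the criterion really can be codified, it's a checklist, not intelligence. A minimal sketch with hypothetical criteria (my invention, purely to make the point):

```python
# If the "2 AM decision" reduces to codifiable criteria, any dean,
# sleep-deprived or not, could walk through the same list.

def approve_new_course(has_budget: bool, meets_credit_rules: bool,
                       instructor_available: bool) -> bool:
    return has_budget and meets_credit_rules and instructor_available

print(approve_new_course(True, True, False))  # False
```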

Finally, she goes with "fairness and impartiality," dreaming of how AI Dean would apply rules "without regard to the political dynamics of a faculty meeting." Impartial? Sure (though we could argue about how desirable that is, really). Fair? Only as fair as it was written to be, which starts with the programmer's definition of "fair."

Tanriguden wraps up the IHE piece by once again acknowledging that leadership needs more than data, and raising "the issue of the academic heart."

It is about understanding faculty’s nuanced human experiences, recognizing the emotional labor involved in teaching and responding to the unspoken concerns that shape institutional culture. Can an AI ever understand the deep-seated anxieties of a faculty member facing the pressure of publishing or perishing? Can it recognize when a colleague is silently struggling with mental health challenges that data points will never reveal?

In her conclusion she arrives at Hybrid Dean as an answer:

While the advantages of AI—efficiency, impartiality and data-driven decision-making—are tantalizing, they cannot fully replace the empathy, strategic insight and mentorship that human deans provide. The true challenge may lie not in replacing human deans but in reimagining their roles so that they can coexist with AI systems. Perhaps the future of academia involves a hybrid approach: an AI dean that handles (or at least guides) the operational decisions, leaving human deans to focus on the art of leadership and faculty development.

We're seeing this sort of resigned knuckling under from lots of education folks who seem to have accepted the predicted inevitability of AI (as always in ed tech, predicted by people who have a stake in the biz). But the important part here is that I don't believe AI can hold up its half of the bargain. In a job that involves managing humans and education and interpersonal stuff in an ever-changing environment, I don't believe AI can bring any of the contributions she expects from it.

1 comment:

  1. Hi Peter,
    Thanks for taking the time to engage with my piece. I’ve written a response on my own blog, Dr. Bees’ Nest, where I had a bit of fun continuing the conversation—curmudgeon to satirist.
    You can find it here: https://drbeesnest.com/buzz-through-the-articles/f/a-curmudgeonly-response-to-peter-greene

    Appreciate the dialogue!
    – Dr. Birce Tanriguden
