An AI arms race is under way. In a board room at every major college in America there is a consultant touting AI’s potential to lower costs, create new markets, and deliver more value to students.
Latham is certain of not only the inevitability, but the dominance of AI. And the FOMO is strong with this one. Here's just one of his broad sweeping portraits of the future.
Across the country some institutions are already piloting fully AI-instructed courses and utilizing AI to enable higher yields and improve retention, graduation rates, and job placement. Over the course of the next 10 years, AI-powered institutions will rise in the rankings. US News & World Report will factor a college’s AI capabilities into its calculations. Accrediting agencies will assess the degree of AI integration into pedagogy, research, and student life. Corporations will want to partner with universities that have demonstrated AI prowess. In short, we will see the emergence of the AI haves and have-nots. Sadly, institutions that need AI the most, such as community colleges and regional public universities, will be the last to get it. Prepare for an ever-widening chasm between resource-rich, technologically advanced colleges and those that are cash-starved and slow to adapt to the age of AI.
Yes, I am sure that wealthy, elite parents will send their children off to the ivies along with a note to the college saying, "Now don't try to stick my child with one of those dumb old human professors. I want that kid hooked up to an AI-driven computer."
Latham seems to think so, asserting that
Colleges that extol their AI capabilities will be signaling that they offer a personalized, responsive education, and cutting-edge research that will solve the world’s largest problems. Prospective students will ask, “Does your campus offer AI-taught courses?” Parents will ask: “Does your institution have AI advisers and tutors to help my child?”
I am the non-elite parent of two potential future college students, and this sounds like an education hellscape to me.
But Latham says this is all just "creative destruction," like when digital photography killed off film photography. He seriously mischaracterizes film photography to make his point, but there's no question that cheap and easy digital photography kneecapped the film variety.
Latham argues that the market will force this, that the children of the Amazon, Netflix and Google generation want "a speedy, on-demand, and low-friction experience." Of course, they may also have learned that increasingly enshittified tech platforms are the enemy that provides whole new versions of friction. Latham also argues that these students see college as a transaction, a bit of advanced job training, a commodity to be purchased in hopes of an acceptable Return On Investment, and while I'd like to say he's wrong, he probably has a point here because A) that's what some folks have been telling them their whole lives and B) we are in an increasingly scary country where a safe economic future is hard to come by. Still, his belief in consumer short-sightedness is a bit much.
So they regard college much like any other consumer product, and like those other products, they expect it to be delivered how they want, when they want. Why wouldn’t they?
Maybe because somewhere along the way they learned that they aren't the center of the universe?
Latham is sure that AI is an "existential threat" to the livelihood of professors. Faculty costs are a third of institutions' cost structure, he tells us, and AI "can deliver more value at lower cost." One might be inclined to ask what, exactly, is the value that AI is delivering more of, but Latham isn't going to answer that. I guess "education" is just a generic substance squeezed out of universities like tofu out of a pasta press.
If Latham hasn't pissed you off yet, this should do it:
Professors need to dispense with the delusional belief that AI can’t do their job. Faculty members often claim that AI can’t do the advising, mentoring, and life coaching that humans offer, and that’s just not true. They incorrectly equate AI with a next-generation learning-management system, such as Blackboard or Canvas, or they point out AI’s current deficiencies. They’re living in a fantasy. AI is being used to design cars and discover drugs: Do professors really think it can’t narrate and flip through PowerPoints as well as a human instructor?
And here is why colleges and universities are going to be the first to be put through the AI wringer-- there is a lot of really shitty teaching going on in colleges and universities. I would love to say that this comes down to Latham getting the professorial function wrong, that no good professor simply narrates through a PowerPoint deck, and I'd be correct. But do some actual professors just drone and flip? Yeah, I'm pretty sure they do.
In the end, Latham's argument is that shitty AI can replace a sub-optimal human instructor. That may be true, but it's beside the point. Can AI provide bad advising, bad mentoring, and bad life coaching? Probably. But who the heck wants that? Can AI do those jobs well? No, it can't. Because it cannot create a human connection, nor can it figure out what a human has going on in their head.
Latham is sure, however, that it's coming. By the end of the decade, there will be avatars, and Latham says to think about how your iPhone can recognize your face. Well,
Now imagine AI avatars that will be able to sense subtle facial expressions and interpret their meaning. If during a personalized lecture an avatar senses on a student’s face, in real time, that they’re frustrated with a specific concept, the avatar will shift the instructional mode to get the student back on track.
"Imagine" is doing a lot of work here, but even if I imagine it, can I imagine a reason that this is better done by AI instead of by an actual human instructor?
Beyond the hopeful expectation of technical capabilities, Latham makes one of the more common-yet-unremarked mistakes here, which is to assume that students will interact with the AI exactly as they would with human beings and not as they would with, say, a soulless lifeless hunk of machinery.
Never mind. Latham is still flying his fancy to a magical future where all your education is on a "portable, scalable blockchain" that includes every last thing you ever experienced. It does not seem to occur to him that he is describing a horrifyingly intrusive mechanized Big Brother, a level of surveillance beyond anything ever conceived.
Latham has news for the other functions of higher ed. AI can replace the registrar. AI will manage those blockchain records that "will be owned by the student and empower the student" because universities won't be able to stand in the way of students sharing records.
AI will create perfect marketing for student recruitment, targeted to individual students. AI will handle filtering admissions as well "by attributes that play to an institution's strength." Because AI magic! Magicky magic.
This is such bullshit, the worst kind of AI fetishization that imagines capabilities for AI that it will not have. AI is good at finding patterns by sifting through data; it does what a human could do if that human had infinite patience and time. Could a human being with infinite time and patience look at an individual 18-year-old and predict what the future holds for them? No. And neither can AI.
AI is going to take over career services, which I suppose could happen if we reach the point that the college AI reaches out to an AI contact it has in a particular business. And if you think students want to deal with human career-services professionals, Latham has a simple answer-- "No, they don't. Human interaction is not as important to today's students." I guess that settles that. It's gonna suck for students who want to go into human-facing professions (like, say, teaching) when they finally have to deal with human beings.
AI will handle accreditation, too! Witness the hellscape Latham describes:
In our unquestioning march to assessment that is driven by standardized processes and outcomes, we have laid the groundwork for AI’s ascendancy. Did the student learn? Did the student have a favorable post-graduation path, i.e., graduate school or employment? Accreditors will have no choice but to offer a stamp of approval even when AI is doing all the work. In the past decade, we have shifted from emphasizing the process of education to measuring the outcome of education when determining institutional effectiveness. We have standardized pedagogy, standardized student assessments, standardized teaching evaluations, and standardized accreditation. Accreditation by its nature is standardized, and we won’t need vice provosts to do that job much longer.
Administration will also be assimilated (I guess the AI can go ahead and shmooze wealthy alumni for contributions). Admins will deal with political pressure by asking, “Did you run this through AI?” or “Did the AI engine arrive at a similar decision?” Because if there's anything that can deal with something like the politics of the Trump regime, it's an AI.
He's not done yet. This is all so far just how AI will commandeer the existing university structure.
But that is only step one of a broader transition. Imagine a university employing only a handful of humans, run entirely by AI: a true AI university. In the next few years, it’s likely that a group of investors in conjunction with a major tech company like X, Google, Amazon, or Meta will launch an AI university with no campus and very few human instructors. By the year 2030, there will be standalone, autonomous AI universities.
Yes, because our tech overlords have always had a keen hand on how education works. Like that time the tech geniuses promised that Massive Open Online Courses would replace universities by, well, now. Or that time that Bill Gates failed to be right about education for decades. What a bold, baseless, inevitably wrong prediction for Latham to make--but he's not done.
AI U will have a small, tight leadership team who will select a "tight set of academic disciplines that lend themselves to the early-stage capabilities of artificial intelligence, such as accounting or history." Good God-- is there any discipline that lends itself to automation less than history? History only lends itself to this if you are one of those ahistorical illiterates who believes that history is just learning a bunch of dates and names because all history is known and set in stone. It is not, and this one sentence may be the most disqualifying sentence in the whole article.
Will AI U succeed? Latham allows that a vast majority will fail (like the dot-com bubble era) but dozens will survive and prosper, because this will work for non-traditional students (you know--like those predatory for-profit colleges did) who aren't served by the "one size fits all" model currently available, because I guess Latham figures that whether you go to Harvard or Hillsdale or The College of the Atlantic or Poor State U or your local Community College, you're getting pretty much the same thing. Says the guy who earlier asserted that AI would help select students based on how they played to the individual strengths of particular institutions. AI will target the folks who started a degree but never finished it. Sure.
AI U's secret strength will be that it will be cheapo. No campus and stuff. Traditional universities offering "an old-fashioned college experience complete with dorm rooms, a football stadium, and world-class dining" will continue, though they'll be using AI, too.
Winding down, Latham allows that predicting the carnage is easy, but "making people realize the inevitable" is hard (perhaps because it skips right over what reasons there are to think that this time, time #12,889,342, the tech world's prediction of the inevitable should be believed). "Predicting" is always easy when it's mostly just wishful guessing.
Students will benefit "tremendously" and some professors will remain. Jobs will be lost. Some disciplines will benefit, like the science-and-mathy ones. Latham sees a "silver lining" for the humanities-- "as AI fully assimilates itself into society, the ethical, moral, and legal questions will bring the humanities to the forefront." To put it another way, since the AI revolution will be run by people lacking moral and ethical grounding in the humanities, the humanities will have to step up to save society.
I have to stipulate that there is no doubt that Professor Latham is more accomplished and successful than I am. Probably smarter, and for all I know, a wonderful human being who is kind to his mother. But this sure seems like a lot of bunk. Here he has captured most of the features of AI sales. A lack of clarity about what teachers, ideally, actually do (it is not simply pour information into student brains to be recalled later). A lack of clarity about what AI actually does, and what capabilities it does and does not have. A faith that a whole lot of things can be determined with data and objectivity (spoiler alert: AI is not actually all that objective). Complete glossing over the scariest aspects of collecting every single detail of your life digitally, to be sorted through by future employers or hostile American governments (like the one we have right now which is trying to amalgamate all the data the feds have so that they can sift through it to find the people they want to attack).
Is AI going to have some kind of effect on universities? Sure. Are those effects inevitable? Not at all. Will the AI revolution resemble many other "transformational" education revolutions of the past, and how they failed? You betcha-- especially MOOCs. Are people going to find ways to use AI to cut some corners and make their lives easier, even if it means sacrificing quality? Yeah, probably. Is all of this going to get way more expensive once AI companies decide it's time to make some of their money back? Positively.
Would we benefit from navigating all of this with realistic discussions based on something other than hyperbolic marketing copy? Please, God. The smoke is supposed to stay inside the crystal ball.
Maybe AI can replace football coaches.