
Monday, April 14, 2025

Predicting AI Armageddon For Universities

Once again, the Chronicle of Higher Education is hosting some top-notch chicken littling about the coming of our robot overlords. This time it's "Are You Ready for the AI University," from Scott Latham, and it is some top-notch hand waving.

Latham is a professor at the Manning School of Business at the University of Massachusetts, with a background in tech and business, which certainly fits with the pitch he's making here. It's worth looking at because it leans hard on every marketing note we encounter in the current full court AI press.

The hyperbole here is huge. AI will be "forever altering the relationship between students and professors." Latham waves away mundane cheating concerns, the "tired debate about academic ethics" because students have always cheated and always will, so, I guess, never mind that ethics baloney. 
An AI arms race is under way. In a board room at every major college in America there is a consultant touting AI’s potential to lower costs, create new markets, and deliver more value to students.

Latham is certain of not only the inevitability but also the dominance of AI. And the FOMO is strong with this one. Here's just one of his broad, sweeping portraits of the future.

Across the country some institutions are already piloting fully AI-instructed courses and utilizing AI to enable higher yields and improve retention, graduation rates, and job placement. Over the course of the next 10 years, AI-powered institutions will rise in the rankings. US News & World Report will factor a college’s AI capabilities into its calculations. Accrediting agencies will assess the degree of AI integration into pedagogy, research, and student life. Corporations will want to partner with universities that have demonstrated AI prowess. In short, we will see the emergence of the AI haves and have-nots. Sadly, institutions that need AI the most, such as community colleges and regional public universities, will be the last to get it. Prepare for an ever-widening chasm between resource-rich, technologically advanced colleges and those that are cash-starved and slow to adapt to the age of AI.

Yes, I am sure that wealthy, elite parents will send their children off to the ivies along with a note to the college saying, "Now don't try to stick my child with one of those dumb old human professors. I want that kid hooked up to an AI-driven computer."  

Latham seems to think so, asserting that 

Colleges that extol their AI capabilities will be signaling that they offer a personalized, responsive education, and cutting-edge research that will solve the world’s largest problems. Prospective students will ask, “Does your campus offer AI-taught courses?” Parents will ask: “Does your institution have AI advisers and tutors to help my child?”

I am the non-elite parent of two potential future college students, and this sounds like an education hellscape to me.

But Latham says this is all just "creative destruction," like when digital photography killed off film photography. He seriously mischaracterizes film photography to make his point, but there's no question that cheap and easy digital photography kneecapped the film variety. 

Latham argues that the market will force this, that the children of the Amazon, Netflix and Google generation want "a speedy, on-demand, and low-friction experience." Of course, they may also have learned that increasingly enshittified tech platforms are the enemy that provides whole new versions of friction. Latham also argues that these students see college as a transaction, a bit of advanced job training, a commodity to be purchased in hopes of an acceptable Return On Investment, and while I'd like to say he's wrong, he probably has a point here because A) that's what some folks have been telling them their whole lives and B) we are in an increasingly scary country where a safe economic future is hard to come by. Still, his belief in consumer short-sightedness is a bit much.

So they regard college much like any other consumer product, and like those other products, they expect it to be delivered how they want, when they want. Why wouldn’t they?

Maybe because somewhere along the way they learned that they aren't the center of the universe? 

Latham is sure that AI is an "existential threat" to the livelihood of professors. Faculty costs are a third of an institution's cost structure, he tells us, and AI "can deliver more value at lower cost." One might be inclined to ask what, exactly, is the value that AI is delivering more of, but Latham isn't going to answer that. I guess "education" is just a generic substance squeezed out of universities like tofu out of a pasta press.

If Latham hasn't pissed you off yet, this should do it:

Professors need to dispense with the delusional belief that AI can’t do their job. Faculty members often claim that AI can’t do the advising, mentoring, and life coaching that humans offer, and that’s just not true. They incorrectly equate AI with a next-generation learning-management system, such as Blackboard or Canvas, or they point out AI’s current deficiencies. They’re living in a fantasy. AI is being used to design cars and discover drugs: Do professors really think it can’t narrate and flip through PowerPoints as well as a human instructor?

And here is why colleges and universities are going to be the first to be put through the AI wringer-- there is a lot of really shitty teaching going on in colleges and universities. I would love to say that this comes down to Latham getting the professorial function wrong, that no good professor simply narrates through a PowerPoint deck, and I'd be correct. But do some actual professors just drone and flip? Yeah, I'm pretty sure they do.

In the end, Latham's argument is that shitty AI can replace a sub-optimal human instructor. That may be true, but it's beside the point. Can AI provide bad advising, bad mentoring, and bad life coaching? Probably. But who the heck wants that? Can AI do those jobs well? No, it can't. Because it cannot create a human connection, nor can it figure out what a human has going on in their head. 

Latham is sure, however, that it's coming. By the end of the decade, there will be avatars, and Latham says to think about how your iPhone can recognize your face. Well, 

Now imagine AI avatars that will be able to sense subtle facial expressions and interpret their meaning. If during a personalized lecture an avatar senses on a student’s face, in real time, that they’re frustrated with a specific concept, the avatar will shift the instructional mode to get the student back on track.

"Imagine" is doing a lot of work here, but even if I imagine it, can I imagine a reason that this is better done by AI instead of by an actual human instructor.

Beyond the hopeful expectation of technical capabilities, Latham makes one of the more common-yet-unremarked mistakes here, which is to assume that students will interact with the AI exactly as they would with human beings and not as they would with, say, a soulless lifeless hunk of machinery. 

Never mind. Latham is still flying his fancy to a magical future where all your education is on a "portable, scalable blockchain" that includes every last thing you ever experienced. It does not seem to occur to him that he is describing a horrifyingly intrusive mechanized Big Brother, a level of surveillance beyond anything ever conceived. 

Latham has news for the other functions of higher ed. AI can replace the registrar. AI will manage those blockchain records that "will be owned by the student and empower the student" because universities won't be able to stand in the way of students sharing records. 

AI will create perfect marketing for student recruitment, targeted to individual students. AI will handle filtering admissions as well "by attributes that play to an institution's strength." Because AI magic! Magicky magic. 

This is such bullshit, the worst kind of AI fetishization that imagines capabilities for AI that it will not have. AI is good at finding patterns by sifting through data; it does what a human could do if that human had infinite patience and time. Could a human being with infinite time and patience look at an individual 18-year-old and predict what the future holds for them? No. And neither can AI.

AI is going to take over career services, which I suppose could happen if we reach the point that the college AI reaches out to an AI contact it has in a particular business. And if you think students want to deal with human career-services professionals, Latham has a simple answer-- "No, they don't. Human interaction is not as important to today's students." I guess that settles that. It's gonna suck for students who want to go into human-facing professions (like, say, teaching) when they finally have to deal with human beings.

AI will handle accreditation, too! Witness the hellscape Latham describes:

In our unquestioning march to assessment that is driven by standardized processes and outcomes, we have laid the groundwork for AI’s ascendancy. Did the student learn? Did the student have a favorable post-graduation path, i.e., graduate school or employment? Accreditors will have no choice but to offer a stamp of approval even when AI is doing all the work. In the past decade, we have shifted from emphasizing the process of education to measuring the outcome of education when determining institutional effectiveness. We have standardized pedagogy, standardized student assessments, standardized teaching evaluations, and standardized accreditation. Accreditation by its nature is standardized, and we won’t need vice provosts to do that job much longer.

Administration will also be assimilated (I guess the AI can go ahead and shmooze wealthy alumni for contributions). Admins will deal with political pressure by asking, “Did you run this through AI?” or “Did the AI engine arrive at a similar decision?” Because if there's anything that can deal with something like the politics of the Trump regime, it's an AI.

He's not done yet. This is all so far just how AI will commandeer the existing university structure. 

But that is only step one of a broader transition. Imagine a university employing only a handful of humans, run entirely by AI: a true AI university. In the next few years, it’s likely that a group of investors in conjunction with a major tech company like X, Google, Amazon, or Meta will launch an AI university with no campus and very few human instructors. By the year 2030, there will be standalone, autonomous AI universities.

Yes, because our tech overlords have always had a keen hand on how education works. Like that time the tech geniuses promised that Massive Open Online Courses would replace universities by, well, now. Or that time that Bill Gates failed to be right about education for decades. What a bold, baseless, inevitably wrong prediction for Latham to make--but he's not done.

AI U will have a small, tight leadership team who will select a "tight set of academic disciplines that lend themselves to the early-stage capabilities of artificial intelligence, such as accounting or history." Good God-- is there any discipline that lends itself to automation less than history? History only lends itself to this if you are one of those ahistorical illiterates who believes that history is just learning a bunch of dates and names because all history is known and set in stone. It is not, and this one sentence may be the most disqualifying sentence in the whole article.

Will AI U succeed? Latham allows that a vast majority will fail (as in the dot-com bubble era) but dozens will survive and prosper, because this will work for non-traditional students (you know--like those predatory for-profit colleges did) who aren't served by the "one size fits all" model currently available. I guess Latham figures that whether you go to Harvard or Hillsdale or The College of the Atlantic or Poor State U or your local community college, you're getting pretty much the same thing. Says the guy who earlier asserted that AI would help select students based on how they played to the individual strengths of particular institutions. AI will target the folks who started a degree but never finished it. Sure.

AI U's secret strength will be that it will be cheapo. No campus and stuff. Traditional universities offering "an old-fashioned college experience complete with dorm rooms, a football stadium, and world-class dining" will continue, though they'll be using AI, too. 

Winding down, Latham allows that predicting the carnage is easy, but "making people realize the inevitable" is hard (perhaps because it skips right over what reasons there are to think that this time, time #12,889,342, the tech world's prediction of the inevitable should be believed). "Predicting" is always easy when it's mostly just wishful guessing.

Students will benefit "tremendously" and some professors will remain. Jobs will be lost. Some disciplines will benefit, like the science-and-mathy ones. Latham sees a "silver lining" for the humanities-- "as AI fully assimilates itself into society, the ethical, moral, and legal questions will bring the humanities to the forefront." To put it another way, since the AI revolution will be run by people lacking moral and ethical grounding in the humanities, the humanities will have to step up to save society. 

I have to stipulate that there is no doubt that Professor Latham is more accomplished and successful than I am. Probably smarter, and for all I know, a wonderful human being who is kind to his mother. But this sure seems like a lot of bunk. Here he has captured most of the features of AI sales. A lack of clarity about what teachers, ideally, actually do (it is not simply pouring information into student brains to be recalled later). A lack of clarity about what AI actually does, and what capabilities it does and does not have. A faith that a whole lot of things can be determined with data and objectivity (spoiler alert: AI is not actually all that objective). Complete glossing over the scariest aspects of collecting every single detail of your life digitally, to be sorted through by future employers or hostile American governments (like the one we have right now, which is trying to amalgamate all the data the feds have so that they can sift through it to find the people they want to attack).

Is AI going to have some kind of effect on universities? Sure. Are those effects inevitable? Not at all. Will the AI revolution resemble many other "transformational" education revolutions of the past, and how they failed? You betcha-- especially MOOCs. Are people going to find ways to use AI to cut some corners and make their lives easier, even if it means sacrificing quality? Yeah, probably. Is all of this going to get way more expensive once AI companies decide it's time to make some of their money back? Positively. 

Would we benefit from navigating all of this with realistic discussions based on something other than hyperbolic marketing copy? Please, God. The smoke is supposed to stay inside the crystal ball. 


Monday, June 9, 2025

Another Bad AI Classroom Guide

We have to keep looking at these damned things because they share so many characteristics that we need to learn to recognize them when we see them again and react properly, i.e. by throwing moldy cabbage at them. I read this one so you don't have to.

And this one will turn up lots of places, because it's from the Southern Regional Education Board.

SREB was formed in 1948 by governors and legislators; it now involves 16 states and is based in Atlanta. Although its members include legislators from each of those states, some appointed by the governor, it is a nonpartisan, nonprofit organization. In 2019 they handled about $18 million in revenue. In 2021, they received a $410K grant from the Gates Foundation. Back in 2022, SREB was a cheerful sock puppet for folks who really wanted to torpedo tenure and teacher pay in North Carolina.

But hey-- they're all about "helping states advance student achievement." 

SREB's "Guidance for the Use of AI in the K-12 Classroom" has big fat red flag right off the top-- it lists no authors. In this golden age of bullshit and slop, anything that doesn't have an actual human name attached is immediately suspect.

But we can deduce who was more or less behind this-- the SREB Commission on Artificial Intelligence in Education. Sixteen states are represented by sixty policymakers, so we can't know whose hands actually touched this thing, but a few names jump out.

The chair is South Carolina Governor Henry McMaster, and his co-chair is Brad D. Smith, president of Marshall University in West Virginia and former Intuit CEO. As of 2023, he passed Jim Justice as the richest guy in WV. And he serves on lots of boards, like Amazon and JPMorgan Chase. Some states (like Oklahoma) sent mostly legislators, while some sent college or high school computer instructors. There are also some additional members, including Youngjun Choi (UPS Robotics AI Lab), Kim Majerus (VP US Public Sector Education for Amazon Web Services) and some other corporate folks.

The guide is brief (18 pages). Its basic pitch is, "AI is going to be part of the working world these students enter, so we need schools to train these future meat widgets so we don't have to." The introductory page (which is certainly bland, vague, and voiceless enough to be a word string generated by AI) offers seven paragraphs that show us where we're headed. I'll paraphrase.

#1: Internet and smartphones means students don't have to know facts. They can just skip to the deep thinking part. But they need critical thinking skills to sort out online sources. How are they supposed to deep and critically think when they don't have a foundation of content knowledge? The guide hasn't thought about that. AI "adds another layer" by doing all the work for them so now they have to be good prompt designers. Which again, would be hard if you didn't know anything and had never thought about the subject.

#2: Jobs will need AI. AI must be seen as a tool. It will do routine tasks, and students will get to engage in "rich and intellectually demanding" assignments. Collaborative creativity! 

#3: It's inevitable. It is a challenge to navigate. Stakeholders need guidance to know how to "incorporate AI tools while addressing potential ethical, pedagogical, and practical concerns." I'd say "potential" is holding the weight of the world on its shoulders. "Let's talk about the potential ethical concerns of sticking cocaine in Grandma's morning coffee." Potential.

#4: This document serves as a resource. "It highlights how AI can enhance personalized learning, improve data-driven decision-making, and free up teachers’ time for more meaningful student interactions." Because it's going to go ahead and assume that AI can, in fact, do any of that. Also, "it addresses the potential risks, such as data privacy issues, algorithmic biases, and the importance of maintaining the human element in teaching." See what they did there? The good stuff is a given certainty, but the bad stuff is just a "potential" down side.

#5: There's a "skills and attributes" list in the Appendix.

#6: This is mostly for teachers and admins, but lawmakers could totally use it to write laws, and tech companies could develop tech, and researchers could use it, too! Multitalented document here.

#7: This guide is to make sure that "thoughtful and responsible" AI use makes classrooms hunky and dory.

And with that, we launch into The Four Pillars of AI Use in the Classroom, each followed by uses and cautions.

Pillar #1
Use AI-infused tools to develop more cognitively demanding tasks that increase student engagement with creative problem-solving and innovative thinking.

"To best prepare students for an ever-evolving workforce..." 

"However, tasks that students will face in their careers will require them..."

That's the pitch. Students will need to be able to think "critically and creatively." So they'll need really challenging and "cognitively demanding" assignments. And, says the guide, "Now more than ever, students need to be creators rather than mere purveyors of knowledge."

Okay-- so what does AI have to do with this?
AI draws on a broad spectrum of knowledge and has the power to analyze a wide range of resources not typically available in classrooms.
This is some fine-tuned bullshit here, counting on the reader to imagine that they heard something that nobody actually said. AI "draws on" a bunch of "knowledge" in the sense that it sucks up a bunch of strings of words that, to a human, communicate knowledge. But AI doesn't "know" or "understand" any of it. Does it "analyze" the material? Well, in the sense that it breaks the words into tokens and performs complex math on them, there is a sort of analysis. But AI boosters really, really want you to anthropomorphize AI, to think about it as human-like in nature and not alien and kind of stupid.
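
For the curious, here is a toy sketch in plain Python--nothing remotely like any vendor's actual model, just an illustration of the point--showing that this flavor of "analysis" is pattern-counting over tokens, with no understanding anywhere in the loop.

```python
# Toy illustration (not any real LLM): "analysis" here means counting which
# tokens tend to follow which other tokens, then echoing those statistics back.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Tally next-token frequencies -- a crude bigram model.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(token):
    """Return a statistically frequent next token -- pure pattern lookup, no meaning."""
    counts = following.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("the"))  # picks a word seen after "the"; it "knows" nothing
```

A real model has billions of parameters instead of a little frequency table, but the nature of the operation--statistics over word patterns--is the same, which is why "draws on a broad spectrum of knowledge" is doing such sneaky work in that sentence.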

"While AI should not be the final step in the creative process, it can effectively serve in the early stages." Really? What is it about the early stages that makes them AI-OK? I get it--up to a point. I've told students that they can lift an idea from somewhere else as long as they make it their own. But is the choice of what to lift any less personal or creative than what one does with it? Sure, Shakespeare borrowed the ideas behind many of his plays, but that decision about what to borrow was part of his process. I'd just like to hear from any of the many people who think AI in beginning stages is okay why exactly they believe that the early stages are somehow less personal or creative or critical thinky than the other stages. What kind of weird value judgment is being made about the various stages of creation?

Use AI to "streamline" lesson planning. Teach critical thinking skills by, and I'm only sort of paraphrasing here, training students to spot the places where AI just gets stuff wrong. 

Use AI to create "interactive simulations." No, don't. Get that AI simulation of an historical figure right out of your classroom. It's creepy, and like much AI, it projects a certainty in its made-up results that it does not deserve. 

Use AI to create a counter-perspective. Or just use other humans.

Cautions? Everyone has to learn to be a good prompt engineer. In other words, humans must adjust themselves to the tool. Let the AI train you. 

Recognize AI bias, or at least recognize it exists. Students must learn to rewrite AI slop so that it sounds like the student and not the AI, although how students develop a voice when they aren't doing all the writing is rather a huge challenge as well. 

Also, when lesson planning, don't forget that AI doesn't know about your state standards. And if you are afraid that AI will replace actual student thinking, make sure your students have thought about stuff before they use the AI. Because the assumption under everything in this guide is that the AI must be used, all the time.

Pillar #2
Use AI to streamline teacher administrative and planning work.

The guide leads with an excuse-- "teachers' jobs have become increasingly more complex." Have they? Compared to when? The guide lists the usual features of teaching-- same ones that were there when I entered the classroom in 1979. I call bullshit. 

But use AI as your "planning partner." I am sad that teachers are out there doing this. It's not a great idea, but it's not surprising for a generation that entered the profession thinking that teacher autonomy was one of those old-timey things, as relevant as those penny-farthings that grampa goes on about. And these suggestions for use. Yikes.

Lesson planning! Brainstorming partner! And, without a trace of irony, a suggestion that you can get more personalized lessons from an impersonal non-living piece of software.

Let it improve and enhance a current assignment. Meh. Maybe, though I don't think it would save you a second of time (unless you didn't check whether AI was making shit up again). 

But "Help with Providing Feedback on and Grading Student Work?" Absolutely not. Never, ever. It cannot assess writing quality, it cannot do plagiarism detection, it cannot reduce grading bias (just replace it). If you think it even "reads" the work, check out this post. Beyond the various ways in which AI is not up to the task, it comes down to this-- why would your students write a work that no other human being was going to read?

Under "others," the guide offers things like drafting parent letters and writing letters of recommendation, and again, for the love of God, do not do this! Use it for translating materials for ESL students? I'm betting translation software would be more reliable. Inventory of supplies? Sure, I'm sure it wouldn't take more than twice as much time as just doing it by eyeball and paper. 

Oh, and maybe someday AI will be able to monitor student behavior and engagement. Yeah, that's not creepy (and improbable) at all.

Cautions include a reminder of AI bias, data privacy concerns, and overreliance on AI tools and decisions, and I'm thinking "cautions" is underselling the issues here. 

Pillar #3
Use AI to support personalized learning.

The guide starts by pointing out that personalized learning is important because students learn differently. Just in case you hadn't heard. That is followed by the same old pitch about dynamically adaptive instruction based on data collected from prior performance, only with "AI" thrown in. Real time! Engagement! Adaptive!

AI can provide special adaptations for students with special needs. Like text-to-speech (is that AI now?). Also, intelligent tutoring systems that "can mimic human tutors by offering personalized hints, encouragement and feedback based on each student’s unique needs." So, an imitation of what humans can do better.

Automated feedback. Predictive analytics to spot when a student is in trouble. AI can pick student teams for you (nope). More of the same.

Cautions? There's a pattern developing. Data privacy and security. AI bias. Overreliance on tech. Too much screen time. Digital divide. Why those last two didn't turn up in the other pillars I don't know. 

Pillar #4
Develop students as ethical and proficient AI users.

I have a question-- is it possible to find ethical ways to use unethical tools? Is there an ethical way to rob a bank? What does ethical totalitarianism look like?

Because AI, particularly Large Language Models, is based on massive theft of other people's work. And that's before we get to the massive power and water resources being sucked up by AI.

But we'll notice another point here-- the problems of ethical AI are all the responsibility of the student users. "Teaching students to use AI ethically is crucial for shaping a future where technology serves humanity’s best interests." You might think that an ethical future for AI might also involve the companies producing it and the lawmakers legislating rules around it, but no-- this is all on students (and remember-- students were not the only audience the guide listed) and by extension, their teachers. 

Uses? Well, the guide is back on the beginning stages of writing:
AI can also help organize thoughts and ideas into a coherent outline. AI can recommend logical sequences and suggest sections or headings to include by analyzing the key points a student wants to cover. AI can also offer templates, making it easier for students to create well-structured and focused outlines.

These are all things the writer should be doing. Why the guide thinks using AI to skip the "planning stages" is ethical, but using it in any other stages is not, is a mystery to me.

Students also need to develop "critical media literacy" because the AI is going to crank out well-polished turds, and it's the student's job to spot them. "Our product helps dress you, but sometimes it will punch you in the face. We are not going to fix it. It is your job to learn how to duck."

Cross-disciplinary learning-- use the AI in every class, for different stuff! Also, form a student-led AI ethics committee to help address concerns about students substituting AI for their own thinking. 

Concerns? Bias, again. Data security-- which is, incidentally, also the teacher's responsibility. AI research might have ethical implications. Students also might be tempted to cheat-- the solution is for teachers to emphasize integrity. You know, just in case the subject of cheating and integrity has never ever come up in your classroom before. Deepfakes and hallucinations damage the trustworthiness of information, and that's why we are calling for safeguards, restrictions, and solutions from the industry. Ha! Just kidding. Teachers should emphasize that these are bad, and students should watch out for them.

Appendix

A couple of charts showing aptitudes and knowledge needed by teachers and admins. I'm not going to go through all of this. A typical example would be the "knowledge" item-- "Understand AI's potential and what it is and is not"-- and the "is and is not" part is absolutely important, and the guide absolutely avoids actually addressing what AI is and is not. That is a basic feature of this guide--it's not just that it doesn't give useful answers, but it fails to ask useful questions.

It wraps up with the Hess Cognitive Rigor Matrix. Whoopee. It's all just one more example of bad guidance for teachers, but good marketing for the techbros. 



Tuesday, January 28, 2025

AI Is For The Ignorant

Well, here's a fun piece of research about AI and who is inclined to use it.

The title for this article in the Journal of Marketing-- "Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity"-- gives away the game, and the abstract tells us more than enough about what the research found.

You may think that familiarity with technology leads to more willingness to use it, but AI runs the opposite direction.

Contrary to expectations revealed in four surveys, cross country data and six additional studies find that people with lower AI literacy are typically more receptive to AI.

That linkage is explained simply enough. People who don't really understand what AI is or what it actually does "are more likely to perceive AI as magical and experience feelings of awe in the face of AI’s execution of tasks that seem to require uniquely human attributes." 

The researchers-- Stephanie Tully (USC Marshall School of Business), Chiara Longoni (Bocconi University), and Gil Appel (GW School of Business)-- are all academics in the world of business and marketing, and while I wish they were using their power for Good here, that's not entirely the case.

Having determined that people with "lower AI literacy" are more likely to fork over money for AI products, they reach this conclusion:

These findings suggest that companies may benefit from shifting their marketing efforts and product development towards consumers with lower AI literacy. Additionally, efforts to demystify AI may inadvertently reduce its appeal, indicating that maintaining an aura of magic around AI could be beneficial for adoption.

To sell more of this non-magical product, make sure not to actually educate consumers. Emphasize the magic, and go after the low-information folks. Well, why not. It's a marketing approach that has worked in certain other areas of American life. In a piece about their own research, the authors suggest a tiny bit of nuance, but the idea is the same. If you show AI doing stuff that "only humans can do" without explaining too clearly how the illusion is created, you can successfully "develop and deploy" new AI-based products "without causing a loss of the awe that inspires many people to embrace this new technology." Gotta keep the customers just ignorant enough to make the sale.

And lord knows lots of AI fans are already on the case. We've been subjected to an unending parade of lazy journalism of the "Wow! This computer can totally write limericks like a human" variety. For a recent example, Reid Hoffman, co-founder of LinkedIn, Microsoft board member, and early funder of OpenAI, unleashed a warm, fuzzy, magical woo-woo invocation of AI in the New York Times that is all magic and zero information.

Hoffman opens with an anecdote about someone asking ChatGPT "based on everything you know about me, draw a picture of what you think my current life looks like." This is Grade A magical AI puffery; ChatGPT does not "know" anything about you, nor does it have thoughts or an imagination to be used to create a visual image of your life. "Like any capable carnival mind reader," continues Hoffman, comparing computer software not just to a person, but to a magical person. And when ChatGPT gets something wrong, like putting a head of broccoli on your desk, Hoffman paints that "quirky charm" as a chance for the human to reflect and achieve a flash of epiphany. 

But what Hoffman envisions is way more magical than that-- a world in which the AI knows you better than you know yourself, that could record the details of your life and analyze them for you. 

Decades from now, as you try to remember exactly what sequence of events and life circumstances made you finally decide to go all-in on Bitcoin, your A.I. could develop an informed hypothesis based on a detailed record of your status updates, invites, DMs, and other potentially enduring ephemera that we’re often barely aware of as we create them, much less days, months or years after the fact.

When you’re trying to decide if it’s time to move to a new city, your A.I. will help you understand how your feelings about home have evolved through thousands of small moments — everything from frustrated tweets about your commute to subtle shifts in how often you’ve started clicking on job listings 100 miles away from your current residence.

The research trio suggested that the more AI imitates humanity, the better it sells to those low-information humans. Hoffman suggests that the AI can be more human than the user. But with science!

Do we lose something of our essential human nature if we start basing our decisions less on hunches, gut reactions, emotional immediacy, faulty mental shortcuts, fate, faith and mysticism? Or do we risk something even more fundamental by constraining or even dismissing our instinctive appetite for rationalism and enlightenment?

 Software will make us more human than humans?

So imagine a world in which an A.I. knows your stress levels tend to drop more after playing World of Warcraft than after a walk in nature. Imagine a world in which an A.I. can analyze your reading patterns and alert you that you’re about to buy a book where there’s only a 10 percent chance you’ll get past Page 6.

Instead of functioning as a means of top-down compliance and control, A.I. can help us understand ourselves, act on our preferences and realize our aspirations.

I am reminded of Knewton, a big ed tech ball of whiz-bangery that was predicting it would collect so much information about students that it would be able to tell students what they should eat for breakfast on test day. It did not do that; instead it went out of business. Even though it did its very best to market itself via magic.

If I pretend that I think Hoffman's magical AI will ever exist, I still have other questions, not the least of which is why would someone listen to an AI saying "You should go play World of Warcraft" or "You won't be able to finish Ulysses" when people tend to ignore other actual humans with similar advice. And where do we land if Being Human is best demonstrated by software rather than actual humans? What would it do to humans to offload the business of managing and understanding their own lives? 

We have a hint. Michael Gerlich (Head of the Center for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School) has published "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking"* and while there's a lot of scholaring going on here, the result is actually unsurprising.

Let's say you were really tired of walking everywhere, so you outsourced the walking to someone else, and you sat on the couch every waking hour. Can we predict what would happen to the muscles in your legs? Sure--when someone else bears the load, your own load-bearing members get weaker.

Gerlich finds the same holds true for outsourcing your thinking to AI. "The correlation between AI tool usage and critical thinking was found to be strongly negative." There are data and charts and academic talk, but the bottom line is that "cognitive offloading" damages critical thinking. That makes sense in several ways. Critical thinking is not a free-floating skill; you have to think about something, so content knowledge is necessary, and if you are using AI to know things and store your knowledge for you, your thinking isn't in play. Nor is it working when the AI writes topic sentences and spits out other work for you.

In the end, it's just like your high school English teacher told you-- if someone else does your homework for you, you won't learn anything.

You can sell the magic and try to preserve the mystery and maybe move a few more units of whatever AI widget you're marketing this week, but if you're selling something that people have to be ignorant to want so that they can offload some human activity, then what are you doing it for? To have more time for World of Warcraft?

If AI is going to be any use at all, it will not be because it hid itself behind a mask of faux-human magical baloney or used an imitation of magic to capitalize on the ignorance of consumers, but because it can do something useful and be clear and honest about what it is actually, really doing.


*I found this article thanks to Audrey Watters


Sunday, March 30, 2025

Ready For An AI Dean?

From the very first sentence, it's clear that this recent Inside Higher Ed post suffers from one more bad case of AI fabulism. 

In the era of artificial intelligence, one in which algorithms are rapidly guiding decisions from stock trading to medical diagnoses, it is time to entertain the possibility that one of the last bastions of human leadership—academic deanship—could be next for a digital overhaul.

AI fabulism and some precious notions about the place of deans in the universe of human leadership.

The author is Birce Tanriguden, a music education professor at the Hartt School at the University of Hartford, and this inquiry into what "AI could bring to the table that a human dean can't" is not her only foray into this topic. This month she also published in Women in Higher Education a piece entitled "The Artificially Intelligent Dean: Empowering Women and Dismantling Academic Sexism-- One Byte at a Time."

The WHE piece is academic-ish, complete with footnotes (though mostly about the sexism part). In that piece, Tanriguden sets out her possible solution:

AI holds the potential to be a transformative ally in promoting women into academic leadership roles. By analyzing career trajectories and institutional biases, our AI dean could become the ultimate career counselor, spotting those invisible banana peels of bias that often trip up women's progress, effectively countering the "accumulation of advantage" that so generously favors men.

Tanriguden notes the need to balance efficiency with empathy:

Despite the promise of AI, it's crucial to remember that an AI dean might excel in compiling tenure-track spreadsheets but could hardly inspire a faculty member with a heartfelt, "I believe in you." Academic leadership demands more than algorithmic precision; it requires a human touch that AI, with all its efficiency, simply cannot emulate.

I commend the author's turns of phrase, but I'm not sure about her grasp of AI. In fact, I'm not sure that current Large Language Models aren't actually better at faking a human touch than they are at arriving at efficient, trustworthy, data-based decisions.  

Back to the IHE piece, in which she lays out what she thinks AI brings to the deanship. Deaning, she argues, involves balancing all sorts of competing priorities while "mediating, apologizing and navigating red tape and political minefields."

The problem is that human deans are, well, human. As much as they may strive for balance, the delicate act of satisfying all parties often results in missteps. So why not replace them with an entity capable of making precise decisions, an entity unfazed by the endless barrage of emails, faculty complaints and budget crises?

The promise of AI lies in its ability to process vast amounts of data and reach quick conclusions based on evidence. 

Well, no. First, nothing being described here sounds like AI; this is just plain old programming, a "Dean In A Box" app. Which means it will process vast amounts of data and reach conclusions based on whatever the program tells it to do with that data, and that will be based on whatever the programmer wrote. Suppose the programmer writes the program so that complaints from male faculty members are weighted twice as much as those from female faculty. So much for AI dean's "lack of personal bias." 
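
To make that concrete, here is a hypothetical few lines of the sort of thing that would sit inside "Dean In A Box." The names and numbers are made up, but they show where the "objectivity" actually lives:

```python
# Hypothetical "Dean In A Box" scoring rule. The bias is not in the data;
# it is in the weight the programmer chose to hard-code.
def complaint_score(complaints: int, faculty_is_male: bool) -> float:
    weight = 2.0 if faculty_is_male else 1.0  # the programmer's decision, nothing more
    return weight * complaints

print(complaint_score(3, faculty_is_male=True))   # 6.0
print(complaint_score(3, faculty_is_male=False))  # 3.0
```

Same complaints, different score, and the software will report both with the same unblinking confidence.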

But suppose she really means AI in the sense of software that uses a form of machine learning to analyze and pull out patterns in its training data. AI "learns" to trade stocks by being trained with a gazillion previous stock trades and situations, thereby allowing it to suss out patterns for when to buy or sell. Medical diagnostic AI is trained with a gazillion examples of medical histories of patients, allowing it to recognize how a new entry from a new patient fits into all those patterns. Chatbots like ChatGPT do words by "learning" from vast (stolen) samples of word use that lead to a mountain of word pattern "rules" that allow it to determine which words are likely to come next.

All of these AI are trained on huge data sets of examples from the past.

What would you use to train AI Dean? What giant database would you use to train it, what collection of info about the behavior of various faculty and students and administrators and colleges and universities in the past? More importantly, who would label the data sets as "successful" or "failed"? Medical data sets come with simple metrics like "patient died from this" or "the patient lived fifty more years with no issues." Stock markets come with their own built-in measure of success. Who is going to determine which parts of the Dean Training Dataset are successful or not?
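
If it helps to see why the labels matter, here is a minimal supervised-learning sketch in plain Python--hypothetical features, hypothetical labels--in which the only thing the "model" can do is echo back whatever some human already decided counts as success:

```python
# Minimal nearest-neighbor sketch (hypothetical data). Every training example
# carries a label that a person chose; the model can only repeat those judgments.
from collections import Counter

# (complaints_per_term, publications_per_year) -> label a human assigned
training_data = [
    ((2, 5), "success"),
    ((9, 1), "failure"),
    ((1, 3), "success"),
    ((8, 0), "failure"),
]

def predict(features, k=3):
    """Copy the majority label of the k most similar past cases."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(training_data, key=lambda item: distance(item[0], features))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(predict((7, 2)))  # "failure" -- because of whoever labeled the training set
```

No labels, no training; biased labels, biased dean.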

This is one of the problems with chatbots. They have a whole lot of data about how language has been used, but no metadata to cover things like "This is horrifying racist nazi stuff and is not a desirable use of language," and so we get the multiple examples of chatbots going off the rails.

Tanriguden tries to address some of this, under the heading of how AI Dean would evaluate faculty:

With the ability to assess everything from research output to student evaluations in real time, AI could determine promotions, tenure decisions and budget allocations with a cold, calculated rationality. AI could evaluate a faculty member’s publication record by considering the quantity of peer-reviewed articles and the impact factor of the journals in which they are published.

Followed by some more details about those measures. Which raises another question. A human could do this-- if they wanted to. But if they don't want to, why would they want a computer program to do it?

The other point here is that once again, the person deciding what the algorithm is going to measure is the person whose biases are embedded in the system. 

Tanriguden also presents "constant availability, zero fatigue" as a selling point. She says deans have to do a lot of meetings, but (her real example) when, at 2 AM, the department chair needs a decision on a new course offering, AI Dean can provide an answer "devoid of any influence of sleep deprivation or emotional exhaustion." 

First, is that really a thing that happens? Because I'm just a K-12 guy, so maybe I just don't know. But that seems to me like something that would happen in an organization that has way bigger problems than any AI can solve. But second, once again, who decided what AI Dean's answer will be based upon? And if it's such a clear criterion that it can be codified in software, why can't even a sleepy human dean apply it?

Finally, she goes with "fairness and impartiality," dreaming of how AI Dean would apply rules "without regard to the political dynamics of a faculty meeting." Impartial? Sure (though we could argue about how desirable that is, really). Fair? Only as fair as it was written to be, which starts with the programmer's definition of "fair."

Tanriguden wraps up the IHE piece once again acknowledging that leadership needs more than data as well as "the issue of the academic heart." 

It is about understanding faculty’s nuanced human experiences, recognizing the emotional labor involved in teaching and responding to the unspoken concerns that shape institutional culture. Can an AI ever understand the deep-seated anxieties of a faculty member facing the pressure of publishing or perishing? Can it recognize when a colleague is silently struggling with mental health challenges that data points will never reveal?

In her conclusion she arrives at Hybrid Dean as an answer:

While the advantages of AI—efficiency, impartiality and data-driven decision-making—are tantalizing, they cannot fully replace the empathy, strategic insight and mentorship that human deans provide. The true challenge may lie not in replacing human deans but in reimagining their roles so that they can coexist with AI systems. Perhaps the future of academia involves a hybrid approach: an AI dean that handles (or at least guides) the operational decisions, leaving human deans to focus on the art of leadership and faculty development.

We're seeing lots of this sort of knuckling under from education folks who seem resigned to the predicted inevitability of AI (as always in ed tech, predicted by people who have a stake in the biz). But the important part here is that I don't believe that AI can hold up its half of the bargain. In a job that involves management of humans and education and interpersonal stuff in an ever-changing environment, I don't believe AI can bring any of the contributions that she expects from it.

Tuesday, May 13, 2025

GOP Proposes Unregulated AI

The current regime may not have a clue what AI actually is, but they are determined to get out in front of it.

First we had Dear Leader's bonkers executive order back in April to set up an AI task force that would create an AI challenge that would boost the use of AI in education. Plus "improving education through artificial intelligence" (an especially crazypants turn of phrase) that would 
seek to establish public-private partnerships with leading AI industry organizations, academic institutions, nonprofit entities, and other organizations with expertise in AI and computer science education to collaboratively develop online resources focused on teaching K-12 students foundational AI literacy and critical thinking skills.

Does the person who whipped this together think AI and critical thinking are a package, or does this construction acknowledge that AI and critical thinking are two separate things? The eo also promises all sorts of federal funding to back all this vague partnering. The eo also contains this sad line:

the Secretary of Education shall identify and implement ways to utilize existing research programs to assist State and local efforts to use AI for improved student achievement, attainment, and mobility.

"Existing research programs"? Are there some? And "achievement, attainment, and mobility" mean what? 

The eo also touts using Title II funds for boosting AI training for teachers, like reducing "time-intensive administrative tasks" and training that would help teachers "effectively integrate AI-based tools and modalities in classrooms."

Bureaucratic bloviating. Fine. Whatever. But House Republicans decided to take their game up a notch this week by adding this tasty piece of baloney. Budget reconciliation now includes this chunk of billage. The first part has to do with selling off some pieces of the broadcast spectrum, but the second part--

no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10- year period beginning on the date of the enactment of this Act.

There are exceptions, mostly of the "anything that helps AI companies expand or make money is okay" variety.

A ban on AI regulation is dumb, particularly given that folks are still trying to figure out what it can or can't do. 

But a ban on regulation for the next decade??!! Who knew that the GOP would be involved in launching Skynet? 

"Sir, it looks like Skynet is about to send something called a terminator to kill us all. Should we take action to prevent it?"

"Stand down, kid. The Republican party has forbidden us to take action. Kiss your children goodbye."

Seriously, we can already see that AI is taking us to some undesirable places, and God only knows what might develop over the next decade. To tie our regulatory hands, to unilaterally disarm and give up any ability to put restraints on the cyber-bull in our cultural china shop is just foolish.

Of course, what the proposed anti-regulation and the eo have in common is that they prioritize the chance for corporations to profit from AI. That's common to many actions of the regime, all based on the notion that there is nothing so precious in our country or culture that it should be protected from the impulse to make a buck. What the GOP proposes is a "drill, baby, drill" for AI, with the nation's youths, education system, and culture playing the part of the great outdoors.

Anti-regulation for AI is worse than the other brands of deregulation being pushed, because while we have some idea what deforesting a national park might look like, we have no way of imagining what may appear under the banner of AI in the next ten years. New ways to steal content for training? Out-of-control faux humans who intrude in scary and dangerous ways? Whole new versions of identity theft? There are so many terrible AI ideas out there (international diplomacy by AI, anyone?) and so many more to come--even as AI may be actually getting worse at doing its thing. Not all of them need to be regulated, but to pre-emptively deregulate the industry, dark future unseen, in the hopes of cashing in-- that's venal, careless stupidity of the highest order.

Thursday, December 12, 2024

AI in Ed: The Unanswered Question

It is just absolutely positively necessary to get AI into education. I know this because on social media and in my email, people tell me this dozens of times every day. 

Just two examples. UCLA is excited to announce that a comparative literature course next semester will be "built around" UCLA's Kudu artificial intelligence platform. Meanwhile, Philadelphia schools and the University of Pennsylvania are teaming up to make Philadelphia a national AI-in-education model. The AI-in-education list goes on and on, and there are soooo many questions. Ethical questions. Questions about the actual capabilities of AI. Questions of resource use.

But here's the question I wish more --well, all, actually-- of these folks would ask.

What problem does it solve?

This is the oldest ed tech problem of them all, an issue that every teacher has encountered-- someone introduces a new piece of tech starting from the premise, "We must use this. Now let's figure out how." This often leads to the next step of, "If you just change your whole conception of your job, then this tech will be really useful. Will it get the job done better? Hey, shut up." 

This whole process is why so many, many, many, many pieces of ed tech ended up gathering dust, as well as birthing painfully useless sales pitchery masquerading as professional development. And when it comes to terrible PD, AI is right on top of things (see this excellent taxonomy of AI educourses, courtesy of Benjamin Riley).

So all AI adoption should start with that question.

What problem is this supposed to solve? 

Only after we answer that question can we ask the next important question, which is, will it actually solve the problem? Followed closely by asking what other problems it will create.

Sometimes there's a real answer. It turns out that once you dig through the inflated verbiage of the UCLA piece, what's really happening is that AI is whipping up a textbook for the course, using the professor's notes and materials from previous iterations of the course. So the problem being solved is "I wish I had a text for this course." Time will tell whether having to meticulously check all of the AI's work for accuracy is less time-consuming than just writing the text herself.

[Update: Nope, it's more than the text. It's also the assignments and the TA work. What problem can this possibly solve other than "The professor does not know how to do their job" or "The professor thinks work is way too hard"? Shame on UCLA.]

On the other hand, Philadelphia's AI solution seems to be aimed at no problem at all. Says the dean of Penn's education grad school, Katharine O. Strunk:
Our goal is to leverage AI to foster creativity and critical thinking among students and develop policies to ensure this technology is used effectively and responsibly – while preparing both educators and students for a future where AI and technology will play increasingly central roles.

See, that's a pretty goal, but what's the problem we're solving here? Was it not possible to foster creativity and critical thinking prior to AI? Is the rest of the goal solving the problem of "We have a big fear of missing out"?

Assuaging FOMO is certainly one of the major problems that AI adoption is meant to address. The AI sector makes some huge and shiny predictions, including some that show a fundamental misunderstanding of how education works for real humans (looking at you, Sal Khan and your AI-simulated book characters). Some folks in education leadership are just deathly afraid of being left behind and so default to that old ed tech standard-- "Adopt it now and we'll figure out what we can do with it later."

So if someone in your organization is hollering that you need to pull in this AI large language model Right Now, keep asking that question--

What problem will it help solve?

Acceptable answers do not include: 

* Look at this thing an AI made! Isn't it cool! Shiny!

* I read about a school in West Egg that did some really cool AI thing.

* We could [insert things that you should already be doing].

* I figured once you got your hands on it, you could come up with some ideas.

* We're bringing in someone to do 90 minutes of training that will answer all your questions.

* Just shut up and do it.

The following answers are also not acceptable, but they probably won't be spoken aloud:

* We are going to replace humans and save money.

* It will make it easier to dump work on you that other people don't want to do.

Acceptable answers include:

* We could save time in Task X.

* We could do a better job of teaching Content Q and/or Skill Y.

Mind you, the proposed AI may still flunk when you move on to the "Can it actually do this, really?" question, but if you don't really know what you want it to do, it's senseless to debate whether or not it can do that.

There's some debate raging currently in the world of AI stuff, and as usual Benjamin Riley has it laid out pretty clearly here. But much of it is set around the questions "Is AI fake?" and "Does AI suck?" and in the classroom, both of those questions are of secondary importance to "What problem is AI supposed to help solve here?" If the person pushing AI can't answer that question, there really isn't any reason to continue the conversation.



Saturday, January 5, 2019

Terror, Hubris and AI (or Can Artificial Intelligence Fake Being A Self-important Pompous Tool?)

Artificial Intelligence (AI) is, if not a hot new product itself, the additive that helps sell a million other products ("New! Improved!! Now with AI!!!"). And the proponents of AI are loaded with big brass cyberballs when it comes to making claims about their product. And all the most terrible and frightening things are happening in China. Come down this terrifying baloney-stuffed rabbit hole with me.

It starts with an absolutely glorious website-- PR Newswire. Newswire is part of Cision, a company that promises to move your PR by connecting you through their Influencer Graph, built with their Communications Cloud. Super social media PR (pr.cision-- get it?). Anyway, PR Newswire is a website of nothing but press releases, and it's kind of awesome. This news release has been run in many outlets. I'm digressing a bit here, but we need to remember through this whole trip that this is about a company that wants to sell a product.

Tucked in amidst the press releases is some "news provided by Squirrel AI Learning" about the AI Summit held about a month ago in New York, featuring over 350 experts, professors and business executives from the AI field discussing the technology and the commercial applications thereof. Why "squirrel" (there is such a thing as anti-squirrel AI)? Because squirrel is the symbol for "agility, diligence and management." Founded in 2014, Yixue Squirrel AI is headquartered in Shanghai and claims to be "the first K12 EdTech company which specializes in intelligent adaptive education in China" and, just so you know, they're "the market leader." They have opened "over 700 schools and have 3000 teaching staff in more than 100 cities." They have $44 million of investor money, an education lab in New York, and an AI lab in Silicon Valley run jointly with Stanford Research Institute, and they've sponsored some iNACOL stuff as well.

Do some quick math (3,000 teaching staff spread across more than 700 schools) and you'll realize that those figures work out to about 4.28 teachers per school. How do they do that? You can guess:

Like the AlphaGo simulated Go master, the AI system simulated human teacher giving the student a personalized learning plan and one-on-one tutoring, with 5 to 10 times higher efficiency than traditional instructions. YiXue Squirrel AI offers the high-quality after-school courses in subjects such as Chinese, Math, English, Physics, and Chemistry. Powered by its proprietary AI-driven adaptive engine and custom-built courseware, YiXue’s “Squirrel AI” platform provides students with a supervised adaptive learning experience that has been proven to improve both student efficacy and engagement across YiXue’s online learning platform and offline learning centers. 

Simulated human teacher? Great.

There's very little about Squirrel on the English-speaking web that the company didn't put there themselves, and little of that dates to before 2018. But what they have to say about themselves is not shy. You can watch founder Derek Haoyang Li deliver his speech at the AI summit in December, in which he says, among other things, that his company's AI tutors outperformed even the best human teachers.

Squirrel's press release conjures up some of the most grandiose language I've seen for pushing new education ideas. You may think the stakes in education AI are just better education for students. Shows what you know:

To some extent, education must keep up with the development of AI technology, so as to ensure the historical status of mankind at the top of civilization.

I've seen education ideas marketed by invoking a nation's need to stay ahead of other nations. This is the first time I've ever seen someone suggest that our primacy as a species rests on how quickly we buy their product. Sadly, there is no suggestion about which species might replace mankind at the top of civilization.

Derek Li's keynote address was entitled "How To Give Every Child Adaptive Education." Spoiler Alert: the answer is not "by putting way more than 4.28 human teachers in every school." After doing a quick "AI so far" bit, we go big once again:

The upcoming AI era, like the industrial revolution, will be one of the epochal events that can change the course of human history.

And then he ticks off the reasons. As is often the case, they say as much about the speaker's understanding (or lack thereof) of education as they say about his ability to oversell AI.

First, AI is "know-it-all." Examples? It can read a lot of books quickly and prepare for a debate on any topic. Which suggests that we're unclear on the difference between reading something and understanding it. But hey-- here's an explanation of education you might not have encountered before. Here's how the "know-it-all" factor applies to education:

In terms of education, AI can break down and master all the nanoscale knowledge points, have a good grasp of the countless correlations between knowledge points, and provide "one-on-one tutoring" for thousands of students, which is an almost impossible task for human teachers.

So that's education. A bunch of knowledge points and a map that connects them. Basically just building an HTML encyclopedia. I assume that somewhere in there will be the explanation of the difference between knowledge and wisdom. I mean, I am a big believer in the value of rich content, but that's because you need to know things in order to understand things-- but understanding and applying and using your insight to be more fully yourself, more fully human in the world-- that's the point. Not to just pile up a bunch of nanoscale knowledge points.

But this is one of the huge dangers of tech in education-- redefining education so that it fits what the software can do. Can AI cope with a thoughtful synthesis of understanding and insight expressed through a long, in-depth piece of writing? It cannot-- and so higher level operations simply disappear from the definition of education.

Second, "it can tell big stories from small things." We're talking about amassing huge amounts of detailed data here, like the way that "Netflix can detect the cognition and preference of every viewer from the data of every frame, and then find the logic of making and reshaping a film from the subtle differences."

I don't have enough space to plumb the depths of why this is a bad idea, but let me just note a few things. The way that food engineering has produced unhealthy food that trips our hardwiring. The way that social media have essentially unleashed psy-ops on users to create a virtual dependence. It may be that software can figure out how to push our buttons, but that's simply a process, and a process that has its own natural tendency to NOT play to the angels of our better nature. Squirrel plugs its own ability to figure out which knowledge points students have or have not mastered, but that completely skips the question of the validity of the whole knowledge points model and the huge HUGE question of who is going to decide what the knowledge points are, how important each one is, how it should be sequenced, how its mastery can be measured-- and whether or not all those questions should be answered by tech companies rather than educators.

Third, it has infinite computing power. Yes, I'm sure that's not hyperbolic PR baloney at all. What he seems to mean is that the computer can do many things very swiftly, and definitely faster than humans. It can play chess, and so it can "know the user profile of each student" in detail by carefully labeling each question. This is certainly a clerical job that computers can do quickly, but we need to be more careful in throwing around words-- a piece of software does not "know" a student any more than an old school filing cabinet filled with folders of student work "knows" those students.

Fourth, it's self-evolving. "Sunshine gathers, and light flows into my dreams," wrote Microsoft's Xiaoice. He calls that "a line of verse" and then says "although it has been criticized by many experts, it's better than 90% of what humans can do," and then I say, "Holy scheikies! Listen, Mr. Hard Data Guy, let me see your criteria for deciding that a line of poetry is 'better,' and then the methodology that allowed you to know what 90% of humans are capable of writing?!" But this is ed tech at its ballsiest-- experts in the field we're disrupting don't know a damn thing; poetry is just some pretty words strung together, and our word-stringing software is the bestest. He's not done bragging:

In the field of education, the evolutionary power of Squirrel AI is also amazing. It has skills that human teachers do not have. It can automatically help students make up their knowledge gaps and stimulate students' creativity and imagination.

Software now knows the secret of creativity and imagination. SMH. "Self-evolving" has been a moving target for programmers, but most often it means that the system is just collecting more knowledge points and examples, which means it "learns" what is put into it, which is why facial recognition software works better with white guys and a previous AI project, IBM's Watson, didn't express poetry so much as it kept using the word "bullshit" inappropriately. (Can the word "bullshit" be used appropriately? If you are still reading this article, you don't really have to ask.) All the infinite computing power we've added hasn't changed the oldest rule of computing-- Garbage In, Garbage Out.

Okay, take a deep breath. Here's a picture of cute puppies to help soothe you. Because we aren't done-- Squirrel is now going to tout its breakthroughs in "AI+ Education."

So what can they do? Let's introduce the section:

At present, auxiliary AI tools, such as pronunciation assessment and emotion recognition, cannot get involved in children's cognitive learning process. What parents are most concerned about is how a product can help their children to learn. AI+ education will eventually return to teaching and learning. Therefore, AI adaptive education is the best application scenario of AI+ education. Squirrel AI Learning has been in the forefront in this field in China.

Huh. I'm considering for the first time the possibility that this press release was written by an AI.

Oh hell-- Knewton is connected to these guys, via Richard Tong, who used to be the Asia Pacific tech director for Knewton. That's not good. Squirrel has also poached from RealizeIT and ALEKS. So that's who they've enlisted so far (spoiler alert: there's a surprise coming). What have they done?

First, super-nano knowledge point splitting. They broke a junior high subject into 30,000 knowledge points. And I'm thinking first, that's all? And I'm thinking second, so much for my question about whether we'd have tech guys making educational decisions.

Second, learning ability and learning method splitting to come up with "definable, measurable and teachable ability" theory. They've come up with 500, like learning by analogy or inference, which is apparently (honestly, this is hard to read) something Squirrel's AI tries to teach. And this-- "They also pay attention to the cultivation and promotion of students' creative ability." Says the guy who thinks "Sunshine gathers, and light flows into my dreams" is better than what 90% of humans could write. 

Third, well... "they initiated the correlation probability of uncorrelated knowledge points. Squirrel AI builds correlations between knowledge points and uses information theory to test and teach students efficiently." I'm pretty sure he's just making shit up now.

Fourth, they came up with the "concept of map reconstruction based on mistakes." Squirrel AI will personalize learning plans for students "on the basis of finding the real reason for making mistakes," which is a hugely impressive feat of mind reading. I can believe they could handle figuring out which step in a math problem a student flubbed. I'm less confident they could, say, spot the source of flawed reasoning in an analysis of wheel imagery in Macbeth. And I'd be really curious, when it comes to writing or the humanities, how the software distinguishes between "mistakes" and the "creative ability" they're careful to cultivate.

Fifth, they initiated a versus model. Somehow, by using Bayes' theorem, they "simulate learning and competition between students and teachers." Only simulate? And Bayes' theorem is about calculating probabilities-- is this supposed to be about beating the odds? (I'll sketch below what that kind of Bayesian bookkeeping probably amounts to.)

With these technologies, Squirrel AI can accurately locate the knowledge points of each student and continuously push the knowledge points most suitable to their intellectual development and learning ability according to each student's knowledge point mastery in the process of dynamic learning, so as to establish a personalized learning path for each student and enable them to learn the most knowledge points in the shortest time, putting an end to the "cramming model" and "excessive assignments tactic" in traditional education.

Maybe I'm not fully grasping the awesome here, but I'd swear that basically Squirrel has broken down education into a big list of competencies, and the computer keeps track of which ones the student hasn't checked off yet, and gives the student material to help fill in the blanks on the big list. Am I missing something?
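
Just so we're clear about what I think is under the hood, here's a toy sketch-- my own guess at the mechanics, in Python, with invented numbers and subject names, not anything Squirrel has published-- of a knowledge-point checklist with a Bayes-style mastery update:

```python
# A caricature of an "adaptive" engine: a list of knowledge points,
# a mastery probability for each, and a Bayesian update after every answer.

knowledge_points = {
    # point: estimated probability the student has mastered it (made-up starting values)
    "linear_equations": 0.5,
    "slope_intercept": 0.5,
    "graphing_lines": 0.5,
}

# Assumed answer behavior -- my numbers, purely illustrative:
P_CORRECT_IF_MASTERED = 0.9       # masters still slip sometimes
P_CORRECT_IF_NOT_MASTERED = 0.25  # non-masters sometimes guess right

def update_mastery(prior, answered_correctly):
    """Bayes' theorem: P(mastered | answer) is proportional to P(answer | mastered) * P(mastered)."""
    if answered_correctly:
        like_m, like_not = P_CORRECT_IF_MASTERED, P_CORRECT_IF_NOT_MASTERED
    else:
        like_m, like_not = 1 - P_CORRECT_IF_MASTERED, 1 - P_CORRECT_IF_NOT_MASTERED
    numerator = like_m * prior
    return numerator / (numerator + like_not * (1 - prior))

def next_item(points):
    # "Personalized learning path": serve up whichever point looks weakest.
    return min(points, key=points.get)

# Student misses a slope question, then gets a graphing question right.
knowledge_points["slope_intercept"] = update_mastery(knowledge_points["slope_intercept"], False)
knowledge_points["graphing_lines"] = update_mastery(knowledge_points["graphing_lines"], True)
print(next_item(knowledge_points))  # the box the student hasn't checked off yet
```

A checklist with a probability attached to each box. That's the level of math we're talking about: bookkeeping, not teaching.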

Li saved a big announcement for late in the speech-- just in case you doubted Squirrel's clout, they have just hired Tom Mitchell, the dean of Carnegie Mellon University School of Computer Science, as their Chief AI Officer. So while you're processing the rest of this, you might also want to think about how thoroughly China is grabbing the reins on this emerging tech biz. Mitchell will lead a bunch of teams conducting "basic AI research in the field of intelligent adaptive education, as well as the development and application of related products." Because products.

To sum up its own awesomeness, Squirrel ends with

In addition, Squirrel AI Learning not only has mastered the world's most advanced technology, but also has greatly expanded its online + offline education retail business model nationwide. Online, Squirrel AI Learning gets traffic. Students receive one-on-one tutoring online. Different from other education enterprises, 70% courses of Squirrel AI Learning are taught and lectured by AI teachers. Human teachers are responsible for the remaining 30% of teaching for monitoring and emotional help. Offline, Squirrel AI Learning opens physical learning centers in the form of franchised chain centers, cooperative schools and self-run centers in various places. 

They've mastered the world's most advanced technology. Nothing left to work on there, because it is mastered, baby. And in case you missed this before, let me point it out again-- "70% courses...are taught and lectured by AI teachers." Also, the retail end of this business is going great.

Scared yet?

In fact, Squirrel AI Learning's new education retail business model is also a reform of the traditional education model. On the one hand, Squirrel AI Learning not only has changed the traditional model of teaching by teachers, replacing it with teaching by AI, but also has realized intelligent management of students' learning. In the past, every student's data was opaque. Now through the Internet and AI, all students' learning process data and teaching data are collected, to provide better quality services for students. Transparent data management has changed the traditional offline education model.

No schools. No teachers. Just a highly lucrative batch of software and a mountain of Big Data on each child. And they are working on "testing brain wave patterns." Next, says Derek Li, man-god standing astride the great computerized colossus that can do everything and rule us all, "Squirrel AI will become a super AI teaching robot integrating personalized learning, dynamic learning objective management, human-computer dialogue, emotion and brain wave monitoring, to provide every student with high quality education and teaching services."

Let's hope that this misguided dystopic vision is just as bullshitty as it sounds. These are people who think a large hard drive stuffed with data points has been educated, that poetry has nothing to do with human experience, and that education should be chopped down to fit the limitations of computer software. And these are people without an ounce of humility, absolutely confident that they are correct to commandeer and re-create an entire sector of human endeavor so they can make a buck. They have failed to answer two critical questions-- what can they do, really, and should they actually attempt to do it?

Wednesday, January 22, 2025

Against AI Theft

Among the many reasons to give Artificial Intelligence some real side-eye is the business model that rests entirely on plagiarism-- stealing the works of human creators to "train" the systems. Now a new paper attacks a defense of the AI cyber-theft machines.

"Generative AI's Illusory Case for Fair Use" comes from Jacqueline Charlesworth (who appears to be a real person, a necessary check that we all need to do now whenever we come across scholarly work because AI is cranking out sludge swiftly). Charlesworth was a general counsel of the US Copyright Office and specializes in copyright litigation


The folks hoping to make bank on AI insist that piracy is not their business model, and one of their favorite arguments to hide behind is Fair Use. Teachers are familiar with Fair Use rules, which tell us that we can show movies if they are being used for legitimate teaching stuff but not for entertainment. 

But as Charlesworth explains it, the Big Boys of AI argue that while the programs are copying the works used for training, the AI only "learns" uncopyrightable information about the works.

Once trained, they say, the model does not comprise or make use of the content of the training works. As such, they contend, the copying is a fair use under U.S. law.

That, says Charlesworth, is bunk.

The 42-page paper combines hard-to-understand AI stuff with hard-to-understand law stuff. But it includes lots of useful insights and illustrations of AI's lack of smartitude. Charlesworth is a clear and incisive writer, and she dismantles the defense used by Big AI companies pretty thoroughly.

Despite wide employment of anthropomorphic terms to describe their behavior, AI machines do not learn or reason as humans do. They do not “know” anything independently of the works on which they are trained, so their output is a function of the copied materials. Large language models, or LLMs, are trained by breaking textual works down into small segments, or “tokens” (typically individual words or parts of words) and converting the tokens into vectors—numerical representations of the tokens and where they appear in relation to other tokens in the text. The training works thus do not disappear, as claimed, but are encoded, token by token, into the model and relied upon to generate output.
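
To make that a little more concrete, here's a toy illustration-- my own simplification in Python, nothing resembling any actual model's training code-- of the token-to-vector step she describes:

```python
import random

text = "The quick brown fox jumps over the lazy dog"

# Step 1: break the work into tokens (real models use sub-word pieces;
# whole words are close enough for illustration).
tokens = text.lower().split()

# Step 2: give each distinct token a numerical vector. Real models learn
# these values during training; random numbers stand in here.
random.seed(0)
vocab = {tok: [round(random.uniform(-1, 1), 2) for _ in range(4)] for tok in set(tokens)}

# The original text hasn't vanished -- it's now a sequence of vectors,
# which is Charlesworth's point: encoded, not forgotten.
encoded = [vocab[tok] for tok in tokens]
print(encoded[0])  # the vector standing in for "the"
```

The point being: the words don't evaporate; they get converted into numbers the model keeps right on using.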

Furthermore, the earlier cases don't fit the current situation as far as the business aspects go:

The exploitation of copied works for their intrinsic expressive value sharply distinguishes AI copying from that at issue in the technological fair use cases relied upon by AI’s fair use advocates. In these earlier cases, the determination of fair use turned on the fact that the alleged infringer was not seeking to capitalize on expressive content—exactly the opposite of generative AI.

Charlesworth also notes that in the end, these companies fall back on the claim of their "overwhelming need to ingest massive amounts of copyrighted material without permission from or payment to rightsholders." In other words, "Please let us steal this stuff because we really, really need to steal this stuff to make a big mountain of money."

Charlesworth does a good job of puncturing the attempts to anthropomorphize AI, when, in fact, AI is not "smart" at all. 

Unlike humans, AI models “do not possess the ability to perform accurately in situations not encountered in their training.” They “recite rather than imagine.” A group of AI researchers has shown, for instance, that a model trained on materials that say “A is B” does not reason from that knowledge, as a human would, to produce output that states the reverse, that B is A. To borrow one of the researchers’ examples, a model trained on materials that say Valentina Tereshkova was the first woman to travel in space may respond to the query, “Who was Valentina Tereshkova?” with “The first woman to travel in space.” But asked, “Who was the first woman to travel in space?,” it is unable to come up with the answer. Based on experiments in this area, the research team concluded that large language models suffer from “a basic inability to generalize beyond the training data.”

Charlesworth gets into another area-- the ability of AI to reconstruct the data it was trained on. One of her examples shows up in the New York Times lawsuit against OpenAI, in which, with just a little prompting, ChatGPT was able to "regurgitate" nine paragraphs of a NYT article verbatim. This ability isn't one we often see demonstrated (certainly it is not in OpenAI's interest to show it off), but it certainly creates a problem for the Fair Use argument. They may not have a copy of the copyrighted work stored, but they can pull one up any time they want.

And she notes that the cases cited in defense are essentially different:

Pointing to a handful of technology-driven fair use cases, AI companies and their advocates claim that large-scale reproduction of copyrighted works to develop and populate AI systems constitutes a fair use of those works. But Google Books, HathiTrust, Sega and other key precedents relied upon by AI companies to defend their unlicensed copying—mainly Kelly v. Arriba Soft Corp., Perfect 10, Inc. v. Amazon.com, Inc., A.V. v. iParadigms, LLC (“iParadigms”), Sony Computer Entertainment, Inc. v. Connectix Corp. (“Sony Computer”) and Google, LLC v. Oracle America, Inc. (“Oracle”)—are all in a different category with respect to fair use. That is because these cases were concerned with functional rather than expressive uses of copied works. The copying challenged in each was to enable a technical capability such as search functionality or software interoperability. By contrast, copying by AI companies serves to enable exploitation of protected expression.

There's lots more, and her 42 pages include 237 footnotes. It's not a light read. But it is a powerful argument against the wholesale plagiarism fueling the AI revolution. It remains for the courts to decide just how convincing the argument is. But if you're trying to bone up on this stuff, this article is a useful read.


Wednesday, April 2, 2025

Where Does AI Fit In The Writing Process

Pitches and articles keep crossing my desk that argue for including AI somewhere in the student writing process. My immediate gut-level reaction is similar to my reaction upon finding glass shards in my cheeseburger, but, you know, maybe my reaction is just a bit too visceral and I need to step back and think this through.

So let's do that. Let's consider the different steps in a student essay, both for teachers and students, and consider what AI could contribute.

The Prompt

The teacher will have to start the ball rolling with the actual assignment. This could be broad ("Write about a major theme in Hamlet") or very specific ("How does religious imagery enhance the development of ideas related to the role of women in early 20th century New Orleans in Kate Chopin's The Awakening?"). 

If you're teaching certain content, I am hoping that you know the material well enough to concoct questions about it that are A) worth answering and B) connected to your teaching goals for the unit. I have a hard time imagining a competent teacher who says, "Yeah, I've been teaching about the Industrial Revolution for six weeks, but damned if I know what anyone could write about it." 

I suppose you could try to use ChatGPT to bust some cobwebs loose or propose prompts that are beyond what you would ordinarily set. But evaluating responses to a prompt that you haven't thought through yourself? Also, will use of AI at this stage save a teacher any real amount of time?

Choosing the Response

Once the student has the prompt, they need to do their thinking and pre-writing to develop an idea about which to write. 

Lord knows that plenty of students get stuck right here, so maybe an AI-generated list of possible topics could break the logjam. But the very best way to get ready to write about an idea starts when you start developing the idea. 

The basic building block of an essay is an idea, and the right question to ask is "What do I have to say about this prompt?" Asking ChatGPT means you're starting with the question, "What could I write an essay about?" Which is a fine question if your goal is to create an artifact, a piece of writing performance. 

I'm not ruling out the possibility that a student might see a topic on a list and have a light bulb go off-- "OOoo! That sounds interesting to me!" But mostly I think asking LLMs to pick your topic is the first step down the wrong road, particularly when you consider the possibility that the AI will spit out an idea that is simply incorrect.

Research and Thinking

So the student has picked a topic and is now trying to gather materials and formulate ideas. Can AI help now?

Some folks think that AI is a great way to summarize sources and research. Maybe combine that with having AI serve as a search engine. "ChatGPT, find me sources about symbiosis in water-dwelling creatures." The problem is that AI is bad at all those things. Its summarizing abilities are absolutely unreliable and it is not a good search engine, both because it tends to make shit up and because its training data is probably not up to date.

But here's the thing about the thinking part of preparing to write. If you are writing for real, and not just filling in some version of a five paragraph template, you have to think about the ideas and their component parts and how they relate, because that is where the form and organization of your essay comes from. 

Form follows function. If you start with five blank paragraphs and then proceed to ask "What can I put in this paragraph?" you get a mediocre-at-best artifact that can be used for generating a grade. But if you want to communicate ideas to other actual humans, you have to figure out what you want to say first, and that will lead you straight to How To Say It. 

So letting AI do the thinking part is a terrible idea. Not just because it produces a pointless artifact, but because the whole thinking and organizing part is a critical element of the assignment. It exercises exactly the mental muscles that a writing assignment is supposed to build. In the very best assignments, this stage is where the synthesis of learning occurs, where the student really grasps understanding and locks it in place. 

So many writing problems are really thinking problems-- you're not sure how to say it because you're not sure what to say. And every problem encountered is an opportunity. Every point of friction is the place where learning occurs.

Organization

See above. If you have really done the thinking part, you can organize the elements of the paper faster and better than the AI anyway. 

Drafting

You've got a head full of ideas, sorted and organized and placed in a structure that makes sense. Now you just have to put them into words and sentences and paragraphs. Well, maybe not "just." This composing stage is the other major point of the whole assignment-- how do we take the thoughts into our heads and turn them into sequences of words that communicate across the gulf between separate human beings? That's a hell of a different challenge than "how does one string together words to fill up a page in a way that will collect grade tokens?" 

And if you've done all the thinking part, what does tagging in AI do for you anyway? You know better than the AI what exactly you have in mind, and by the time you've explained all that in your ChatGPT prompt box, you might as well have just written the essay yourself.

I have seen the argument--from actual teachers-- that having students use AI to create a rough draft is a swell idea. Then the student can just "edit" the AI product-- just fix the mistakes, organize things more in line with what you were thinking, maybe add a little voice here and there. 

But if you haven't done the thinking part, how can you edit? If you don't know what the essay is intended to say--or if, in fact, it came from a device that cannot form intent-- how can you judge how well it is working?

Proof and edit

The AI can't tell you how well you communicated what you intended to communicate because, of course, it has no grasp of your intent. That said, this is a step where I can imagine some useful computerized analysis, though whether any of it rises to the level of AI is debatable.

I used to have my students do some analysis of their own writing to illuminate and become more conscious of their own writing patterns. Some classics like counting the forms of "be" in the essay (shows if you have a love for passive or weak verbs). Count the number of words per sentence. Do a grammatical analysis of the first four words of every sentence. All data points that can help a writer see and then try to break certain unconscious habits. Students can do this by hand; computers could do it faster, and that would be okay.
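
For the record, none of that requires "intelligence." Here's a rough sketch in Python of that kind of clerical self-audit (the word list and sample sentences are mine, purely for illustration):

```python
import re

BE_FORMS = {"be", "am", "is", "are", "was", "were", "been", "being"}

def analyze(text):
    # Split into rough sentences on terminal punctuation.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())

    # Self-audit #1: how often do forms of "be" show up?
    be_count = sum(1 for w in words if w in BE_FORMS)

    # Self-audit #2: average words per sentence.
    lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    avg_len = sum(lengths) / len(lengths) if lengths else 0

    # Self-audit #3: the first four words of every sentence, so a writer
    # can spot repetitive openings (the grammatical labeling is still
    # the human's job).
    openers = [" ".join(s.split()[:4]) for s in sentences]

    return {
        "be_forms": be_count,
        "avg_words_per_sentence": round(avg_len, 1),
        "sentence_openers": openers,
    }

if __name__ == "__main__":
    sample = ("The essay was written by the student. It is being revised. "
              "The thesis was considered to be weak by the teacher.")
    print(analyze(sample))
```

It's counting. A student with a pencil can do it; a computer just does it faster.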

The AI could be played with for some other uses. Ask the AI to summarize your draft, to see if you seem to have said what you meant to say. I suppose students could ask AI for editing suggestions, but only if we all clearly understand that many of those suggestions are going to be crappy. I've seen suggestions like having students take the human copy and the edited-by-AI copy and perform a critical comparison, and that's not a terrible assignment, though I would hope that the outcome would be the realization that human editing is better. 

I'm also willing to let my AI guard down here because decades of classroom experience taught me that students would, generally speaking, rather listen to their grandparents declaim loudly about the deficiencies of Kids These Days than do meaningful proofreading of their own writing. So if playing editing games with AI can break down that barrier at all, I can live with it. But so many pitfalls; for instance, the students who comply by writing the most half-assed rough draft ever and just letting ChatGPT finish the job. 

Final Draft

Another point at which, if you've done all the work so far, AI won't save you any time or effort. On the other hand, if this is the main "human in the loop" moment in your process, you probably lack the tools to make any meaningful final draft decisions.

Assessing the Essay

As we have noted here at the institute many, many times over the years, computer scoring of essays is the self-driving car of the academic world. It is always just around the corner, and it never, ever arrives. Nor are there any signs that it is about to. 

No responsible school system (or state testing system) should use computers to assess human writing. Computers, including AI programs, can't do it well for a variety of reasons, but let's leave it at "They do not read in any meaningful sense of the word." They can judge whether the string of words is a probable one. They can check for some grammar and usage errors (but they will get much of that wrong). They can determine if the student has wandered too far from the sort of boring mid sludge that AI dumps every second onto the internet. And they can raise the philosophical question, "Why should students make a good faith attempt to write something that no human is going to make a good faith attempt to read?"

Yes, a ton of marketing copy is being written (probably by AI) about how this will streamline teacher work and make it quicker and more efficient and even more fair (based on the imaginary notion that computers are impartial and objective). The folks peddling these lies are salivating at the dreams of speed and efficiency and especially all the teachers that can be fired and replaced with servers that don't demand raises and don't join unions and don't get all uppity with their bosses. 

But all the wishing in the world will not bring us effective computer assessment of student writing. It will just bring us closer to the magical moment when AI teachers generate an AI assignment, which students' AI then completes, to be fed into AI assessment programs. The AI curriculum is thereby completed in roughly eight and a half minutes, and no actual humans even have to get out of bed. What that gets us, other than wealthy, self-satisfied tech overlords, is not clear. 

Bottom Line

All of the above is doubly true if you are in a classroom where writing is used as an assessment of content knowledge. 

This is all going to seem like quibbling to people for whom having an artifact to exchange for grade tokens is the whole point of writing. But if we want to foster writing as a real, meaningful means of expression and communication, AI doesn't have much to offer the process. Call me an old fart, but I still haven't seen much of a use case for AI in the classroom when it comes to any sort of writing. 

What AI mostly promises is the classroom equivalent of having someone come to the weight room and do the exercises for you. Yeah, it's certainly easier than doing it yourself, but you can't be surprised that you aren't any stronger when your substitute is done.