From its title-- "How AI Destroys Institutions"-- this draft essay pulls no punches. It's heavily researched (166 footnotes) and plain in its language. I'm going to hit the highlights here, but I hope you'll be motivated to go read the entire work yourself.
The essay is from two Boston University law professors. Woodrow Hartzog focuses on privacy and technology law; Jessica Silbey teaches and writes about intellectual property and technology law (she also has a PhD in comparative literature--yay, humanities). Their forty-page draft essay breaks down neatly into sections. Let's go.
Institutions are society's superheroes
When we use the term “institutions,” we mean the commonly circulating norms and values covering a recognizable field of human action, such as medicine or education. Institutions form the invisible but essential backbone of social life through their familiar yet iterative and adaptable routines across wide populations in space and time.
These are really important because such "bundles of normative commitments and conventions" help to reduce "uncertainty while promoting human cooperation and efficacy of mission." In other words, they keep things flowing smoothly, particularly for people involved in moving a certain mission forward.
However, they note, "People both inside and outside an institution must believe in its mission and competency for it to remain durable and sustain legitimacy." Institutions also rely on expertise, which helps because it "values and promotes competence, innovativeness, and trustworthiness."
So, institutions really matter, and they depend on certain factors. And here our trouble begins.
The destructive affordances of AI
Hartzog and Silbey explain that they use AI to mean generative AI systems (chatbots), predictive AI (facial recognition), and automated decision AI (content moderation). These systems can tempt institutional folks by promising to be both fast and correct.
So surface-level use cases for AI in institutions exist. But digging deeper, things quickly fall apart. We are a long way from the ideal conditions to implement accountability guardrails for AI. Even well-intentioned information technology rules and protective frameworks are often watered down, corrupted, and distorted in environments where people face powerful incentives to make money or simply get the job done as fast as possible.
Perhaps if human nature were a little less vulnerable to the siren’s call of shortcuts, then AI could achieve the potential its creators envisioned for it. But that is not the world we live in. Short-term political and financial incentives amplify the worst aspects of AI systems, including domination of human will, abrogation of accountability, delegation of responsibility, and obfuscation of knowledge and control.
Despite AI's seductive lure, the authors point out, it "requires the pillaging of personal data and expression, and facilitates the displacement of mental and physical labor." But mostly it reproduces existing patterns, amplifies biases, and just generally pumps harmful slop into the information ecosystem, all while pretending to be both authoritative and objective.
And its faux-conscious, declarative, and confident prose hides normative judgments behind a Wizard-of-Oz-esque curtain that masks engineered calculations, all the while accelerating the reduction of the human experience to what can be quantified or expressed in a function statement.
What we end up with is the "outsourcing of human thought and relationships to algorithmic outputs." And that means that AI does some serious damage in three main ways.
First, AI undermines expertise
First, AI systems undermine and degrade institutional expertise. Because AI gives the illusion of accuracy and reliability, it encourages cognitive offloading and skill atrophy, and frustrates back-end labor required to repair AI’s mistakes and “hallucinations.”
This doesn't just substitute unreliable bot answers for the work of human experts; it also "denies the displaced person the ability to hone and refine their skills." We get this in education; if you have someone or something do your assignment for you, you don't develop the skills that would have come from doing the work yourself. Same thing in the workplace. Would you rather have a nurse who can say "I have seen this kind of problem a hundred times" or one who can say "I have referred this kind of problem to a medibot a hundred times"?
Hartzog and Silbey also remind us that AI systems can only look backwards; they are bound by pre-existing information. As Arvind Narayanan and Sayash Kapoor point out in AI Snake Oil, predictive AI won't work because the only way it can make good predictions is if nothing else changes. AI is your mother explaining to you how to get a job in today's market based on how she got her job thirty years ago, as if conditions have not changed since then.
AI may appear "hyper-competent," but the authors correctly point out that hallucinations are not a bug, but an inevitable feature of how these systems are designed. Remember, the "stochastic" in "stochastic parrot" means "randomly determined," a guess. When the guesses are correct, the humans in the institution lose skill and value; when the guess is wrong, the institution has to compensate for that failure.
AI short-circuits decisionmaking
Important moral decisions get sloughed off to AI, justified by the notion that AI systems are somehow objective and efficient and therefore not involved in making any moral choices.
To start, the decision to implement an AI system in an institution in any significant way is not just about efficiency. Technologies have a way of obscuring the fact that moral choices that should be made by humans have been outsourced to machines.
When your insurance company uses AI to approve or deny your claim, it is making a moral choice, and furthermore, it's making that choice based on rules that are hidden inside the black box of AI. Then, the authors note, "When AI systems obscure the rules of institutions, the legitimacy of those rules degrades."
The authors further argue that AI is incapable of "a willingness to learn, engage, critique, and express yourself even though you are vulnerable or might be wrong." Humans can stretch beyond what is known, make big jumps or wide connections. Those kinds of creative leaps are beyond AI, which gives us more of what is already out there.
The authors also argue that AI cannot challenge the status quo "because its voice has no weight." In other words, humans might speak up, confront management, or even resign loudly in protest, creating pressure for the institution to be better. Raise your hand if you think that this is exactly why some leaders think AI employees are an awesome idea. But the authors argue that "moral courage and insight" are "necessary for institutions to adapt and survive." One would hope.
AI isolates humans
Finally, AI systems isolate people by displacing opportunities for human connection and interpersonal growth. This deprives institutions of the necessary solidarity and space required for good faith debate and adaptability in light of constantly changing circumstances. AI displaces and degrades human-to-human relationships and—through its individualized engagement and sycophancy—erodes our capacity for reflection about and empathy towards other and different humans. If an institution isn't working out its roles and the rules that guide those roles, the rules that make the institution function start to waste away. Then "there is only institutional chaos or the rule of the powerful."
This strikes me as a drawback that people are really blind to. The consistent assumption in every single plan to have students taught by an AI bot is that those students will react to the bot as they would to a human teacher, that they will behave as if a real live teacher were in the room, and not, instead, simply throw out the rules about what it means to be a student in a classroom.
The institutions on AI's death row
Hartzog and Silbey offer DOGE as a prime example of an institution that rotted from AI dependence, but they see many areas that are susceptible.
For instance, if the rule of law is handed to AI, we've got trouble. The idea of enforcing rules is that enforcement makes the rules visible and therefore easier for everyone to follow. But when the rules are obscured or unclear or simply hidden in the black box of AI, nobody knows what the rules are or what we are supposed to do.
Imagine, they suggest, you get a notice that the IRS AI has determined that you owe $100,000 in back taxes. Nobody can tell you why, exactly, but they assume that the efficient and unbiased AI must have it right. Or a judge hits you with a fine far above the recommended range, based on an AI recommendation. Again, without explanation, but with the assumption of accuracy.
I'm imagining an AI that grades your student essay, but can't answer any of your questions about why you got that particular grade.
It's all much like having someone in charge of government who sets rules based on his own personal whims and quirks from day to day and offers no explanation except that it's what he wants and he will use power to force compliance. Imagine how much that would suck. AI is also an authoritarian bully, except that its mechanized nature allows folks to pretend that its rule is unbiased and accurate.
Hartzog and Silbey unsurprisingly also see trouble for higher education. AI taking over the cognitive load needed for learning. AI producing mediocre and homogenized content. AI shifting the questions researchers ask "from qualitative mysteries to quantifiable puzzles." If your main tool is an AI hammer, you are going to look only for nails that it will work on.
And then there's trust, emerging more and more as an AI issue in education. Can you trust your students' work? Can they trust yours as a teacher? And what does all this do to the human connections needed for education to work? More distrust means more vulnerability to outside authorities trying to control the institution.
Then there's journalism...
As AI slop (the cheap, automatic, and thoughtless content made possible by AI) contaminates our public discourse and companies jam AI features into all possible screens, few institutions are more vital to preserve than the free press.
Too much slop and junk, particularly when it devalues expertise and knowledge, leads to a "scarcity of attention" and a lessened ability to respond to misinformation and disinformation. Everyone trying to do journalism of any sort knows the problem-- how do you get anyone to actually pay attention to what you have to say? We suffer from a collective thirteenth clown problem-- if there are twelve clowns on stage frolicking about, you can jump on stage and start reciting Shakespeare, but to the audience, you'll just be the thirteenth clown.
Plus, the generation of mountains of slop means that AI is both generating and feeding on slop, and slop made out of slop is--well, not good.
Journalism is defined by its adaptive, responsive dialogue in the face of shifting social, political, and economic events and by its sensitivity to power. But AI systems are not adaptive in a way that is responsive to human complexity, and they are agnostic to power. AI systems are pattern matchers; they cannot discern or produce “news.”
Democracy and civic life
Hartzog and Silbey pull out Robert Putnam's Bowling Alone, a standard on my list of books everyone should read. One key concept necessary for a society to function is the idea of "generalized reciprocity: I'll do this for you without expecting anything specific back from you, in the confident expectation that someone else will do something for me down the road." Putnam wrote, "[a] society characterized by generalized reciprocity is more efficient than a distrustful society. . . . Trustworthiness lubricates social life." As people become isolated and withdraw from public life, trust disappears, and social capital along with it.
If we continue to embrace AI unabated, social capital and norms of reciprocity will abate, and our center—democracy and civil life—will not hold. Because AI systems undermine expertise, short-circuit decision-making, and isolate humans, they are the perfect machines to destroy social capital.
There is an irony in the AI industry's attempt to solve the "loneliness crisis" by offering chatbot companions-- which is looking more and more like a very bad idea. Nor does it seem helpful for society if everyone sits at home and has AI agents handle everything from shopping to email correspondence. Working stuff out with other humans requires social capital, and your handy AI agent cannot do that for you. And again-- every scenario in which an AI agent replaces a human assumes that the transaction will go on as if it still involved a human. You'll use AI to answer emails and, the assumption goes, people will respond to those emails as they would had you written them yourself and not, say, dismiss and ignore them because they did not come from a human. Meanwhile, how does one build empathy and reciprocity when two AIs are talking back and forth on your behalf?
The section ends by reporting on the techbro dream of a world in which AI runs everything (and they run AI), a new brand of technofascism. They quote Jill Lepore's NYT story from last fall:
More recently, Mr. Altman, for his part, pondered the idea of replacing a human president of the United States with an A.I. president. “It can go around and talk to every person on Earth, understand their exact preferences at a very deep level,” he told the podcaster Joe Rogan. “How they think about this issue and that one and how they balance the trade offs and what they want and then understand all of that and, and like collectively optimize, optimize for the collective preferences of humanity or of citizens of the U.S. That’s awesome.” Is that awesome? Replacing democratic elections with machines owned by corporations that operate by rules over which the people have no say? Isn’t that, in fact, tyranny?
Well, it's not tyranny from Altman's point of view. It's just him living with absolute freedom from anything that would impede his will or that would involve him actually dealing with meat widgets. Meanwhile, Oracle is shopping around AI to help run your local municipal government.
So, this paper
It's not a pretty or encouraging picture, but it is a thorough one and a compelling articulation of the argument against indiscriminate AI use in our institutions. I'm not sure how many people are really listening, but I recommend the essay as a worthwhile read. You can get to it here.
