Wednesday, April 2, 2025

Where Does AI Fit In The Writing Process

Pitches and articles keep crossing my desk that argue for including AI somewhere in the student writing process. My immediate gut-level reaction is similar to my reaction upon finding glass shards in my cheeseburger, but, you know, maybe my reaction is just a bit too visceral and I need to step back and think this through.

So let's do that. Let's consider the different steps in a student essay, both for teachers and students, and consider what AI could contribute.

The Prompt

The teacher will have to start the ball rolling with the actual assignment. This could be broad ("Write about a major theme in Hamlet") or very specific ("How does religious imagery enhance the development of ideas related to the role of women in early 20th century New Orleans in Kate Chopin's The Awakening?"). 

If you're teaching certain content, I am hoping that you know the material well enough to concoct questions about it that are A) worth answering and B) connected to your teaching goals for the unit. I have a hard time imagining a competent teacher who says, "Yeah, I've been teaching about the Industrial Revolution for six weeks, but damned if I know what anyone could write about it." 

I suppose you could try to use ChatGPT to bust some cobwebs loose or propose prompts that are beyond what you would ordinarily set. But evaluating responses to a prompt that you haven't thought through yourself? Also, will use of AI at this stage save a teacher any real amount of time?

Choosing the Response

Once the student has the prompt, they need to do their thinking and pre-writing to develop an idea about which to write. 

Lord knows that plenty of students get stuck right here, so maybe an AI-generated list of possible topics could break the logjam. But the very best way to get ready to write about an idea starts when you start developing the idea. 

The basic building block of an essay is an idea, and the right question to ask is "What do I have to say about this prompt?" Asking ChatGPT means you're starting with the question, "What could I write an essay about?" Which is a fine question if your goal is to create an artifact, a piece of writing performance. 

I'm not ruling out the possibility that a student could see a topic on a list and have a light bulb go off-- "OOoo! That sounds interesting to me!" But mostly I think asking LLMs to pick your topic is the first step down the wrong road, particularly when you consider the possibility that the AI will spit out an idea that is simply incorrect.

Research and Thinking

So the student has picked a topic and is now trying to gather materials and formulate ideas. Can AI help now?

Some folks think that AI is a great way to summarize sources and research. Maybe combine that with having AI serve as a search engine. "ChatGPT, find me sources about symbiosis in water-dwelling creatures." The problem is that AI is bad at all those things. Its summarizing abilities are absolutely unreliable and it is not a good search engine, both because it tends to make shit up and because its training data is probably not up to date.

But here's the thing about the thinking part of preparing to write. If you are writing for real, and not just filling in some version of a five paragraph template, you have to think about the idea and its component parts and how they relate, because that is where the form and organization of your essay comes from. 

Form follows function. If you start with five blank paragraphs and then proceed to ask "What can I put in this paragraph?" you get a mediocre-at-best artifact that can be used for generating a grade. But if you want to communicate ideas to other actual humans, you have to figure out what you want to say first, and that will lead you straight to How To Say It. 

So letting AI do the thinking part is a terrible idea. Not just because it produces a pointless artifact, but because the whole thinking and organizing part is a critical element of the assignment. It exercises exactly the mental muscles that a writing assignment is supposed to build. In the very best assignments, this stage is where the synthesis of learning occurs, where the student really grasps understanding and locks it in place. 

So many writing problems are really thinking problems-- you're not sure how to say it because you're not sure what to say. And every problem encountered is an opportunity. Every point of friction is the place where learning occurs.

Organization

See above. If you have really done the thinking part, you can organize the elements of the paper faster and better than the AI anyway. 

Drafting

You've got a head full of ideas, sorted and organized and placed in a structure that makes sense. Now you just have to put them into words and sentences and paragraphs. Well, maybe not "just." This composing stage is the other major point of the whole assignment-- how do we take the thoughts into our heads and turn them into sequences of words that communicate across the gulf between separate human beings? That's a hell of a different challenge than "how does one string together words to fill up a page in a way that will collect grade tokens?" 

And if you've done all the thinking part, what does tagging in AI do for you anyway? You know better than the AI what exactly you have in mind, and by the time you've explained all that in your ChatGPT prompt box, you might as well have just written the essay yourself.

I have seen the argument--from actual teachers-- that having students use AI to create a rough draft is a swell idea. Then the student can just "edit" the AI product-- just fix the mistakes, organize things more in line with what you were thinking, maybe add a little voice here and there. 

But if you haven't done the thinking part, how can you edit? If you don't know what the essay is intended to say--or if, in fact, it came from a device that cannot form intent-- how can you judge how well it is working?

Proof and edit

The AI can't tell you how well you communicated what you intended to communicate because, of course, it has no grasp of your intent. That said, this is a step at which I can imagine some useful computerized analysis, though whether it all rises to the level of AI is debatable.

I used to have my students do some analysis of their own writing to illuminate and become more conscious of their own writing patterns. Some classics like counting the forms of "be" in the essay (shows if you have a love for passive or weak verbs). Count the number of words per sentence. Do a grammatical analysis of the first four words of every sentence. All data points that can help a writer see and then try to break certain unconscious habits. Students can do this by hand; computers could do it faster, and that would be okay.
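For the curious, that kind of self-analysis is simple enough to sketch in a few lines of Python. This is my own rough illustration of the exercises described above, not anything from a real classroom tool; the function name, word list, and crude sentence-splitting regex are all my assumptions.

```python
import re

# Forms of "be" to count -- a heavy count suggests passive or weak verbs.
BE_FORMS = {"be", "am", "is", "are", "was", "were", "been", "being"}

def analyze(text):
    # Crude sentence split on ., !, ? -- good enough for a rough draft.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    be_count = sum(1 for w in words if w in BE_FORMS)
    avg_len = len(words) / len(sentences) if sentences else 0
    # First four words of each sentence, for the opener-pattern exercise.
    openers = [s.split()[:4] for s in sentences]
    return {"be_forms": be_count,
            "avg_sentence_length": avg_len,
            "openers": openers}
```

The point is not the tool but the data: a writer who sees that half their sentences lean on "is" and "was," or that every sentence opens the same way, can start breaking the habit.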

The AI could be played with for some other uses. Ask the AI to summarize your draft, to see if you seem to have said what you meant to say. I suppose students could ask AI for editing suggestions, but only if we all clearly understand that many of those suggestions are going to be crappy. I've seen suggestions like having students take the human copy and the edited-by-AI copy and perform a critical comparison, and that's not a terrible assignment, though I would hope that the outcome would be the realization that human editing is better. 

I'm also willing to let my AI guard down here because decades of classroom experience taught me that students would, generally speaking, rather listen to their grandparents declaim loudly about the deficiencies of Kids These Days than do meaningful proofreading of their own writing. So if playing editing games with AI can break down that barrier at all, I can live with it. But so many pitfalls; for instance, the students who comply by writing the most half-assed rough draft ever and just letting ChatGPT finish the job. 

Final Draft

Another point at which, if you've done all the work so far, AI won't save you any time or effort. On the other hand, if this is the main "human in the loop" moment in your process, you probably lack the tools to make any meaningful final draft decisions.

Assessing the Essay

As we have noted here at the institute many, many times over the years, computer scoring of essays is the self-driving car of the academic world. It is always just around the corner, and it never, ever arrives. Nor are there any signs that it is about to. 

No responsible school system (or state testing system) should use computers to assess human writing. Computers, including AI programs, can't do it well for a variety of reasons, but let's leave it at "They do not read in any meaningful sense of the word." They can judge whether the string of words is a probable one. They can check for some grammar and usage errors (but they will get much of that wrong). They can determine if the student has wandered too far from the sort of boring mid sludge that AI dumps every second onto the internet. And they can raise the philosophical question, "Why should students make a good faith attempt to write something that no human is going to make a good faith attempt to read?"

Yes, a ton of marketing copy is being written (probably by AI) about how this will streamline teacher work and make it quicker and more efficient and even more fair (based on the imaginary notion that computers are impartial and objective). The folks peddling these lies are salivating at the dreams of speed and efficiency and especially all the teachers that can be fired and replaced with servers that don't demand raises and don't join unions and don't get all uppity with their bosses. 

But all the wishing in the world will not bring us effective computer assessment of student writing. It will just bring us closer to the magical moment when AI teachers generate an AI assignment which student AI then generate to be fed into AI assessment programs. The AI curriculum is thereby completed in roughly eight and a half minutes, and no actual humans even have to get out of bed. What that gets us, other than wealthy, self-satisfied tech overlords, is not clear. 

Bottom Line

All of the above is doubly true if you are in a classroom where writing is used as an assessment of content knowledge. 

This is all going to seem like quibbling to people for whom having an artifact to exchange for grade tokens is the whole point of writing. But if we want to foster writing as a real meaningful means of expression and communication, AI doesn't have much to offer the process. Call me an old fart, but I still haven't seen much of a use case for AI in the classroom when it comes to any sort of writing. 

What AI mostly promises is the classroom equivalent of having someone come to the weight room and do the exercises for you. Yeah, it's certainly easier than doing it yourself, but you can't be surprised that you aren't any stronger when your substitute is done. 






Sunday, March 30, 2025

Ready For An AI Dean?

From the very first sentence, it's clear that this recent Inside Higher Ed post suffers from one more bad case of AI fabulism. 

In the era of artificial intelligence, one in which algorithms are rapidly guiding decisions from stock trading to medical diagnoses, it is time to entertain the possibility that one of the last bastions of human leadership—academic deanship—could be next for a digital overhaul.

AI fabulism and some precious notions about the place of deans in the universe of human leadership.

The author is Birce Tanriguden, a music education professor at the Hartt School at the University of Hartford, and this inquiry into what "AI could bring to the table that a human dean can't" is not her only foray into this topic. This month she also published in Women in Higher Education a piece entitled "The Artificially Intelligent Dean: Empowering Women and Dismantling Academic Sexism-- One Byte at a Time."

The WHE piece is academic-ish, complete with footnotes (though mostly about the sexism part). In that piece, Tanriguden sets out her possible solution:

AI holds the potential to be a transformative ally in promoting women into academic leadership roles. By analyzing career trajectories and institutional biases, our AI dean could become the ultimate career counselor, spotting those invisible banana peels of bias that often trip up women's progress, effectively countering the "accumulation of advantage" that so generously favors men.

Tanriguden notes the need to balance efficiency with empathy:

Despite the promise of AI, it's crucial to remember that an AI dean might excel in compiling tenure-track spreadsheets but could hardly inspire a faculty member with a heartfelt, "I believe in you." Academic leadership demands more than algorithmic precision; it requires a human touch that AI, with all its efficiency, simply cannot emulate.

I commend the author's turns of phrase, but I'm not sure about her grasp of AI. In fact, I'm not sure that current Large Language Models aren't actually better at faking a human touch than they are at arriving at efficient, trustworthy, data-based decisions.  

Back to the IHE piece, in which she lays out what she thinks AI brings to the deanship. Deaning, she argues, involves balancing all sorts of competing priorities while "mediating, apologizing and navigating red tape and political minefields."

The problem is that human deans are, well, human. As much as they may strive for balance, the delicate act of satisfying all parties often results in missteps. So why not replace them with an entity capable of making precise decisions, an entity unfazed by the endless barrage of emails, faculty complaints and budget crises?

The promise of AI lies in its ability to process vast amounts of data and reach quick conclusions based on evidence. 

Well, no. First, nothing being described here sounds like AI; this is just plain old programming, a "Dean In A Box" app. Which means it will process vast amounts of data and reach conclusions based on whatever the program tells it to do with that data, and that will be based on whatever the programmer wrote. Suppose the programmer writes the program so that complaints from male faculty members are weighted twice as much as those from female faculty. So much for AI dean's "lack of personal bias." 

But suppose she really means AI in the sense of software that uses a form of machine learning to analyze and pull out patterns in its training data. AI "learns" to trade stocks by being trained with a gazillion previous stock trades and situations, thereby allowing it to suss out patterns for when to buy or sell. Medical diagnostic AI is trained with a gazillion examples of medical histories of patients, allowing it to recognize how a new entry from a new patient fits into all those patterns. Chatbots like ChatGPT do words by "learning" from vast (stolen) samples of word use, which lead to a mountain of word pattern "rules" that allow them to determine what words are likely next.

All of these AI are trained on huge data sets of examples from the past.

What would you use to train AI Dean? What giant database would you use to train it, what collection of info about the behavior of various faculty and students and administrators and colleges and universities in the past? More importantly, who would label the data sets as "successful" or "failed"? Medical data sets come with simple metrics like "patient died from this" or "the patient lived fifty more years with no issues." Stock markets come with their own built in measure of success. Who is going to determine which parts of the Dean Training Dataset are successful or not?

This is one of the problems with chatbots. They have a whole lot of data about how language has been used, but no meta-data to cover things like "This is horrifying racist nazi stuff and is not a desirable use of language," and so we get the multiple examples of chatbots going off the rails.

Tanriguden tries to address some of this. Under the heading of how AI Dean would evaluate faculty:

With the ability to assess everything from research output to student evaluations in real time, AI could determine promotions, tenure decisions and budget allocations with a cold, calculated rationality. AI could evaluate a faculty member’s publication record by considering the quantity of peer-reviewed articles and the impact factor of the journals in which they are published.

Followed by some more details about those measures. Which raises another question. A human could do this-- if they wanted to. But if they don't want to, why would they want a computer program to do it?

The other point here is that once again, the person deciding what the algorithm is going to measure is the person whose biases are embedded in the system. 

Tanriguden also presents "constant availability, zero fatigue" as a selling point. She says deans have to do a lot of meetings, but (her real example) when, at 2 AM, the department chair needs a decision on a new course offering, AI Dean can provide an answer "devoid of any influence of sleep deprivation or emotional exhaustion." 

First, is that really a thing that happens? Because I'm just a K-12 guy, so maybe I just don't know. But that seems to me like something that would happen in an organization that has way bigger problems than any AI can solve. But second, once again, who decided what AI Dean's answer will be based upon? And if it's such a clear criterion that it can be codified in software, why can't even a sleepy human dean apply it?

Finally, she goes with "fairness and impartiality," dreaming of how AI Dean would apply rules "without regard to the political dynamics of a faculty meeting." Impartial? Sure (though we could argue about how desirable that is, really). Fair? Only as fair as it was written to be, which starts with the programmer's definition of "fair."

Tanriguden wraps up the IHE piece by once again acknowledging that leadership needs more than data, as well as "the issue of the academic heart." 

It is about understanding faculty’s nuanced human experiences, recognizing the emotional labor involved in teaching and responding to the unspoken concerns that shape institutional culture. Can an AI ever understand the deep-seated anxieties of a faculty member facing the pressure of publishing or perishing? Can it recognize when a colleague is silently struggling with mental health challenges that data points will never reveal?

In her conclusion she arrives at Hybrid Dean as an answer:

While the advantages of AI—efficiency, impartiality and data-driven decision-making—are tantalizing, they cannot fully replace the empathy, strategic insight and mentorship that human deans provide. The true challenge may lie not in replacing human deans but in reimagining their roles so that they can coexist with AI systems. Perhaps the future of academia involves a hybrid approach: an AI dean that handles (or at least guides) the operational decisions, leaving human deans to focus on the art of leadership and faculty development.

We're seeing a lot of this sort of knuckling under from education folks who seem resigned to the predicted inevitability of AI (as always in ed tech, predicted by people who have a stake in the biz). But the important part here is that I don't believe that AI can hold up its half of the bargain. In a job that involves management of humans and education and interpersonal stuff in an ever-changing environment, I don't believe AI can bring any of the contributions that she expects from it. 

ICYMI: One Week To Go Edition (3/30)

Next weekend the CMO and I will be off to the gathering of the Network for Public Education. It will be a nice road trip for us (the CMO is an excellent travel partner), and it is always invigorating to be around a whole lot of people who believe that public education is important and worth defending. If you're there, be sure to say hi!

In the meantime, keep sharing and amplifying and contacting your Congressperson regularly. These are not the days to sit quietly and hope for the best.

Here's this week's list.

Trump Says He’ll Fully Return Education to the States: Why That’s a Dangerous Idea

Jan Resseger points to some of what reporters have uncovered about the potential pitfalls of Trusk's "back to the states" plans. 

Coming to Life: Woodchippers and Community Builders

Nancy Flanagan on the moment in Michigan, and some encouragement to keep swinging.

Texas lawmakers advance bill that makes it a crime for teachers to assign "Catcher in the Rye"

Rebecca Crosby and Noel Sims at Popular Information cover the latest censorship bill in Texas

Trump and his allies are selling a story of dismal student performance dating back decades. Don't buy it

The regime is pushing its bad education ideas on the back of false claims about education failures. Jennifer Berkshire talks to Karin Chenoweth about the actual truth.

Embattled Primavera Online owner, who made millions while his charter school students failed, lays off staff but is poised for another major payout
 
In Arizona, the news reports on one more charter scamster filling his own pockets while shafting actual workers.

Are taxpayers footing the bill for out-of-state cyber school students? CASD investigating

In Pennsylvania, one school district discovers it is paying cyber tuition for students who don't even live there any more.

Tallahassee: Closing Title I Schools and opening Private Schools for the Privileged.

Profiteers at Charter Schools USA have decided there's more money to be made serving the elite, so goodbye Renaissance Academy and hello to a private school for "advanced and gifted learners." This story is important because it shows the shift from charter schools to private schools under universal vouchers. Sue Kingery Woltanski explains in this picture of some of the most naked money-grubbing to be seen--but not for the last time.


Research might suggest it could become addictive for some folks.

Banned Books, School Walkouts, Child Care Shortages: Military Families Confront Pentagon's Shifting Rules

At Military.com, a look at how the takeover of DOD schools by the regime is going, and how students are fighting back.

The Plagiarism Machine

Have you subscribed to the Audrey Watters newsletter yet? You should do that. And get the paid subscription for extra stuff. She looks this week at how AI is stealing content on an impossible scale.

Dismantling Public Education: No Laughing Matter!

Nancy Bailey on Trusk's dismantling of the education department.

EXCLUSIVE: AI Insider reveals secrets about artificial general intelligence

Ben Riley passes along some AI-skeptic wisdom from Yann LeCun (no, AI will not replace teachers).


John Warner contemplates being an author whose work has been thieved by AI developers. What is the future of writing?

I Teach Memoir Writing. Don’t Outsource Your Life Story to A.I.

Tom McAllister at the New York Times with an exceptional argument for writing by humans, not by bots.


Carlos Greaves at McSweeney's, reminding us that satire isn't always entirely funny.

I've got some Shirley Temple for you this week. Bert Lahr is fine, but when Bill Robinson comes down those steps...!



Also, join me at my newsletter. Free now and always.


Friday, March 28, 2025

And Now, Thought Crime

MAGA has dropped one level of pretense.

Up till now, the culture panic has named its target. BLM. CRT. DEI. They picked a particular policy to attack. They mischaracterized it, but they named it and attacked it.

But the latest White House edict for the whitewashing of history takes us one step further into Big Brother territory.

The edict is particularly focused on the Smithsonian Institution and the National Zoo (gotta watch out for those Marxist emus, but presumably the face-eating leopards are okay). The Vice President is directed "to remove improper ideology from such properties." 

Improper ideology.

What does that even mean? The edict (and if Trump wrote this himself, I'm a Marxist emu) enumerates assorted offenses such as saying the nation is inherently racist and that institutional racism is a thing and all sorts of stuff coming out of that National Museum of African American History. Also, they heard the upcoming Women's History Museum might include some trans persons. 

That is all lumped into that term "improper ideology." You know-- thinking and believing things that are doubleplus ungood. This on top of an ignorance of what history is and how it works. Insisting that it is not an ongoing discussion and debate about what happened and what it means, but is rather a polished hagiography of the only stories citizens should be allowed to tell about the nation, selected by a man who simultaneously calls the country a hellhole and the most perfect nation ever.

Also, the edict calls for the country to restore statues, monuments, etc that commemorate the treasonous losers of the Civil War (I'm paraphrasing a bit). Because their willingness to kill fellow citizens in order to preserve the "right" to own other human beings is important stuff. Also, there should be no statues that say anything that might "disparage" (I told you he didn't write this) any Americans, past or present (with a special mention of "persons living in colonial times").

Also, no monuments that "minimize the value of certain events or figures" and, of course, none that "include any other improper partisan ideology." Well, except their partisan ideology, but that goes without saying. It always goes without saying.

I don't know exactly why this shift in terminology has ramped up my alarm and displeasure sooo much. Lord knows they've been straddling this line for a while, but this feels like tipping fully over into the idea that The State will tell us what we are not to believe, or even mention or discuss. The nation cannot be great unless everyone in it believes the same things, and Dear Leader will tell us what those things are supposed to be. Colleges and universities will be required to teach only those things. 

How does K-12 education continue under these restrictions? How much will individual teachers be willing to risk? Hell, right now we're grabbing foreign grad students off the street for writing anything the State disapproves of and cuffing foreigners at the border for having mean social media posts on their phones. If we accept the notion that "life, liberty and the pursuit of happiness" are not rights we are born with, but rather rights that are given to us by the State, then it's a short step for the State to cancel those rights for citizens as well. And if we accept that just having an idea or expressing that idea makes you dangerous to the State, then we're in deep trouble. How do we teach students to function in that kind of society? What does education look like in a country where only certain ideas are approved and allowed by Dear Leader? 

Making certain actions illegal is one thing. But making the expression of certain "improper" ideas or beliefs illegal is quite another. Maybe the courts will stop this edict, too. That would be great and also appropriate, because Presidentially-declared thought crimes are not okayed in the Constitution. 

Oh, Bill. Hush.

The important thing to remember is that Bill Gates has never been right about education.

He invested heavily in a small schools initiative. It failed, because he doesn't understand how schools work.

He tried fixing teachers and playing with merit pay. He inflicted Common Core on the nation, because again, he doesn't understand how schools and teaching and education work. He has tried a variety of other smaller fixes, like throwing money at teacher professional development. He has made an almost annual event out of explaining that NOW he has things figured out (spoiler alert: he does not) and with the new tweaks, he will now transform education (spoiler alert: he does not).

I remind you of all this because nobody should be freaking out over the recent headlines that Gates has predicted that AI will replace teachers and doctors in ten years and humans will, just in general, be obsolete. The Economist called this prediction "alarming," and I suppose it might be if there were any reason to imagine that Gates can make such predictions any more accurately than the guy who takes care of my car at Jiffy Lube.

AI tutors will become broadly available and AI doctors will provide great medical advice in an era of "free intelligence." It's all “very profound and even a little bit scary — because it’s happening very quickly, and there is no upper bound,” Gates told Harvard professor Arthur Brooks (the happiness research guy).

Meanwhile, tech companies still won't make and market a printer that reliably does what it's supposed to at a reasonable price. 

Ed tech is always predicting terrific new futures, because FOMO is a powerful marketing force, and making your product seem inevitable is the tech version of an old used car sales technique (called "assume the sale," you just frame the conversation as if the decision to buy the car has already been made and now we're just dickering over terms).

I'm not here to predict the future of AI. I'm sure it will be good for some things ("Compare Mrs. Smith's knee MRI image to a million other images to diagnose what's going on") and terrible for others ("ChatGPT, please answer this email from Pat's parents for me"). 

I'm not sure what the future holds for AI in education, and I am sure that Bill Gates has no idea, either. I am also sure I know which one of us has a better understanding of education and schools and teaching (spoiler alert: not the one with all the money).

Ed tech bros are, like Bill, putting a lot of their bot bets on AI tutors--just sit a kid down with a screen set to "Teach the student grammar and usage" and let it rip. The thing is, we've been playing with education-via-screen for decades now, and it has still not proven itself or taken off. You may recall we ran a fairly large experiment in distance learning via screen back in 2020, and people really hated it-- so much that some of them are still bitching about it.

I'm not sure what is going to be "free" about the AI when it is so expensive to make, and I'm not sure how obsolete Gates imagines humans will be. It may be that he just dreams of a world in which he doesn't have to deal with any of those meat sack Lessers.

But the thing to remember is that the Gates track record in education is the story of a lot of money burned to accomplish nothing except choking a lot of people on the smoke from the fire. 

We will never escape our culture's tendency to assume that if someone has a bunch of money, they are expert at anything at which they wish to pretend to be expert. So people are always going to ask Gates what he thinks about education and its intersection with technology. I'd love to see the day when he says, "You know, I don't really know enough about education to make a comment on that," but until that day comes, we don't have to get excited about whatever he says. 


Wednesday, March 26, 2025

Losing The Federal Education Mission

The official assault on the Department of Education has begun.

If it seems like there's an awful lot more talking around this compared to, say, the gutting of the IRS or USAID, that may be because the regime doesn't have the legal authority to do the stuff that they are saying they want to do. The executive order is itself pretty weak sauce-- "the secretary is to investigate a way to form a way to do stuff provided it's legal." And that apparently involves sitting down in front of every camera and microphone and trying to make a case.

A major part of that involves some lies and misdirection. The Trumpian line that we spend more than anyone and get the worst results in the world is a lie. But it is also a misdirection, a misstatement about the department's actual purpose.

Likewise, it's a misstatement when the American Federation of Children characterizes the "failed public policy" of "the centralization of American education." But the Department wasn't meant--or built--to centralize US education. 

The department's job is not to make sure that American education is great. It is expressly forbidden to exert control over the what and how of education on the state and local level. 

The Trump administration is certainly not the first to ignore any of that. One of the legacies of No Child Left Behind is the idea that feds can grab the levers of power to attempt control of education in the states. Common Core was the ultimate pretzel-- "Don't call it a curriculum because we know that would be illegal, but we are going to do our damnedest to standardize the curriculum across every school in every state." For twenty-some years, various reformsters have tried to use the levers of power in DC to reconfigure US education as a centrally planned and coordinated operation (despite the fact that there is nowhere on the globe to point to that model as a successful one). And even supporters of the department are speaking as if the department is an essential hub for the mighty wheel of US education.

Trump is just working with the tools left lying around by the bipartisan supporters of modern education reform. 

So if the department's mission is not to create central organization and coordination, then what is it?

I'd argue that the roots of the department are not the Carter administration, but the civil rights movement of the sixties and the recognition that some states and communities, left to their own devices, would try to cheat some children out of the promise of public education. Derek Black's new book Dangerous Learning traces generations of attempts to keep Black children away from education. It was (roughly) the 1960s when the country started to grapple more effectively with the need for federal power to oppose those who would stand between children and their rights. 

The programs that now rest with the department came before the department itself, programs meant to level the playing field so that the poor (Title I) and the students with special needs (IDEA) would get full access. The creation of the department stepped up that effort and, importantly, added an education-specific Civil Rights office to the effort.

And it was all created to very carefully not usurp the power of the states. When Trump says he'll return control of education to the states, he's speaking bunk, because the control of education has always remained with the states-- for better or worse. 

The federal mission was to make the field more level, to provide guardrails to keep the states playing fair with all students, to make sure that students had the best possible access to the education they were promised. 

Trump has promised that none of the grant programs or college loan programs would be cut (and you can take a Trump promise to the... well, somewhere) but if all the money is still going to keep flowing, then what would the loss of the department really mean?

For one thing, the pieces that aren't there any more. The Office for Civil Rights is now gutted and repurposed to care only about violations of white christianist rights. The National Center for Education Statistics was the source of any data about how education was working out (much of it junk, some of it not). And there's the threat of turning grants into unregulated block grants, or withholding them from schools that dare to vaccinate or recognize diversity or keep naughty books in the library.

So the money will still flow, but the purpose will no longer be to level the playing field. It will not be about making sure every child gets the education they're entitled to-- or rather, it will rest on the MAGA foundation, the assumption that some people deserve less than others. 

That's what the loss of the department means-- a loss of a department that, however imperfectly, is supposed to protect the rights of students to an education, regardless of race, creed, zip code, special needs, or the disinterest and prejudice of a state or community. Has the department itself lost sight of that mission from time to time? Sure has. Have they always done a great job of pursuing that mission? Not at all. But if nobody at all is supposed to be pursuing that goal, what will that get us? 

AR: Attempting To Make Non-conforming Haircuts Illegal

The Arkansas state legislature is deeply worried about trans persons. Rep. Mary Bentley (R-73rd Dist) has been trying to make trans kids go away for years, as with her 2021 bill to protect teachers who use students' dead names or misgender them (that's the same year she pushed a bill to require the teaching of creationism in schools).

In 2023, Bentley successfully sponsored a bill that authorizes malpractice lawsuits against doctors who provide gender-affirming care for transgender youth. Now Bentley has proposed HB 1668, "The Vulnerable Youth Protection Act" which takes things a step or two further.

The bill authorizes lawsuits, and the language around the actual suing and collecting money part is long and complex-- complex enough to suspect that Bentley, whose work experience is running tableware manufacturer Bentley Plastics, might have had some help "writing" the bill. The part where it lists the forbidden activities is short, but raises the eyebrows.

The bill holds anyone who "knowingly causes or contributes to the social transitioning of a minor or the castration, sterilization, or mutilation of a minor" liable to the minor or their parents. The surgical part is no shocker-- I'm not sure you could find many doctors who would perform that surgery without parental consent, and certainly not in Arkansas (see 2023 law). But social transitioning? How does the bill define that?

"Social transitioning" means any act by which a minor adopts or espouses a gender identity that differs from the minor’s biological sex as determined by the sex organs, chromosomes, and endogenous profiles of the minor, including without limitation changes in clothing, pronouns, hairstyle, and name.

So a girl who wears "boy" jeans? A boy who wears his hair long? Is there an article of clothing that is so "male" that it's notably unusual to see a girl wearing it? I suppose that matters less because trans panic is more heavily weighted against male-to-female transition. But boy would I love to see a school's rules on what hair styles qualify as male or female. 

Also, parental consent doesn't make any difference. Rep. Nicole Clowney keyed on that, as reported by the Arkansas Times:

“Is there anything in the bill that addresses the parental consent piece?” Clowney asked. “Even if a parent says, ‘Please call my child by this pronoun or this name,’ it appears to me that anybody who follows the wishes of that parent … that they would be subject to the civil liability you propose here. Is that correct?”

“That is correct,” Bentley said. “I think that we’re just stating that social transitioning is excessively harmful to children and we want to change that in our state. We want to make sure that our children are no longer exposed to that danger.”

In other words, this is not a "parental rights" issue, but a "let's not have any Trans Stuff in our state" issue.  

In the hearing, an attorney from the Arkansas Attorney General's office observed that this was pretty much an indefensible violation of students' First Amendment rights, and the AG's office wouldn't be able to defend it. According to the Times, Bentley agreed to tweak the bill a bit, but we can already see where she wants to go with this.

A teacher who used the wrong pronoun or congratulated a student on their haircut could be liable for $10 million or more, and plaintiffs have 20 years to file a suit.

I'm never going to pretend that these issues are simple or easy, that it's not tricky for a school to look out for the interests and rights of both parents and students when those parents and students are in conflict. But I would suggest remembering two things-- trans persons are human beings and they are not disappearing. They have always existed, they will always exist, and, to repeat, they are actual human persons. 

I was in school with trans persons in the early seventies. I have had trans students in my classroom. They are human beings, deserving of the same decency and humanity as any other human. I know there are folks among us who insist on arguing from the premise that some people aren't really people and decency and humanity are not for everyone (and empathy is a weakness). I don't get why some people on the right, particularly many who call themselves Christians, are so desperately frightened/angry about trans persons, but I do know that no human problems are solved by treating some human beings as less-than-human. And when your fear leads to policing children's haircuts to fit your meager, narrow, brittle, fragile view of how humans should be, you are a menace to everyone around you. You have lost the plot. Arkansas, be better.