Sunday, March 22, 2026
ICYMI: Maple Syrup Edition (3/22)
Wednesday, March 18, 2026
PA: An AI Safety Bill
Disclosure of nonhuman status.--If a reasonable person interacting with an AI companion would be misled to believe the person is interacting with a human, an operator shall issue a clear and conspicuous notification indicating that the AI companion is artificially generated and not human.
"Reasonable person" is doing a hell of a lot of work here.
The bill would also require AI "operators" to "maintain and implement a protocol" to prevent their bots from producing content about suicidal ideation, suicide, or self-harm, or content that directly encourages the user to commit acts of violence. That protocol should include a referral to a suicide hotline or crisis text line if the user expresses thoughts of self-harm.
Even better, the bill would require that if "the operator knows or should have known" that the user is a minor, they must provide notification that the user is not interacting with a human being. They must also provide a "clear and conspicuous notification" at least once every three hours that the user should take a break and, again, that they are talking to a non-human bot. The AI should also be prevented from producing sexually explicit images or giving the minor instructions on sexually explicit conduct.
Bots also have to come with a cyber-label saying "this might not be suitable for minors."
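For what it's worth, none of this would be hard to build. Here's a minimal sketch, assuming Python and a simple per-message check; every function name, string, and threshold below is my own illustration, not language from the bill:

```python
# A hypothetical sketch of the checks the bill describes -- disclosure,
# crisis referral, and the extra duties owed to minors. All names,
# strings, and thresholds here are illustrative, not from the bill's text.
from datetime import datetime, timedelta

DISCLOSURE = "You are chatting with an AI companion, not a human."
BREAK_NOTICE = "Reminder: this is an AI. Consider taking a break."
CRISIS_REFERRAL = "If you are having thoughts of self-harm, call or text 988."
MINOR_LABEL = "This AI companion may not be suitable for minors."
BREAK_INTERVAL = timedelta(hours=3)          # the every-three-hours rule for minors
SELF_HARM_TERMS = {"suicide", "kill myself", "self-harm"}  # placeholder screening list

def notices_due(message: str, user_is_minor: bool,
                last_break_notice: datetime, now: datetime) -> list[str]:
    """Return the notifications an operator would owe the user this turn."""
    due = [DISCLOSURE]  # safest reading of the "reasonable person" test: always disclose
    if any(term in message.lower() for term in SELF_HARM_TERMS):
        due.append(CRISIS_REFERRAL)          # crisis hotline / text line referral
    if user_is_minor:
        due.append(MINOR_LABEL)              # the "cyber-label"
        if now - last_break_notice >= BREAK_INTERVAL:
            due.append(BREAK_NOTICE)         # take-a-break, you're-talking-to-a-bot reminder
    return due
```

Which is to say, the technical lift is trivial; the question is whether operators will bother unless someone makes them.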
The Attorney General gets to enforce this. The state can fine an operator up to $10,000 for each violation (on top of any other remedies provided by law). $10K is, of course, couch cushion money for most tech companies, but this whole law is a hell of a lot better than one more chorus of "Everyone better get their kids on AI before they are left behind in the awesome world of tomorrow that AI is going to launch any day now." Dragging them into court is the only thing that might get our tech overlords' attention, so it's encouraging to see legislatures showing a willingness to make that happen.
Sunday, March 15, 2026
ICYMI: Out Of The Office Edition (3/15)
Nebraska braces for latest private school funding, vouchers fight, now eyeing $3.5M
Thursday, March 12, 2026
Netflix Chief Ready To Help DFER Fix Education
Democrats for Education Reform (DFER) is delighted to announce that Reed Hastings, co-founder of Netflix, has joined their board, "bringing a disruptor's lens to education." That seems about right.
First, a reminder of who DFER really are. One of the key founders of DFER is Whitney Tilson, a big-time hedge fund manager (you can read more about him here). Long ago, Leonie Haimson had a great quote from the film version of Tilson's magnum opus about ed reform, "A Right Denied," and it's a dream of mine that every time somebody searches for DFER online, this quote comes up.
The real problem, politically, was not the Republican party, it was the Democratic party. So it dawned on us, over the course of six months or a year, that it had to be an inside job. The main obstacle to education reform was moving the Democratic party, and it had to be Democrats who did it, it had to be an inside job. So that was the thesis behind the organization. And the name – and the name was critical – we get a lot of flack for the name. You know, “Why are you Democrats for education reform? That’s very exclusionary. I mean, certainly there are Republicans in favor of education reform.” And we said, “We agree.” In fact, our natural allies, in many cases, are Republicans on this crusade, but the problem is not Republicans. We don’t need to convert the Republican party to our point of view…
DFER's mission has always been to convince Democrats that they should be backing ed reform ideas from the right. It's standard to find them trying hard to convince Democrats that it would be a winning strategy, like the recent NYT piece by their chief Jorge Elorza in which he tries to sell taxpayer-funded school vouchers.
Hastings, meanwhile, is a longtime fan of school choice programs. Hastings has been plenty active in the charter sector, managing to help push through the California law that not only did away with charter caps, but made it possible to run a chain of charters with just one (unelected) board. Unelected is how he likes them-- in 2014 he told the California School Boards Association in fairly clear terms that elected school boards were a scourge and should be done away with.
Hastings likes to note that way back in the day, he was a teacher. That was with the Peace Corps in Swaziland over 40 years ago. But he's been a busy edu-preneur for decades, and he certainly knows all the classic bits.
There's the whole "unchanged classroom" shtick. Hastings sees schools as being like the entertainment biz thirty years ago-- "a model built for a different era" and has often claimed that "the traditional classroom model—one teacher, 20-to-50 students, sage-on-a-stage—is ripe for reinvention." He declares "the schools of the future won't look like the schools of the past," which is his one accurate observation, though he could easily note that the schools of the present don't look like the schools of the past. Lord, they were ushering the sage off the stage back when I was in teacher school.
Paired with that is the claim that "Netflix replaced a one-size-fits-all broadcast model with something more personal and responsive," which is just a silly claim. In 1997, when Netflix launched, cable tv was achieving great new heights of variety. Hell, Fox News launched in 1996. Back then, boys and girls, cable provided actual variety before free market forces pushed cable channels to become barely distinguishable imitations of each other (you know, back when MTV played music and A&E stood for Arts and Entertainment, and there were two comedy channels). The broadcast model was already well and fully disrupted, and the only thing that Netflix disrupted was the practice of having to go to the store to rent DVDs.
So guess what Hastings thinks is the key to this new shift in education? Here's a hint-- as of last year, Hastings is on the board of Anthropic, the big AI company.
"AI is a once-in-a-thousand-year shift, and what happens in K-12 is at the center of it,” Hastings continued. “The schools that figure out how to combine individualized software with teachers focused on social-emotional development are going to unlock something we’ve never seen before."
Individualized computer instruction is definitely a thing we've seen before, though what we've seen is the many ways that it crashes and burns and fails to deliver its many promises. There is no reason to believe that the newest iteration of the giant plagiarism machines is likely to change that, no reason to believe that education delivered through a screen is somehow superior to education involving other humans, both as teachers and as co-students. Hastings believes AI can help make education more personal, which highlights how oxymoronic it is to propose personalization that is delivered by non-persons.
"He sees AI enabling a shift where teachers become more like coaches and build deep relationships with students."Wednesday, March 11, 2026
The Great Screen Debate
When kids hate learning because it’s boring, it will have far more damaging consequences than if they are playing a game that is helping them find learning more interesting
Sigh. No. First, you know who doesn't find something interesting just because it's pixels on a screen? People who have grown up in a world stuffed full of pixels on screens. Second, when you spend years around teenagers with phones, one thing you notice quickly is that a fascinating new app generally has a half-life of about four weeks. Culatta also trots out this old chestnut:
We do have to be really careful that we don’t actually end up harming kids by taking away tools that are really helpful for them for their future
Nope. No student is going to lose ground in reading or math or history or art or music because she didn't have access to EduBlart3000 on her screen.
And I myself once bought the idea that students could benefit from exposure to tech tools so that they were better able to use those tools in the future. I have changed my mind. First, the tools schools teach them to use now will be long gone in the future and second, we are well into the stage in which tech tools can be learned quickly.
Lawmakers across the country are scaring the crap out of tech companies by contemplating restrictions on screens in schools. That new wave yielded this hilarious quote to NBC from Keith Krueger, CEO of the Consortium for School Networking, an ed tech trade organization.
I think some well-intentioned policymakers trying to do something are rushing so quickly that they haven’t thought through the implications.
Ironic, given that the ed tech industry's motto has always been "Buy our stuff RIGHT NOW and don't pause to think through the implications."
Well, the implications of years of screens in classrooms are starting to catch up with us. Check out Jared Cooney Horvath's set of graphs showing that the much-lamented dip in test scores seems to line up with digital adoption. Endless teacher anecdotes of students having trouble focusing, paying attention, just plain sticking with something for more than five minutes. Increasing numbers of studies suggesting that screens have hurt learning-- and (horrors) news that ed tech companies aren't making mountains of money!
And yet, as Jennifer Berkshire points out, absolute amnesia about how we got here. Folks who cheerfully burbled about the promise of ed tech are now shocked-- shocked!!-- that screens have been allowed to dominate classrooms. Not a surprise-- as Audrey Watters has repeatedly pointed out over the years, the story of ed tech is the story of enthusiastic promises, joyous press coverage, and expensive failure, all wrapped in a blanket of sweet, sweet forgetfulness.
The amnesia would be funny if we weren't already being dragged into the next wave of ed tech, the one powered by "Artificial Intelligence," a marketing term designed to put a pretty, inevitable face on a morally bankrupt industry. "Come take a kick at this hot new ed tech idea! It's inevitable! It's awesome! This time it really will change everything!"
We're still getting back up from the last faked kickoff. Here's hoping we think twice before we fall for this again.
Sunday, March 8, 2026
ICYMI: The River Is Rising Edition (3/8)
Thursday, March 5, 2026
Teach For Awhile For America
Wendy Kopp, the woman who hatched Teach for America, popped up in The Atlantic with an odd reflection on "first jobs" and teaching, and, well, there's a lot of subtext to unpack. After "four decades trying to inspire young people... to work directly with low-income communities," Kopp has some thoughts.
She opens with the story of Jack, who was trying to decide whether or not to go the TFA route, and jumps from there to bigger ideas:
Policy makers and philanthropists aren’t particularly focused on first jobs. But these choices matter—and not only for the individuals beginning their careers. If we want to address society’s most deeply rooted challenges—poverty, polarization, environmental degradation, geopolitical conflict—we need to encourage young people to work on these issues early in their careers, so they can grow into leaders capable of solving them.
In other words, going into teaching as a "first job" doesn't really help anybody, but it gives TFA members the exposure to issues so that they can move on to leadership roles where they can actually accomplish something. You know-- real jobs where the real work gets done.
This is in line with the longtime criticism of TFA that it's for rich white kids from elite universities to get an "experience" being briefly exposed to the poors.
It also points to the less-acknowledged problem of TFA. Plenty has been said about TFA's disrespect for career teachers ("Step aside, Grandma, and let me show you how we smart Ivy Leaguers get the job done") and the absurd condescension of insisting that a top college kid can pretty much master the work in a five week training. But over time it has become clear that a wider danger of TFA is that it keeps producing a bunch of reformster amateur edu-preneurs who go into business and government claiming to have been "in teaching" because they spent two years in a classroom somewhere.
TFA has certainly produced some folks who became real teachers and embarked on real teaching careers-- which I guess would be a disappointment to Kopp, who was rooting for them to zip through their two-year first job so they could get on to important leaderly jobs of solving the world's problems.
Her story of Jack defies parody:
While teaching in Harlem, Jack saw that a lack of resources made failure seem inevitable for the kids at his school. He also saw the incredible resilience and character of the students, families, and teachers. He realized just how entrenched inequity in education is, but he gained confidence in his ability to help address it. Jack is now in his first year at Columbia Law School.
Yup. Jack went face to face with the challenges of poverty, saw what strengths were there, grabbed ahold of the problems of teaching in a low-resource classroom and decided-- to go to law school. But don't worry-- Kopp assures us that he "hopes to litigate for increased funding for education and better compliance with anti-discrimination and disability-rights laws."
But Kopp just can't stop. "Research confirms that working close to the roots of social issues early in one’s career fundamentally reshapes a person’s beliefs and life trajectory." And she connects some of that research to TFA, showing that yes, TFA is great because it provides an important formative experience for the TFA members. The actual students should, I guess, be happy to provide a useful learning experience for those college grads. It's almost as great as if someone provided learning for those students.
Kopp reminds us that her generation was known as the Me Generation. But when offered a "prestigious alternative to the corporate track," those college grads proved to be more "idealistic and civically committed than people assumed." So the trick was, I guess, offering a prestigious alternative like TFA and not a non-prestigious alternative like an actual teaching career.
Kopp comes real close to some insights here--
In 2024, 35 percent of Yale’s senior class entering the workforce chose jobs in finance and consulting; add tech into the mix, and the share rises to 46 percent. At other schools—including Harvard, Princeton, Claremont McKenna, and Vanderbilt—at least half of the graduating class moved into those three fields. Meanwhile, the data I’ve seen on the share of students taking jobs close to inequity and injustice suggest a decline across the same period.
Ah, but Wendy-- those graduates going into those fields are taking jobs close to inequity and injustice. They're just close to the winning side of those issues.
Some students, of course, feel they can’t afford to pursue less immediately lucrative careers. But if this was all that was holding graduates back, you’d expect to see more kids from wealthy backgrounds taking these jobs. Yet students from the highest-income backgrounds are the least likely to enter into public service and the most likely to pursue the corporate path.
Huh. Rich people don't want to help poor people, and don't even want to be around them? I feel like there's a really deep vein to be tapped here, but Kopp isn't going there.
Kopp points out that the corporate track has a well-funded recruitment arm and that colleges are eager to hoover up some of that money in a sort of collegiate product placement.
Kopp also sees an opportunity in the AI onslaught. Maybe, since AI is going to do all the entry level jobs, companies could "push back their recruiting timelines" while grads go out and get some human skill jobs, in communities tackling social problems. Not, mind you, that she thinks the grads should stay in that first job:
And young people themselves, even those who might want to run a major company someday, would benefit immensely from devoting the early years of their careers to such challenges.
Get those humaning skills, then move on to your real job.
There are so many blind spots in Kopp's essay, like her observation that "High schools should inspire students to step outside of their comfort zone and wrestle with pressing social issues," as if there are thousands of high schools where the students wrestle with pressing social issues every single day. Phillips Exeter Academy is not a typical high school.
But mostly it's this whole notion that the direct social work of the world should be done by fresh-faced college grads who only stay for a couple of years before they go on to the real lifetime work of, perhaps, amassing money or political power by occasionally remembering the social issues that they observed up close for a brief time. What does a school system look like when it is staffed mainly by people who never stay long enough to actually get good at the work of teaching? And are those people really fit "experts" to lead the world of education policy?
Takes me back to two classics from The Onion-- the point/counterpoint "My Year Volunteering As A Teacher Helped Educate A New Generation Of Underprivileged Kids vs. Can We Please, Just Once, Have A Real Teacher" and "Teach For America Celebrates 3 Decades Of Helping Recent Graduates Pad Out Law School Applications." I'm going to reread those now to get the taste of Kopp's ideas out of my head.
Sunday, March 1, 2026
ICYMI: Oh Great A New Frickin' War Edition (3/1)
It's hard to really capture the many levels on which the US attack on Iran is just stupid. Stupid stupid stupid. I'm not going to get into it here-- there is plenty of press about it and you probably couldn't miss it if you wanted to. But I surely hope that you are badgering your Congressperson.
In the meantime, the business of helping a country be less stupid remains super-important, so we will continue to pay attention. Here's your list for the week.
Wednesday, February 25, 2026
Google's AI Push For Schools
Google has scored another chance to get its products into schools in the form of a "sizable investment" in AI training. As Greg Toppo reports at The74, training will be offered through ISTE+ASCD (that's the fused Association for Supervision and Curriculum Development and the International Society for Technology in Education).
The justification will seem familiar. Per Toppo:
“We have just heard so much feedback from teachers that are just saying, ‘We are not prepared,’” said Richard Culatta, ISTE+ASCD’s CEO. “‘We don’t have the training, we don’t have the background that we need for the realities of teaching in an AI world, both teaching in the classroom and also, secondarily, but equally as important, preparing students for the world that they’re going to be in.’”
Sigh. I do believe that teachers are feeling swamped by the ongoing wave of AI stuff, the students who are using it, and the folks (including all too often administrators) hollering that they have to get on this bandwagon Right Now. I do believe that teachers need plenty of training to help them cope with this toxic tide of anti-human plagiarism machines.
You know what would be a lousy source for that training? The company that has bet the farm on being able to rope in a mountain of money to support that toxic tide. The company that has a vested interest in selling its product to every carbon-based life form on the planet. That company. Google.
Not that other education folks haven't made similarly terrible deals (looking at you, American Federation of Teachers). But why keep falling for this same pitch?
Particularly from Google, a company that was just caught referring to its work in education as "a pipeline for future users." Did we not already do this with the tobacco industry's attempts to enlist customers while they were still young enough to be enticed by cartoons? "You get that loyalty early, and potentially for life," said A) Google or B) RJ Reynolds. Is it bad for them? Who cares. Rake in those dollars!
This is Google, the folks who brought schools Chromebooks (described in education circles as "What if a laptop, only broken?"). We've let advanced computer tech run loose in schools, a solution in search of a problem, like a puppy looking for a good place to pee.
When the tech has a purpose, it can be great. I spent much of my career on the front lines of using desktop publishing tools to create yearbooks, and it was absolutely awesome. It was also purposeful and useful and sold itself exactly because it had utility, helping us do a job better than we could without it.
But that was not all of ed tech. And the high tech revolution was a nightmare of moving fast and breaking things, bringing us to headlines like the recent Fortune piece by Sasha Rogelberg-- "The U.S. spent $30 billion to ditch textbooks for laptops and tablets: The result is the first generation less cognitively capable than their parents."
Soooo many parents have handed their too-young children high tech tools, soothed at least in part by the fact that such tools were in their child's classroom, and surely the school would only use these tools because they knew the tools were safe and effective. Meanwhile, schools had no damned idea.
So AI is a chance to turbocharge this whole ed tech mess by injecting fantasy, magic, and more desperate profiteering into the equation.
Do schools and teachers need someone to help them cope with these dangerous bots? Do they need to learn how to help students and families cope with a revolution whose outlines we can barely grasp and whose story is a jumbled mash of fantasy, magical thinking, and utter bullshit? Should they be getting those answers from a company whose primary concern is selling as much of the AI service to as many people as possible for as much as they can collect? Gee, that's a stumper.
Meanwhile, we have the steady drumbeat of tech-fueled ecstasy and agony. Everyone should sign up for i-Ready! Oh, no-- turns out that i-Ready is terrible! The idea of putting students in front of a teaching machine is a century old, and yet has not produced a win for students yet-- just the occasional money for investors. And AI companies increasingly don't even try to pretend that they are aimed at helping students learn.
So can organizations that claim to care about education please just take a breath and slow down before selling out. Maybe take a moment to think about how to best serve the interests of students and society before signing up for the latest barely-disguised sales pitch from an AI company whose biggest concern is not education, but how they're going to make back some of the gazillion dollars they've poured into AI.
Monday, February 23, 2026
Is This The Most Bullshitty AI Product Bullshit So Far?
I apologize for the language, Mom. But some days.
I'm not sure anybody can pick the absolute worst AI company; it's like trying to pick the worst toxic waste dump. But this one is certainly a candidate. Here's the pitch for Companion's Einstein:
He logs into Canvas every day, watches lectures, reads essays, writes papers, participates in discussions, and submits your homework — automatically.
What the actual hell. The pitch is broken down into areas, so you know that Einstein can log into Canvas, watch videos, cover every subject, work while you sleep-- everything. In the FAQ section, it promises that your professor will never know, and that the bot will in fact get better at meeting the course expectations (well, you know, except the expectation that a human student will learn by doing the work). The FAQ even answers the question, "What if I want to do an assignment myself?" You can tell it to skip that assignment, though you can of course set the bot to auto-submit everything.
But hey-- as the website says:
Stop stressing. Start acing.
Einstein does the busywork so you don't have to.
Today's most powerful AI systems can reason through PhD-level problems, write production code, and generate entire applications from a sentence. They are, by any meaningful measure, brilliant.
Narrator's voice: They cannot do those things.
Yet every conversation starts from zero. Bad advice carries no cost, misunderstood values get forgotten by next session, and a decision that derails your month goes unnoticed and unlearned. Nothing compounds—including the responsibility.
The point seems to be that Companion won't forget you, like those other goldfish-powered bots (though ChatGPT is among those that are now supposed to remember your other "interactions" to better mine your data, er, better meet your needs). But it just gets more and more bizarre--
Oh for crying out loud. I suppose an AI can be "bound to a human," though "bought by a human" seems more accurate. But "loyal"? Nope. Able to figure out a human's long-term interests and align itself to them? Bullshit. How do I know it's bullshit? Because humans can't figure out their own long-term best interests. How else do I know? Because it would not be in the long-term best interests of a human to ditch an entire course and dodge an education by having a bot fake it!
But hey-- the company promises that "your companion knows what you're working toward and how you think." This is also bullshit, because no program knows how any human thinks. It does not even "know" what "thinking" is. The pitch here is also that your companion has a "private virtual computer" so that anything a human with a computer can do, your companion can do. I don't even know what to make of that, other than it may be the most effort yet put into trying to anthropomorphize a computer program. "No, this bot isn't a computer! It's a little tiny person, sitting inside the computer working on its own tiny little computer." I mean, damn-- how do I know that my companion isn't even logging onto its virtual computer, but has instead hired a companion of its own to do the work? I'm envisioning a series of ever-smaller digital Russian nesting dolls, each sitting at a tinier and tinier computer desk.
An extension of you so you can be more of you.
Human morality rarely begins as an abstract love for all of humanity. It begins with someone specific. Your child. Your partner. Your team. Your friend. Through concrete responsibility, care expands to the rest of the world.
This may, in fact, be how the sociopaths of Silicon Valley go about developing a moral sense, though let me suggest that if loving other humans doesn't start until you have a partner and a child, you may be a very troubled human being. This goes right up there with the Sam Altman quote circulating today:
People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.
But Companion isn't just talking about the origins of morality for humans, because "AI should develop the same way." Here's the wrap-up:
A companion shaped by one human life over time develops something closer to genuine responsibility. It learns your boundaries by crossing them and being corrected, your values by watching which suggestions you take and which you ignore, what trust means by earning yours slowly over months.
We believe an AI that cares for one human life is more likely to care for humanity itself.
So while you may think that Companion Inc is just offering an AI bot that can take classes and cheat effectively for you, it is actually a program that will save the entire damned human race by teaching the bots to care about us. Letting Einstein take your class, do your homework, and write your papers will lead it to love you and care for you, and through you, all of humanity. That sounds wonderful, and if we could somehow get the tech overlords who design these bots to care about human beings half as much, the world would be a better place.
I came across Einstein thanks to a former student who is now a college English professor at one of those places where administration thinks teachers should Get With The Program because AI Is The Future and students are going to use this stuff anyway, so maybe take a few minutes to teach them about Using AI Ethically. Which is bullshit on bullshit. Look at this product, AI-friendly administrator, and tell me how it should be used ethically, because ethical use of Einstein strikes me as absolutely impossible. Unless, I guess, you believe that using Einstein will teach our Robot Overlords to love us and care for us in a deeply moral way. But I have my doubts that even a college administrator could wade through that much bullshit.
Sunday, February 22, 2026
ICYMI: Ice Jam Edition (2/22)
My area made some national news this week when the ice started piling up on the Allegheny River and threatening communities. We can watch the river out our back window, but if it ever rises high enough to touch the house, that would be a sign of a waterpocalypse. We used to have bad winter floods in the region-- an epic ice jam and flood 100 years ago went on for three months-- but a large dam and some smaller bits of technology have made the area safer. It's one of those things where you don't think about what is keeping you safe because the result is a bunch of Not Happening.
Plenty to read this week. Here we go.
Defending the Promise: Public Education and the Fight for Democracy

Ten Commandments could go up in Tennessee public school
More performative anti-religion religious law, this time in Tennessee. Sam Stockard reports for Tennessee Lookout.
Wednesday, February 18, 2026
The AI Task Force and Moms For Liberty: It's Complicated
By fostering AI competency, we will equip our students with the foundational knowledge and skills necessary to adapt to and thrive in an increasingly digital society. Early learning and exposure to AI concepts not only demystifies this powerful technology but also sparks curiosity and creativity, preparing students to become active and responsible participants in the workforce of the future and nurturing the next generation of American AI innovators to propel our Nation to new heights of scientific and economic achievement.
The edict established the Artificial Intelligence Education Task Force, five words that, when crammed together by this administration, create some sort of field that overloads and destroys any irony in the vicinity. The federal AI Initiative offers a page of "resources" that looks much like a "list of folks hoping to make money from AI." That goes with the part calling for public-private partnerships.
A bunch of organizations and businesses and also more businesses have signed the presidential Pledge To America's Youth, in which [Your Name Here] pledges to "provide resources that foster early interest in AI technology, promote AI proficiency, and enable comprehensive AI training for parents and educators," all of which sounds much nicer than "We promise to hook customers as soon as they are born and do whatever we can to saturate the market. Ka-ching."
Specifically, over the next 4 years, we pledge to make available resources for youth, parents and teachers through funding and grants, educational materials and curricula, technology and tools, teacher professional development programs, workforce development resources, and/or technical expertise and mentorship.
Well, of course. Hey, did you hear the unsurprising discovery via internal documents that Google is using its education products to turn schools into a "pipeline of future users"? Is it any wonder that Dear Leader, our Grifter In Chief, wants to keep an eye on this new, promising money tree?
The initiative and task force are headed up by Michael Kratsios, whose previous gigs include Chief of Staff to Peter Thiel. He served in the first Trump administration in the Department of Defense, spent his interregnum as managing director of Scale AI and is now the director of the White House Office of Science and Technology Policy. In his current gig, he's calling to "demystify these amazing technologies" and figure out what AI is and is not good for, and then American families, students and educators "can fully take advantage of AI applications with confidence and responsibility." Perhaps he's unfamiliar with the research that shows that the more people know about AI, the less inclined they are to use it.
The task force has been meeting with folks to "discuss AI's impact in the classroom," which of course means everyone except people who actually work in classrooms. At their December confab, they heard from Chris Woolard of the Ohio Department of [Privatizing] Education, Adeel Khan of Magic School, and Tina Descovich, co-founder and current Big Cheese of Moms for Liberty.
M4L has some thoughts about AI in education. And, well, they aren't entirely terrible.
Along with tech companies acting responsibly, policymakers must do everything possible to make sure parents have full transparency into how AI systems operate, what data they collect, and how decisions or recommendations are made
By acting below, together we can ensure parents, not algorithms or activists, shape how AI is used in the education of our children.
Of course, they leave teachers out of the equation, perhaps because they can't quite figure out how to work "we think teachers are sometimes okay, but we hate their evil unions" into this equation. But their slogan for AI-- "Demand transparency, accountability, and boundaries" -- is not bad. And they do better by teachers elsewhere-- we'll get to that.
They've got a pledge to sign, and it hits all the usual M4L notes--
It's the usual "parents' fundamental right etc" song and dance, but that song and dance in the face of a plagiarism-driven data-mining monster makes some sense. It also suggests that M4L and its ilk are not quite ready to jump on the White House's grifty AI bandwagon. The M4L pledge certainly strikes a different tone than the White House's AI Pledge to America's Youth.
M4L also has a model school board policy and a model bill for legislatures. The school board policy lists four purposes:
1. Protect parental rights and student privacy;
2. Preserve the central role of teachers in instruction;
3. Maintain academic integrity; and
4. Ensure transparency and accountability in the use of emerging technologies.
The policy calls for no AI tools to be used without prior parental consent. The school should annually provide written notice of all AI tools approved for use.
There's a whole section on "instructional safeguards" that states as its first point
Artificial intelligence shall not replace a certified teacher in providing core academic instruction or assigning final grades.
Which doesn't go quite far enough (AI should assign no grades at all), but still is a more blunt defense of actual human teaching than anything the administration has offered.
M4L also seems to understand the AI threat to all manner of data that can be collected from young humans far better than plenty of other folks (for God's sake, stop inviting ChatGPT to scan all your social media content so it can make you a cute cartoon of yourself).
The M4L model legislation is much of the same stuff with more expansive lawmakery language, but again, they seem to understand the issues here:
While artificial intelligence may offer instructional benefits, its use also presents risks, including data privacy violations, diminished academic integrity, ideological bias, and inappropriate replacement of human educators.
Well, yeah.
It's an unusual day when we don't find M4L falling right in behind Dear Leader and nodding along with whatever his crew has to say, and I would love to think that this shows a bit of a fissure between the pro-"any corporate entity that might enrich me" wing of MAGA and the right-wing conspiracy-crew wing of MAGA. It almost smells a bit like that time a whole lot of Very Conservative Folks went rogue over Common Core.
But if the Moms want to join in the resistance to throwing AI into classrooms Right Away because if we don't OMG students won't be ready for the jobs of tomorrow because AI is inevitable and awesome and so much better than all those troublesome human meat widgets-- anyway, if the Moms want to stand up to all of that, I'm happy to see it. I am definitely staying tuned. Can AI make popcorn?