Monday, June 9, 2025

Another Bad AI Classroom Guide

We have to keep looking at these damned things because they share so many telltale characteristics, and we need to learn to recognize those so we can spot these guides when we see them again and react properly, i.e. by throwing moldy cabbage at them. I read this one so you don't have to.

And this one will turn up lots of places, because it's from the Southern Regional Education Board.

SREB was formed in 1948 by governors and legislators; it now involves 16 states and is based in Atlanta. Although it involves legislators from each of those states, some appointed by their governors, it is a non-partisan, nonprofit organization. In 2019 they handled about $18 million in revenue. In 2021, they received a $410K grant from the Gates Foundation. Back in 2022, SREB was a cheerful sock puppet for folks who really wanted to torpedo tenure and teacher pay in North Carolina. 

But hey-- they're all about "helping states advance student achievement." 

SREB's "Guidance for the Use of AI in the K-12 Classroom" has a big fat red flag right off the top-- it lists no authors. In this golden age of bullshit and slop, anything that doesn't have an actual human name attached is immediately suspect.

But we can deduce who was more or less behind this-- the SREB Commission on Artificial Intelligence in Education. Sixteen states are represented by sixty policymakers, so we can't know whose hands actually touched this thing, but a few names jump out.

The chair is South Carolina Governor Henry McMaster, and his co-chair is Brad D. Smith, president of Marshall University in West Virginia and former Intuit CEO. As of 2023, he passed Jim Justice as richest guy in WV. And he serves on lots of boards, like Amazon and JPMorgan Chase. Some states (like Oklahoma) sent mostly legislators, while some sent college or high school computer instructors. There are also some additional members including Youngjun Choi (UPS Robotics AI Lab), Kim Majerus (VP US Public Sector Education for Amazon Web Services) and some other corporate folks.

The guide is brief (18 pages). Its basic pitch is, "AI is going to be part of the working world these students enter, so we need schools to train these future meat widgets so we don't have to." The introductory page (which is certainly bland, vague, and voiceless enough to be a word string generated by AI) offers seven paragraphs that show us where we're headed. I'll paraphrase.

#1: The internet and smartphones mean students don't have to know facts. They can just skip to the deep thinking part. But they need critical thinking skills to sort out online sources. How are they supposed to think deeply and critically when they don't have a foundation of content knowledge? The guide hasn't thought about that. AI "adds another layer" by doing all the work for them, so now they have to be good prompt designers. Which, again, would be hard if you didn't know anything and had never thought about the subject.

#2: Jobs will need AI. AI must be seen as a tool. It will do routine tasks, and students will get to engage in "rich and intellectually demanding" assignments. Collaborative creativity! 

#3: It's inevitable. It is a challenge to navigate. Stakeholders need guidance to know how to "incorporate AI tools while addressing potential ethical, pedagogical, and practical concerns." I'd say "potential" is holding the weight of the world on its shoulders. "Let's talk about the potential ethical concerns of sticking cocaine in Grandma's morning coffee." Potential.

#4: This document serves as a resource. "It highlights how AI can enhance personalized learning, improve data-driven decision-making, and free up teachers’ time for more meaningful student interactions." Because it's going to go ahead and assume that AI can, in fact, do any of that. Also, "it addresses the potential risks, such as data privacy issues, algorithmic biases, and the importance of maintaining the human element in teaching." See what they did there? The good stuff is a given certainty, but the bad stuff is just a "potential" down side.

#5: There's a "skills and attributes" list in the Appendix.

#6: This is mostly for teachers and admins, but lawmakers could totally use it to write laws, and tech companies could develop tech, and researchers could use it, too! Multitalented document here.

#7: This guide is to make sure that "thoughtful and responsible" AI use makes classrooms hunky and dory.

And with that, we launch into The Four Pillars of AI Use in the Classroom, each followed by uses and cautions.

Pillar #1
Use AI-infused tools to develop more cognitively demanding tasks that increase student engagement with creative problem-solving and innovative thinking.

"To best prepare students for an ever-evolving workforce..." 

"However, tasks that students will face in their careers will require them..."

That's the pitch. Students will need to be able to think "critically and creatively." So they'll need really challenging and "cognitively demanding" assignments. "Now more than ever, students need to be creators rather than mere purveyors of knowledge."

Okay-- so what does AI have to do with this?
AI draws on a broad spectrum of knowledge and has the power to analyze a wide range of resources not typically available in classrooms.
This is some fine-tuned bullshit here, counting on the reader to imagine that they heard something that nobody actually said. AI "draws on" a bunch of "knowledge" in the sense that it sucks up a bunch of strings of words that, to a human, communicate knowledge. But AI doesn't "know" or "understand" any of it. Does it "analyze" the material? Well, in the sense that it breaks the words into tokens and performs complex maths on them, there is a sort of analysis. But AI boosters really, really want you to anthropomorphize AI, to think about it as human-like in nature and not alien and kind of stupid.

"While AI should not be the final step in the creative process, it can effectively serve in the early stages." Really? What is it about the early stages that makes them AI-OK? I get it--up to a point. I've told students that they can lift an idea from somewhere else as long as they make it their own. But is the choice of what to lift any less personal or creative than what one does with it? Sure, Shakespeare borrowed the ideas behind many of his plays, but that decision about what to borrow was part of his process. I'd just like to hear from any of the many people who think AI in beginning stages is okay why exactly they believe that the early stages are somehow less personal or creative or critical thinky than the other stages. What kind of weird value judgment is being made about the various stages of creation?

Use AI to "streamline" lesson planning. Teach critical thinking skills by, and I'm only sort of paraphrasing here, training students to spot the places where AI just gets stuff wrong. 

Use AI to create "interactive simulations." No, don't. Get that AI simulation of an historical figure right out of your classroom. It's creepy, and like much AI, it projects a certainty in its made-up results that it does not deserve. 

Use AI to create a counter-perspective. Or just use other humans.

Cautions? Everyone has to learn to be a good prompt engineer. In other words, humans must adjust themselves to the tool. Let the AI train you. 

Recognize AI bias, or at least recognize it exists. Students must learn to rewrite AI slop so that it sounds like the student and not the AI, although how students develop a voice when they aren't doing all the writing is rather a huge challenge as well. 

Also, when lesson planning, don't forget that AI doesn't know about your state standards. And if you are afraid that AI will replace actual student thinking, make sure your students have thought about stuff before they use the AI. Because the assumption under everything in this guide is that the AI must be used, all the time.

Pillar #2
Use AI to streamline teacher administrative and planning work.

The guide leads with an excuse-- "teachers' jobs have become increasingly more complex." Have they? Compared to when? The guide lists the usual features of teaching-- same ones that were there when I entered the classroom in 1979. I call bullshit. 

But use AI as your "planning partner." I am sad that teachers are out there doing this. It's not a great idea, but it makes sense for a generation that entered the profession thinking that teacher autonomy was one of those old-timey things, about as relevant as those penny-farthings grampa goes on about. And these suggestions for use. Yikes.

Lesson planning! Brainstorming partner! And, without a trace of irony, a suggestion that you can get more personalized lessons from an impersonal non-living piece of software.

Let it improve and enhance a current assignment. Meh. Maybe, though I don't think it would save you a second of time (unless you didn't check whether AI was making shit up again). 

But "Help with Providing Feedback on and Grading Student Work?" Absolutely not. Never, ever. It cannot assess writing quality, it cannot do plagiarism detection, it cannot reduce grading bias (just replace it). If you think it even "reads" the work, check out this post. Beyond the various ways in which AI is not up to the task, it comes down to this-- why would your students write a work that no other human being was going to read?

Under "others," the guide offers things like drafting parent letters and writing letters of recommendation, and again, for the love of God, do not do this! Use it for translating materials for ESL students? I'm betting translation software would be more reliable. Inventory of supplies? Sure, I'm sure it wouldn't take more than twice as much time as just doing it by eyeball and paper. 

Oh, and maybe someday AI will be able to monitor student behavior and engagement. Yeah, that's not creepy (and improbable) at all.

Cautions include a reminder of AI bias, data privacy concerns, and overreliance on AI tools and decisions, and I'm thinking "cautions" is underselling the issues here. 

Pillar #3
Use AI to support personalized learning.

The guide starts by pointing out that personalized learning is important because students learn differently. Just in case you hadn't heard. That is followed by the same old pitch about dynamically adaptive instruction based on data collected from prior performance, only with "AI" thrown in. Real time! Engagement! Adaptive!

AI can provide special adaptations for students with special needs. Like text-to-speech (is that AI now?). Also, intelligent tutoring systems that "can mimic human tutors by offering personalized hints, encouragement and feedback based on each student's unique needs." So, an imitation of what humans can do better. 

Automated feedback. Predictive analytics to spot when a student is in trouble. AI can pick student teams for you (nope). More of the same.

Cautions? There's a pattern developing. Data privacy and security. AI bias. Overreliance on tech. Too much screen time. Digital divide. Why those last two didn't turn up in the other pillars I don't know. 

Pillar #4
Develop students as ethical and proficient AI users.

I have a question-- is it possible to find ethical ways to use unethical tools? Is there an ethical way to rob a bank? What does ethical totalitarianism look like?

Because AI, particularly Large Language Models, is based on massive theft of other people's work. And that's before we get to the massive power and water resources being sucked up by AI. 

But we'll notice another point here-- the problems of ethical AI are all the responsibility of the student users. "Teaching students to use AI ethically is crucial for shaping a future where technology serves humanity’s best interests." You might think that an ethical future for AI might also involve the companies producing it and the lawmakers legislating rules around it, but no-- this is all on students (and remember-- students were not the only audience the guide listed) and by extension, their teachers. 

Uses? Well, the guide is back on the beginning stages of writing:
AI can also help organize thoughts and ideas into a coherent outline. AI can recommend logical sequences and suggest sections or headings to include by analyzing the key points a student wants to cover. AI can also offer templates, making it easier for students to create well-structured and focused outlines.

These are all things the writer should be doing. Why the guide thinks using AI to skip the "planning stages" is ethical, but using it in any other stages is not, is a mystery to me.

Students also need to develop "critical media literacy" because the AI is going to crank out well-polished turds, and it's the student's job to spot them. "Our product helps dress you, but sometimes it will punch you in the face. We are not going to fix it. It is your job to learn how to duck."

Cross-disciplinary learning-- use the AI in every class, for different stuff! Also, form a student-led AI ethics committee to help address concerns about students substituting AI for their own thinking. 

Concerns? Bias, again. Data security-- which is, incidentally, also the teacher's responsibility. AI research might have ethical implications. Students also might be tempted to cheat-- the solution is for teachers to emphasize integrity. You know, just in case the subject of cheating and integrity has never ever come up in your classroom before. Deepfakes and hallucinations damage the trustworthiness of information, and that's why we are calling for safeguards, restrictions, and solutions from the industry. Ha! Just kidding. Teachers should emphasize that these are bad, and students should watch out for them.

Appendix

A couple of charts showing aptitudes and knowledge needed by teachers and admins. I'm not going to go through all of this. A typical example would be the "knowledge" item: "Understand AI's potential and what it is and is not." The "is and is not" part is absolutely important, and the guide absolutely avoids actually addressing what AI is and is not. That is a basic feature of this guide-- it's not just that it doesn't give useful answers, but that it fails to ask useful questions. 

It wraps up with the Hess Cognitive Rigor Matrix. Whoopee. It's all just one more example of bad guidance for teachers, but good marketing for the techbros. 



Sunday, June 8, 2025

ICYMI: Birthday Board Edition (6/8)

This week the Board of Directors here at the Institute celebrated their birthday. This involved some extended book store time and a day at Waldameer Park in Erie, an old amusement park that the Chief Marital Officer and I had not visited in many years. The board was both delighted and exhausted, and I got enough steps in that I believe I can just sit for the upcoming week. That's how that works, right?

Have some reading.

Diabolus Ex Machina

Amanda Guinzburg tries some new games with AI and ends up providing yet another demonstration of how terrible chatbots are at doing the most simple reading assignments.

Texas Schools to Get a Bit More Cash and a Lot More Christian Nationalism

Just how bad for public education was this last session of the Texas legislature? Brant Bingamon breaks it down for the Austin Chronicle.

How Educators Can Escape Toxic Productivity

Peter DeWitt and Michael Nelson at Ed Week address one of the oldest problems in education--the expectation that a good, productive teacher will just beat the living crap out of herself to do the job.

Big Changes and Controversy in Oakland

Why do I often include highly specific and local pieces, like this one from Thomas Ultican? Because what is happening elsewhere often illuminates what is about to happen in your neck of the woods. Including twisty board vs. superintendent politics.

Kids: 1, ICE: 0

ICE grabbed a high school kid on his way to volleyball practice, and a whole community rose up to protest. Jennifer Berkshire with an encouraging story from her neck of the woods.

Book-banning, Book-burning, Book reading—and Truth

It is disheartening when a community you love has important institutions commandeered by the anti-book crowd. Nancy Flanagan tells her own story of a small Michigan community.

Hard Times

Audrey Watters finds a connection between Charles Dickens and the modern day "just teach facts" crowd and bad tech, plus a load of excellent links. 

Do "pronatalists" like Musk care about children and babies?

Okay, not a hard question to answer. But Steve Nuzum digs deep into the natalism crowd's issues, and it's not pretty.

Chall’s Missing STAGES OF READING DEVELOPMENT in the Science of Reading

Nancy Bailey points out some critical info that the Science of Reading crowd misses.

Larry Cuban is not excited about the idea of robots providing human care.

They Want Missouri Education Policies to go Nationwide

Just how bad has it gotten in Missouri? Jess Piper, noted activist, paints the broad picture.

Ohio Senate Budget Plan Released on Tuesday Bodes Ill for Ohio Public School Funding

Jan Resseger breaks down the details in Ohio's newest attempt to become the Florida of the Midwest.

Rain, Meet Piss: How Ohio Keeps Screwing Over Public School Kids

Yeah, Stephen Dyer has some thoughts about that budget as well. 

Nary a Deviation From The Playbook

TC Weber continues to chronicle Penny Schwinn's rise from Tennessee embarrassment to national embarrassment. He actually followed her confirmation hearing, and has some notes.

Exposed: University of Michigan Hired Undercover Spies to Target Students

Julian Vasquez Heilig reacts to reporting that his alma mater has hired goons to spy on students.

From Policy to Prosecution: Florida Raises The Stakes for School Boards

In Florida, right wingers continue to use manufactured outrage over naughty books to attack public schools, and they've decided to throw in threats of criminal prosecution. Sue Kingery Woltanski reports.

A little Gilbert and Sullivan today, with Kevin Kline working really hard!


Thursday, June 5, 2025

NM: Stride Caught Misbehaving Yet Again

A New Mexico school district has terminated its contract with Stride Inc., the 800-pound gorilla of the cyber school world, after a load of legal and academic violations. It's not a new issue with the company, which generally seems to consider educating students a mission secondary to the search for more profit.

Who are these guys?

Stride used to be K12, a for-profit company aimed at providing on-line and blended learning. It was founded in 2000 by Ron Packard, former banker and McKinsey consultant, and quickly became the leading national company for cyber schooling.

One of its first big investors was Michael Milken. That investment came a decade after he pled guilty to six felonies in the "biggest fraud case in the securities industry," ending his reign as the "junk bond king." Milken was sentenced to ten years, served two, and was permanently barred from the securities industry. In 1996, he had established Knowledge Universe, an organization he created with his brother Lowell and Larry Ellison, who both kicked in money for K12.

Also investing in K12, very quietly, was the financial giant BlackRock, founded and run by Larry Fink. Larry graduated from the same high school as Milken. Larry's brother Steve is a member of the Stride board, and at one point ran a division of Knowledge Universe. Larry Fink is noted for his privacy about family, and a search for the two brothers' names turns up only one article-- a Forbes piece from 2000 which notes that Steve Fink, in 1984, moved next door to Michael Milken and went on to become "one of Milken's most trusted confidants," a "guy he's relied on to fix business trouble."

Have they been in trouble before?

Oh lordy. Here's a partial list.

In 2011, the New York Times detailed how K12's schools were failing miserably, but still making investors and officers a ton of money. Former teachers wrote tell-alls about their experiences. In 2012, Florida caught them using fake teachers. The NCAA put K12 schools on the list of cyber schools that were disqualified from sports eligibility. In 2014, Packard turned out to be one of the highest paid public workers in the country, "despite the fact that only 28% of K12 schools met state standards in 2011-2012."

In 2013 K12 settled a class action lawsuit in Virginia for $6.75 million after stockholders accused the company of misleading them about “the company’s business practices and academic performance.” In 2014, Middlebury College faculty voted to end a partnership with K12 saying the company’s business practices “are at odds with the integrity, reputation and educational mission of the college.”

Packard was himself sued for misleading investors with overly positive public statements, and then selling 43% of his own K12 stock ahead of a bad news-fueled stock dip. Shortly thereafter, in 2014, he stepped down from leading K12 to start a new enterprise.

In 2016 K12 got into yet another round of trouble in California for lying about student enrollment, resulting in a $165 million settlement with then Attorney General Kamala Harris. K12 was repeatedly dropped in some states and cities for poor performance.

In 2020, they landed a big contract in Miami-Dade County (after a big lucrative contribution to an organization run by the superintendent); subsequently Wired magazine wrote a story about their "epic series of tech errors." K12 successfully defended itself from a lawsuit in Virginia based on charges that it had greatly overstated its technological capabilities by arguing that such claims were simply advertising "puffery."

In November of 2020, K12 announced that it would rebrand itself as Stride.

The New York Times had quoted Packard as calling lobbying a “core competency” of the company, and the company has spread plenty of money around doing just that. And despite all its troubles, Stride was still beloved on Wall Street for its ability to make money.

In 2023, Stride found itself wrapped up in a lawsuit with one of its own divisions over broken promises and attempts to lie its way out of commitments. 

In 2024, analysts were warning investors away from Stride, saying that, among other things, Stride was lying to investors about how many schools it was operating and using ghost students to puff up enrollment numbers. Later that year, Senator and noted MAGA doofus Markwayne Mullin was in trouble for shenanigans with his Stride stock. 

So, yes, Stride has never been tightly bound by rules.

Who's actually running the outfit these days?

Since 2021, the CEO has been the former CFO, James Rhyu. He is a corporate bean counter, not an educator. The Fuzzy Panda report in 2024 discussed Rhyu's "colorful leadership style"; FP says that "the phrase asshole came up frequently." Former execs also told FP about incidents of rage and bullying: "management by fear, bullying control freak." I've read plenty of pages of the man's depositions, and "slippery weasel" also comes to mind. This example captures his style pretty well:
Q: Mr. Rhyu, are you a man of your word?
Rhyu: I’m not sure I understand that question.
Q: Do you do what you say you are going to do, sir?
Rhyu: Under what circumstances?
Q: Do you do what you say you are going to do, Mr. Rhyu?
Rhyu: That’s such a broad question. It’s hard for me to answer.

 Is it hard to answer? Because I feel as if it's really easy to answer. It's one thing to offer the "correct" answer and not mean it, but it's a whole other level to pretend that you can't imagine what the correct answer might be.

So what happened in New Mexico?

Gallup-McKinley County Schools includes 4,957 square miles of territory, including some reservations. There are 12,518 students enrolled. 48% of the children in the district live below the poverty line. 

So the district hired Stride to provide an online program, and that was not going well. According to the district's press release, the data was looking ugly:

* Graduation rates in GMCS's Stride-managed online program plunged from 55.79% in 2022 to just 27.67% in 2024.
* Student turnover reached an alarming 30%.
* New Mexico state math proficiency scores for Stride students dropped dramatically, falling to just 5.6%.
* Ghost enrollments and a lack of individualized instruction further compromised student learning.

At the special May 16 board meeting to terminate the contract, the board was feeling pretty cranky.

The district said that the company is failing to meet requirements outlined in their contract. "This is something we've literally been working on since the beginning of the year with Stride, and we just finally had a belly full of it and we're ready to make a change," said Chris Mortensen, President of Gallup-McKinley Schools Board of Education.

The board voted unanimously not just to end the contract, but to seek damages. Stride filed a motion for a restraining order to keep the board from firing them. The court said no. 

Mortensen has had plenty to say about the situation. From the district's press release:

GMCS School Board President Chris Mortensen stated, "Our students deserve educational providers that prioritize their academic success, not corporate profit margins. Putting profits above kids was damaging to our students, and we refuse to be complicit in that failure any longer."

Stride CEO James Rhyu has admitted to failing to meet New Mexico's legal requirements for teacher-student ratios, an issue that GMCS suspects was not isolated. "We have reason to believe that Stride has raised student-teacher ratios not just in New Mexico but nationwide," said Mortensen. "If true, this could have inflated Stride's annual profit margins by hundreds of millions of dollars. That would mean corporate revenues and stock prices benefited at the expense of students and in some cases, in defiance of the law."
"Gallup-McKinley County Schools students were used to prop up Stride's bottom line," said Mortensen. "This district, like many others, trusted Stride to deliver education. Instead, we got negligence cloaked in corporate branding."

The district is looking for another online school provider, and I wish them luck with that. Parents in the wide-ranging district liked the online option, and want something to replace Stride. But finding a cyber-school company that will provide the oversight, transparency and accountability that GMCS wants (not to mention the non-profiteering) is likely to be a challenge. Because if the high-capacity 800 pound gorilla of cyber-school has to cheat to make a buck in your district, who else is going to do any better? 

Of course, that's the Stride business model, so maybe there's hope. Maybe. Stride, for its part, can be expected to just keep grinding away, unchastened and searching for the next district that hasn't done enough homework to see through Stride's sales pitch. 

 

Sunday, June 1, 2025

The Trouble With Public-Private Partnerships

The McKeesport School District (in the greater Pittsburgh area) thought it had a great deal. Dick's Sporting Goods, the massive sporty stuff retailer, wanted to team up its foundation with the not-very-wealthy district. It was just the kind of public-private partnership that some folks would love to see more of.

Launched in 2021, the partnership kicked off with Dick's investing a cool $13 million. The school was seeing some real benefits, especially in facilities. The high school got a weight room. There were playground upgrades. Summer programs.

And now the whole thing is over, with Dick's terminating the agreement. And it seems to come down to a conflict with the current district leadership. In a statement, Dick's said:
From the beginning, we were clear that we weren't just looking to provide funding, we were looking to be a true partner sitting side by side with the McKeesport team to reimagine how the elementary school experience could be approached in a holistic way – one that serves the whole child, their family and the community.
Unfortunately, the current school board and district leadership did not uphold the written partnership agreement we had in place. When we sought a path forward, the school board president made it clear that there was ‘no page to get on.’ That response left no room for continued collaboration.

I could go digging for the nature of the disagreement, but I'm not sure I actually care who's right or wrong here because what jumps out at me is that the corporate partner yanked funding because they didn't approve of the choices made by district leaders and were disappointed that they, the private corporation whose primary business is selling sports equipment, did not have more say in how the school district was run. 

I don't care if the Dick's folks are the rightest right people in the history of being right-- I am extremely uncomfortable with the notion that a private company should be able to buy a controlling interest in a school district. Even if Dick's is on the side of the angels here, this creates a system of control that is too easy to corrupt and which disenfranchises the voters of the district.

On top of that, Dick's apparently told Channel 11 that "it remains open to the possibility of future partnership opportunities under new leadership." In other words, their money is now an election issue in McKeesport. Will they sponsor ads saying, "Vote for this person and we'll give your school some more money"? Because that seems like a bad thing. 

Most of the press coverage includes folks with all sorts of connections to the district saying that it was great that Dick's invested in the district and the money was a help and it sucks that now it's gone, and I certainly get that. But Dick's is disappointed that they didn't get to redesign "the elementary school experience," and that's just bananas. 

Again, I haven't looked into the specifics. Maybe Dick's ideas are appallingly terrible, or just the kind of slop we often get from well-meaning amateurs. Maybe Dick's ideas were fabulous and the board and superintendent are dopes. The cure for that is not to have a private corporation come in and buy a say in how the district is run. The cure is to elect board members who aren't dopes. This kind of "partnership" is no way to run a public school system. 

ICYMI: Summer Launch Edition (6/1)

It comes at different times in different areas, but for the Board of Directors and the Chief Marital Officer, summer vacation starts this week. It's a curious custom (which is not related to setting the young'uns free to work on the farm) but some traditions are hard to fight. 

Here we go with this week's reading. Remember to share and amplify.

We Got a Date

T C Weber with an update on Penny Schwinn, an experienced edugrifter headed for a federal job.


Jose Luis Vilson reminds teachers about one particular group we learn from.

One Year in With a Shitty Phone Policy

Matt Brady brings the sass with this reflection on the predictable results of a phone policy at his school.


Oh, the various issues that come up when you decide that nobody is allowed to call a student by the name the student has chosen. Nobody? Hmm...

AI is Maybe Sometimes Better than Nothing

Michael Pershan takes a look at that miracle paper about AI in Nigeria and, well, about the miraculous part...

Georgia high school cancels "The Crucible" after complaints of "demonic" themes

It's panic time in Georgia, where the school administration lacks the backbone to stand up to one wingnut parental unit.

19-Year-Old College Student Pleading Guilty in PowerSchool Data Hack

A Massachusetts college student is behind the big PowerSchool hack and subsequent extortion attempt. He's in some trouble now. 

Cybercharter school reform is unfinished business in Pa.

Boy, is it ever. The president of the state school board association makes the case one more time in the Morning Call.

Declining Dems for Education Reform (DFER) Seeks Salvation in MAGA Regime

Dark money expert Maurice Cunningham tracks the latest chapter in the continuing saga of those faux democrats at DFER.


Thomas Ultican digs into the current state of charter shenanigans in California.

When the Middle Fails: What Weak Educational Leadership Really Looks Like

We don't talk about lousy administrators often enough. Julian Vasquez Heilig presents ten qualities too frequently found in education's middle managers.


Paul Thomas explains once again why the Science of Reading folks are leading us down the wrong path.

Doctored Doom

Remember when MOOCs were going to kill all the universities? Audrey Watters does, and she has some lessons for us from that marketing-masquerading-as-prediction.

Misty Her admits list of alleged personal attacks by teachers union was AI generated

In Fresno, the superintendent charged that the union was harassing her through social media posts and e-mails. She shared documentation. Turns out her staff handed the compiling job over to AI, and--oopsies! Not quite accurate.

The AI Slop Scandal Around the MAHA Report Is Getting Worse

Fresno superintendent shouldn't feel bad-- the doofus running the Department of Health and Human Services did the same damn thing. But once you look past the really obvious AI slop, turns out you find-- more slop.

Desperate Times, Desperate Measures

If you like your AI skepticism straight up and sharp-edged, Ed Zitron is your guy. 

This week at the Bucks County Beacon, I explained why the voucher language hiding in the Big Beautiful Bill is Bad News.

Over at Forbes.com, I took a look at the newly-released budget request for the Department of Ed. Not pretty. 

If you are a young human of a certain age (or any age really because some cartoon shows work for fans of all ages), the other big news for the upcoming week is that a new season of Phineas and Ferb is dropping next Saturday. Here at the Institute, we are cautiously excited.


As always, you are invited to subscribe to the newsletter, and whenever I drop something onto the interwebs, it will fall into your inbox. Free now and always.


Friday, May 30, 2025

Ted Cruz and Federal Vouchers, Again

Rafael Edward Cruz, noted Canadian-born legislator, has some educational ideas to pitch, and the Washington Post, increasingly shorthanded in the opinion department, gave him the space to do it. 

He's actually got two ideas to pitch as "bold, transformational policies" that "we’ll still be talking about 10, 20, even 30 years from now."

One is a Senate version of the voucher language tucked away in the House's Big Beautiful Bill. Cruz wants to offer $10 billion in tax credits (aka $10 billion in revenue cut from the federal budget). And he offers the same old baloney.
School choice is the civil rights issue of the 21st century. Every child in America deserves access to a quality education, regardless of their Zip code, their race or their parents’ income. Parents should be empowered to decide what education is best for their child.

Bullshit. How do we know this isn't a serious argument? Because it does not address the true obstacles to school choice. Spoiler alert: the obstacles are not the teachers union, the deep state, or bureaucratic red tape.

The obstacles to school choice are--

1) Expense. It costs a lot to send your kid to a real private school--far more than almost any school voucher in existence. So far, none of the federal voucher proposals have even put a dollar amount on the vouchers to be offered, and certainly nobody has pledged that the voucher should match private school tuition so that choice is truly, completely available to each and every family. 

2) Discrimination. Voucher laws now routinely make special efforts to keep sacrosanct the right of private schools to reject any students they want to reject. The ability of the school to operate according to whatever rules it wishes to follow is valued above the student's ability to choose. If "every child in America" deserves access to the school of their dreams, then propose some laws that value the child's right to choose the school above the school's right to reject that child.

3) Accountability. Cruz says every child deserves access to "quality education," but no federal voucher proposal includes any sort of mechanism for ensuring that a voucher school will actually provide quality education and not turn out to be some sad grifter with a six-month lease in a strip mall. If you want everyone to have a choice of quality options, some sort of regulations must be created and enforced to guarantee families that their choice will be a good one. And no-- saying that the market will regulate this by letting the invisible hand drive bad schools out of business is no real answer. Even if the invisible hand actually works, it works far too slowly for students whose years in school are irreplaceable.

There are plenty of other aspects of school vouchers to debate, but if your proposal for choice doesn't address those three factors, I'm going to suspect that you are not serious about the whole "civil rights of our era" shtick. 

Cruz also proposes the "Invest in America Act," which we might also call the "Get Babies Hooked On Capitalism Act."

The Invest America Act will trigger fundamental and transformative changes for the financial security and personal freedoms of American citizens for generations. Every child in America will have private investment accounts that will compound over their lives, enhancing the prosperity and economic participation of the vast majority of Americans.

Every newborn gets a private investment account of $1,000. After that, "family, friends or employers" could pitch in up to $5,000. It can sit there churning away until the young human turns 18, at which point capital gains taxes will apply. 
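For a sense of scale, here's a quick sketch of the "miracle of compounded growth" Cruz is promising. The 7% average annual return is my hypothetical assumption, not anything in Cruz's proposal, and it ignores fees, contributions beyond the seed money, and taxes:

```python
def future_value(principal, annual_rate, years):
    # Compound growth: value = principal * (1 + rate) ** years
    return principal * (1 + annual_rate) ** years

# The $1,000 seed, left untouched at a hypothetical 7% per year
# from birth to age 18 (no extra contributions, fees, or taxes):
nest_egg = future_value(1000, 0.07, 18)
# roughly $3,380 before capital gains tax kicks in
```

A tidy sum, but not exactly a life-changing fortune unless family, friends, or employers keep feeding the account.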

What's the point here? It would certainly get a whole lot of people interested in eliminating capital gains taxes. And the windfall for companies that manage private investment accounts would be massively massive. But Cruz thinks it would create a cultural shift, because every American would have "skin in the game":

Every child would also now be a shareholder in America’s major businesses. All of us have seen the sad statistics about how many young people distrust capitalism or support socialism. This policy would create a whole generation of capitalists. When a teenager opens her app and sees her investment account, she would see that she owns, say, $50 in Apple, $75 in Boeing and $35 in McDonald’s. Those wouldn’t simply be big, scary corporations — she’d be one of their owners.

I have met teenagers, and this scenario seems unlikely. But how very Cruzian to assume that people would change their mind about the system strictly because they themselves benefited from it personally. "I used to be upset about how capitalistic systems oppressed the poor in other countries, but now that I see my $35 in McDonalds growing, I don't feel so bad about clearcutting the rain forest in order to factory farm more fast food beef." 

And when Cruz starts saying things like this--

First, children across America would experience the miracle of compounded growth.

I get flashbacks to the least-beloved song in Disney's Mary Poppins, when the aged Mr. Dawes tries to convince Michael to hand over his tuppence. 

If you invest your tuppence
Wisely in the bank
Safe and sound
Soon that tuppence, safely
Invested in the bank
Will compound

And you'll achieve that sense of conquest
As your affluence expands
In the hands of the directors
Who invest as propriety demands

Giving money to every child in America? Not the worst thing. Expecting that the feds can somehow bribe children into loving capitalism? That shows the same keen understanding of the thoughts and feelings of carbon-based life forms that Cruz has displayed throughout his career. 

Thursday, May 29, 2025

AI Is Not A Calculator

One of the popular pro-AI arguments these days is that adopting AI in classrooms is just like back in the day when calculators wormed their way into the classroom. 

"Even with calculators, students still have to learn fundamentals like times tables," the argument goes. "But calculators simplified things, got rid of penmanship-related errors, and ultimately just helped students do their thing more quickly and efficiently." So AI will just get folded into the educational process, a sort of digital helper. 

Well, no. How is AI not at all like a calculator? Let me count the ways.

Calculators are consistent and reliable. Punch in 3 x 12 and it will spit out 36. If you ask it to multiply 3 and 12 a thousand times, the calculator will spit out 36 a thousand times. But if I ask ChatGPT to write a response to the same prompt a thousand times, it will give me a thousand different answers. Here are just a couple of the responses I got by asking it to write a single sentence comparing Hamlet and Huckleberry Hound:

While Hamlet broods over existential dilemmas and the weight of revenge, Huckleberry Hound ambles through life with laid-back charm and a carefree tune.

Hamlet is a tormented prince consumed by introspection and tragedy, while Huckleberry Hound is a cheerful, easygoing cartoon dog who breezes through life’s mishaps with a song.

Broadly similar, but with significant differences. Structurally, each sentence uses a subordinate clause to center a different character as the main focus. ChatGPT also gives us two Hamlets. One is more passive (he's "tormented" and "consumed") and the other more active (at least he's brooding). One worries about revenge and existential angst, while the other is introspective and--well, somehow consumed by tragedy, which leaves it unclear whether he's part of the tragedy or just preoccupied with it. 

You may think I'm being picky in ways that only an English teacher could be, but word choice matters and these sentences are not just two ways of saying the same thing, but are two different statements. They represent two different thoughts--well, would represent two different thoughts if a thinking being had generated them. 

AI is not reliable. You get a different answer every time.
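The difference above is the difference between a deterministic function and sampling from a probability distribution. This toy sketch is not a real language model--the "chatbot" here just picks randomly from a canned list--but it illustrates why identical prompts produce different outputs:

```python
import random

def calculator(a, b):
    # Deterministic: the same inputs always produce the same output.
    return a * b

def toy_chatbot(prompt):
    # Stochastic sketch: a real LLM samples each next token from a
    # probability distribution, so repeated runs diverge. Here we
    # fake that with a random choice among canned adjectives.
    adjectives = ["brooding", "tormented", "melancholy"]
    return f"Hamlet is {random.choice(adjectives)}."

# The calculator answers identically on every run...
results = {calculator(3, 12) for _ in range(1000)}   # always {36}

# ...while the sampled outputs vary from run to run.
outputs = {toy_chatbot("compare Hamlet to a cartoon dog") for _ in range(1000)}
```

Run it yourself: `results` stays a single value, while `outputs` collects multiple different sentences.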

Calculators are also accurate. 3 x 12 is, in fact, 36. AI presents incorrect information, often. Let's not call these errors "hallucination," because that word anthropomorphizes the algorithm, suggesting it has human-ish perceptions that have somehow been tricked. It produces incorrect information through exactly the same process that it produces accurate information; if you want to say its errors are hallucinations, you should acknowledge that it hallucinates 100% of the time. 

Calculators work out matters of fact, and the manufacturer's biases are not a factor. Even if the calculator was manufactured by folks who believe that odd numbers are way cooler than even ones, the calculator will still compute that 3 x 12 equals 36. 

But AI deals with many matters that are not factual at all. Chatbots have repeatedly demonstrated a tendency to veer off into wildly racist or misogynist output--and that's just the obvious stuff. AI can be programmed to present any bias its operators care to feed into it. And yet we are encouraged to think of chatbots as objective arbiters of Truth even when there is every reason to believe they are stuffed full of human biases. 

A calculator saves you the trouble of performing operations--operations that could be performed by any other calculator or any human being with the necessary operational knowledge. A chatbot saves you the trouble of thinking, of figuring things out.

A chatbot is not a calculator. There may be valid arguments for AI in the classroom, but this is not one of them.