Monday, June 23, 2025
PA: Cyber Charters as District Killers
Sunday, June 22, 2025
ICYMI: Pride Edition (6/22)
This weekend my little under-50K county hosted its second annual Pride in the Park event, and it was a lovely day for it. Plenty of friends, many fun booths, some good food, live music--everything necessary for a fun park festival. A really nice way to get the summer under way.
The Institute's mobile office (aka my aging laptop) self-obliterated about a week ago, so purchasing and setting up the replacement has been sucking up time here. You really forget just how many apps and passwords and bits and pieces you have loaded into a machine until you have to replace them all. Meanwhile, I am really trying to keep my resolve to prioritize writing the book over posting and other ancillary activities, but sometimes the world makes it really hard.
A reminder that if you are reading on the original mother ship, there's a whole list of links to excellent writing about education. Now here's the list for the week.
Broad network of anti-student-inclusion groups impacts public education

Friday, June 20, 2025
Should AI Make Students Care?
Over the years I have disagreed with pretty much everything that Thomas Arnett and the Christensen Institute have had to say about education (you can use the search function for the main blog to see), but Arnett's recent piece has some points worth thinking about.
Arnett caught my attention with the headline-- "AI can personalize learning. It can’t make students care." He starts with David Yeager's book 10 to 25: The Science of Motivating Young People.
Yeager challenges the prevailing view that adolescents’ seemingly irrational choices—like taking risks, ignoring consequences, or prioritizing peer approval over academics—result from underdeveloped brains. Instead, he offers a more generous—and frankly more illuminating—framing: adolescents are evolutionarily wired to seek status and respect.
As someone who worked with teenagers for 39 years, the second half of Yeager's thesis feels true. I'd argue that both ideas can be true at once-- teens want status and respect and their underdeveloped brains lead them to seek those things in dopey ways. But Arnett uses the status and respect framing to lead us down an interesting path.
[T]he key to unlocking students’ motivation, especially in adolescence, is helping them see that they have value—that they are valued by the people they care about and that they are meaningful contributors to the groups where they seek belonging. That realization has implications not just for how we understand student engagement, but for how we design schools…and why AI alone can’t get us where we need to go.
This leads to a couple of other points worth looking at.
"Motivation is social, not just internal." In other words, grit and growth mindset and positive self-image all matter, but teens are particularly motivated by how they are seen by others, particularly peers. Likewise, Arnett argues that it's a myth that self-directed learning is just for a handful of smarty-pants autodidacts. He uses Bill Gates and Mark Zuckerberg as examples, which is interesting as they are both excellent examples of really dumb smart people, so maybe autodidacting isn't all it's cracked up to be. But his point is that most students are autodidacts-- just about things like anime and Taylor Swift. And boy does that resonate (I have a couple of self-taught Pokemon scholars right here). I'll note that all these examples point to autodidacting that results in a fairly narrow band of learning, but let's let that go for now.
Arnett follows this path to an observation about why schools are often motivational dead zones:
The problem is that school content often lacks any social payoff. It doesn’t help them feel valued or earn respect in the social contexts they care about. And so, understandably, they disengage.
And this
Schools typically offer only a few narrow paths to earn status and respect: academics, athletics, and sometimes leadership roles like Associated Student Body (ASB) or student council. If you happen to be good at one of those, great—you’re in the game. But if you’re not? You’re mainly on the sidelines.
Students want to be seen, and based on my years in the classroom, I would underline that a zillion times.
The AI crew's fantasy is that students sitting in front of a screen will be motivated because A) the adaptive technology will hit them with exactly the right material for the student and B) shiny! Arnett explains that any dreams of AI-aided motivation are doomed to failure.
[Image: AI won't fix this]
Arnett's explanation is not exactly where I expected we were headed. Human respect is scarce, he argues, because humans only have so much time and attention to parcel out, and so it's valuable. AI has infinite attention resources and can be programmed to be always there and always supportive. Arnett argues that makes its feedback worthless in terms of status and respect.
I'm not sure we have to think that hard about it. Teens want status and respect, especially from their peers. The bot running their screen is neither a peer nor even an actual human. It cannot confer status or respect on the student, nor is it part of the larger social network of peers.
Arnett argues that this might explain the 5% problem-- the software that works for a few students, in part because 95% of students do not use the software as recommended. Because why would they? The novelty wears off quickly, and truly, entertainment apps don't do much better. I don't know what the industry figures say, but my anecdotal observation was that a new app went from "Have you seen this cool thing!" to "That old thing? I haven't used it in a while" in less than a month, tops.
What keeps students coming back, I believe, isn’t just better software. It’s the social context around the learning. If students saw working hard in these programs as something that earned them status and respect—something that made them matter in the eyes of their peers, teachers, and parents—I think we’d see far more students using the software at levels that accelerate their achievement. Yet I suspect many teachers are disinclined to make software usage a major mechanism for conferring status and respect in their classrooms because encouraging more screen time doesn’t feel like real teaching.
From there, Arnett is back to the kind of baloney that I've criticized for years. He argues that increasing student motivation is super-important, and, okay, I expect the sun to rise in the East tomorrow. But he points to MacKenzie Price's Alpha School, the Texas-based scam that promises two-hour learning, and Khan Academy as examples of super-duper motivation, using their own companies' highly inflated results as proof. And he compares software to "high dosage tutoring," which isn't really a thing.
Arnett has always been an edtech booster, and he's working hard here to get the end of a fairly logical path to somehow provide hope for the AI edtech market.
But I think much of what he says here is valuable and valid-- that AI faces a major hurdle in classrooms because it offers no social relationship component, little opportunity to provide students with status or respect. Will folks come up with ways to use AI tools that have those dimensions? No doubt. But the heart of Arnett's argument is an explanation of one more reason that sitting a student in front of an AI-run screen is not a viable future for education.
Wednesday, June 18, 2025
AI, Facing the Dark, and Human Sparknotes
But I confess that I am not as worried about hallucinations as a lot of people — and, in fact, I think they are basically a skill issue that can be overcome by spending more time with the models. Especially if you use A.I. for work, I think part of your job is developing an intuition about where these tools are useful and not treating them as infallible. If you’re the first lawyer who cites a nonexistent case because of ChatGPT, that’s on ChatGPT. If you’re the 100th, that’s on you.
Intuition? I suppose if you lack actual knowledge, then intuition will have to do. But this will be a recurring theme-- AI's lack of expertise in a field can be compensated for by a human with expertise in that field. How does that shake out down the road when people don't have expertise because they have leaned on AI their whole lives? Hush, you crazy Luddite.
Newton says he uses LLMs to check for spelling, grammatical, and factual errors, and of course the first two aren't really AI jobs, but these days we just slap an AI label on everything a computer can do. Factual errors? Yikes. Roose says he likes AI for tasks where there's no right or wrong answer. They both like it for brainstorming. Also for searching documents, because AI is easier than Control-F? Mistakes? Well, you know, humans aren't perfect, either.
Roose notes that skeptics say that the bots are just predicting the next word in a sentence, that they aren't capable of creative thinking or reasoning, just a fancy autocomplete, and that all that will just turn this into a flash in the pan. Roose has neatly welded together two separate arguments: A) bots aren't actually thinking, just running word token prediction models, and B) AI will wash out soon. Those are not related. In fact, I think I'm not unusual in thinking that A is true, and B is to be hoped for, but unlikely. Anyway, Roose asks Newton to respond, and the response is basically, "Well, a lot of people are making a lot of money."
Roose and Newton are not complete dopey fanboys, and at one point Roose says something I sort of agree with:
I think there are real harms these systems are capable of and much bigger harms they will be capable of in the future. But I think addressing those harms requires having a clear view of the technology and what it can and can’t do. Sometimes when I hear people arguing about how A.I. systems are stupid and useless, it’s almost as if you had an antinuclear movement that didn’t admit fission was real — like, looking at a mushroom cloud over Los Alamos, and saying, “They’re just raising money, this is all hype.” Instead of, “Oh, my God, this thing could blow up the world.”
"Clear view of the technology" and "hype" are doing a lot of work here, and Roose and Newton fall into the mistake of straw-manning AI skeptics by conflating skeptics and deniers (a mistake Newton has made before and to which Ben Riley responded well).
The other widely quoted chunk of the discussion is this one from Roose:
The mental model I sometimes have of these chatbots is as a very smart assistant who has a dozen Ph.D.s but is also high on ketamine like 30 percent of the time. But also, the bar of 100 percent reliability is not the right one to aim for here: The base rate that we should be comparing with is not complete factuality but the comparable smart human given the same task.
But the bots don't have Ph.D.s, and I don't want to work with someone juiced up on ketamine, and if bots aren't any better than humans, why am I using them?
The article is entitled "Everyone Is Using AI for Everything," which at least captures the concerning state of affairs.
Take the re-emergence of disgraced author and professional asshat James Frey (the guy who was shamed by Oprah for his fake memoir) who just put an AI-created book on the Book of the Month list. If that seems like a problem, Frey explained why he was happy to let AI do most of his work back in 2023.
I have asked the AI to mimic my writing style so you, the reader, will not be able to tell what was written by me and what was generated by the AI. I am also not going to tell you or make any indication of what was written by me and what was generated by the AI. It was I, the writer, who decided what words were put on to the pages of this book, so despite the contributions of the AI, I still consider every word of this book to be mine. And I don’t care if you don’t.
And there's the other article in the NYT section, a piece about using NotebookLM, a bot designed to help writers. "AI Is Poised To Rewrite History," says editorial director Bill Wasik. He talks about how author Steven Johnson used the bot (which he had helped build) to sift through the research and generate story ideas. Muses Wasik:
Like most people who work with words for a living, I’ve watched the rise of large-language models with a combination of fascination and horror, and it makes my skin crawl to imagine one of them writing on my behalf. But there is, I confess, something seductive about the idea of letting A.I. read for me — considering how cruelly the internet-era explosion of digitized text now mocks nonfiction writers with access to more voluminous sources on any given subject than we can possibly process. This is true not just of present-day subjects but past ones as well: Any history buff knows that a few hours of searching online, amid the tens of millions of books digitized by Google, the endless trove of academic papers available on JSTOR, the newspaper databases that let you keyword-search hundreds of publications on any given day in history, can cough up months’ or even years’ worth of reading material. It’s impossible to read it all, but once you know it exists, it feels irresponsible not to read it.
What if you could entrust most of that reading to someone else … or something else?
On one level, I get it. I do a ton of reading. Did a ton of reading when I was teaching so that I could better represent the material. I do a ton of reading for the writing I do, and yes-- sometimes you tug on a string and a mountain falls in your lap and you despair of reading enough of it to get a picture of what's going on.
But, you know, working out is sweaty and painful. What if I could entrust most of that exercising to someone or something else? Keeping in touch with the far-flung members of my family is really hard and time-consuming. What if I could entrust most of that work to someone or something else? Preparing and eating food is time-consuming and not always fun. What if I could entrust most of that work to someone or something else?
Humaning is hard. Maybe I could just get some tech to human for me.
[Image: Any day now]
I know. It's not a simple issue. I wear glasses and, in fact, have plastic lenses inserted in my human eyeballs. I drive a car. I enjoy a variety of technological aids that help me do my humaning both personally and professionally. But there's a line somewhere, and some of these folks have uncritically sailed past it, cheerfully pursuing a future in which they can hand off so many tasks to the AI that they can... what? Settle down to a happy life as a compact, warm ball of flesh in a comfortable plasticine nest, lacking both cares and autonomy?
At what point do folks say, "No, you can't have that. That business belongs to me, a human."
But back to the specifics at hand.
I don't know how one separates the various parts of writing into categories like Okay If AI Cuts This Corner For Me and This Part Really Matters So That I Should Do It Myself (or, like Frey, simply decide that none of it is important except the part where you get to sign checks). Brainstorming, topic generation, research-- these are often targeted for techification, but why? I am often asked how I am able to write so much and so quickly, and part of my answer has always been "low standards," but it is also that I read so much that I have a ton of stuff constantly being churned over in my brain and my writing is just the result of a compulsion to process all that stuff into a written form.
That points to a major issue that Roose and Newton and Wasik all miss. Using the bot as a research assistant or first reader or brainstormer can only hope to be useful to a human who is already an expert. Steven Johnson can only use what his AI research bot hands him because he is expert enough to understand it. The notion that a human can use intuition to check the AI's work is a dodge-- what the human needs is actual expertise.
That may be fine for the moment, but what happens when firsthand experience and expertise are replaced by "I read an AI summary of some of that stuff"?
At least one of Wasik's subjects wrestles with the hypocrisy problem of an educator who tells students to avoid the plagiarism machine and then employs the same bots to help with scholarship. But I wish more were wrestling with the basic questions of what parts of writing and reading shouldn't be handed over to someone or something else.
In some ways, this is an old argument. I talked to my students about Cliffs Notes and, later, Sparknotes, and I always made two points. First, what you imagine as an objective judgment is not, and by using their work instead of your own brain, you are substituting their judgment for your own. Not only substituting the final product, but skipping your own mental muscle-building exercise. Second, you are cheating yourself of the experience of reading the work. It's like kissing your partner at the end of an excellent date-- if it's worth doing, it's worth doing yourself.
No doubt there are some experiences that aren't necessarily worth having (e.g. spending ten years scanning data about certain kinds of tumors). But I'd appreciate a little more thoughtfulness before we sign everyone up to use sparknotes for humaning.
Monday, June 16, 2025
What Do We Do Now?
When things get wonky in the country, teachers invariably find themselves driven back to the question, "What are we supposed to do in times like these?" How do we teach students when the atmosphere is filled with so many problematic ideas and impulses (including, it has to be noted, in their homes)?
We struggle with the question as citizens. How do we navigate contentious and toxic times? But teaching adds a whole other layer. If the work is to help students figure out how to grow into their best selves and understand how to be fully human in the world--well, how does a context like the present affect the work?
When there is violence and hatred, when the discourse is soaked in bullshit and falsehood and stuff that is being spun so hard that it generates more heat than light, how does a teacher run a classroom? Stick to just the facts (whatever they are this week)? Seek to liberate students-- and does that mean teach them about tough political ideas or teach them how to read and write on their own? Media literacy? 21st century skills? Critical thinking?
I think there's a guiding principle beyond and underneath the questions of content and methods. I watch one event after another (Los Angeles, Padilla, No Kings Day, the Minnesota murders) get blown up into something even worse than the badness it already embodies. I watch the Dems flail about trying to come up with a strategy for "winning" while the GOP gaslights endlessly, insisting that we aren't seeing what we are plainly seeing. We are soaked in media that is designed to alarm rather than inform. The outrage machine (which is wired up to the money machine) is goosed repeatedly.
What are we missing? Certainly honesty and certainly love and concern for fellow human beings. Certainly we've seen too much of the idea that some people really are worth more than others. And the hyperbolic bullshit is massive, epic, and numbing. But underneath it all, we're suffering from a destruction of trust. Much of it has been deliberate--one of the tools of authoritarianism is to break people's trust in experts, journalism, scientists--anything and everything except whatever Dear Leader says.
Distrust kills relationships. It short-circuits communication, because if you can't trust what a person is saying, then you haven't much to go on except your own ideas about what the person is up to. Distrust leads to overreactions, which feed the cycle. You say it's raining? Since I distrust you, I'll go ahead and say it's not raining at all. Then when I go outside and get wet, I look like a fool and the folks on your team have all the more reason to trust you and not me.
So all the navel-gazing and study and vivisecting of society is great and all, but what do we do? What do we actually do?
Trust more? I suppose we could try, but putting your trust in someone who is deliberately untrustworthy is foolish. My classroom rule was always to trust students until they gave me a solid reason I couldn't. Or maybe two or three. But deciding to put our trust in people who have proven untrustworthy dozens of times-- that's just asking for trouble.
So maybe earn trust. But you can't control whether or not someone chooses to trust you, and in fact as a public school teacher, there are a lot of folks delivering the message that you can't be trusted.
So maybe the north star has to be this-- act in a way that is deserving of trust. Honesty, integrity, respect, and a dedication to getting the material right-- those all come under the heading of principles that deserve trust in a teacher (or any other human being). They build trust in the organization, and as W. Edwards Deming pointed out at great length, an organization powered by trust is a healthy one and an organization without trust is in trouble.
It is easy to slide into the idea that ends can justify means, and therefore if those means involve sacrificing principles and thereby making yourself less trustworthy, that's okay. But we very rarely accomplish our ends, so we end up being defined by our means.
The thing about being trustworthy is that it allows for a broad range of beliefs and practices. But if you find that to pursue particular beliefs or practices you have to use lying and manipulation, if you have to drop integrity and respect, then maybe consider that these are ends not worth pursuing.
More than ever, students need teachers they can trust (whether they choose to or not), and of course many (if not most) students already have a trustworthy teacher in the classroom. But as teachers are buffeted about by various claims and demands and suggestions about how to respond to the country's current messiness, holding onto the idea of trust as an anchor may help. It may not seem like much, but in the long run, it is everything.
Sunday, June 15, 2025
ICYMI: Kingless Edition (6/15)
I hope your day yesterday was a good one, regardless of what you did with it. What times we live in.
I'll remind you this week that everyone can amplify. If you read it and think it's important, share it. Also, subscribe to the blog, newsletter, or whatever. Bigger numbers mean greater visibility. And it doesn't hurt to throw in a little money for those who depend on their writing to help put bread on the table. Clicking and liking and sharing are not quite up there with getting actively involved, but they can provide the information and motivation that get folks out there.
So here's what we've got this week.
New data confirms NC school voucher expansion disproportionately benefits wealthy private school families

Gosh, what a surprise. North Carolina school vouchers are not a rescue for the poor, but a handout for the wealthy. Kris Nordstrom explains the findings.
12News I-Team finds Arizona's $1 billion voucher experiment hurting high-performing public districts and charter schools

A news team discovers that besides subsidizing wealthy private school patrons, Arizona's voucher program helped students "escape" top-rated public schools.
Trump and Republicans Want Taxpayers to Fund Their Pet Project: Private Schools

Jeff Bryant reports for Our Schools on the GOP goal of taxpayer-funded private schools.
Audrey Watters hits it again.
We have bent education – its budgets, its practices – to meet the demands of an industry, one that has neatly profited from the neoliberal push to diminish and now utterly dismantle public funding.

Some Thoughts about Science Education Reforms in the Past Century
Monday, June 9, 2025
Another Bad AI Classroom Guide
SREB was formed in 1948 by governors and legislators; it now involves 16 states and is based in Atlanta. Although it involves legislators from each of the states, some appointed by the governor, it is a non-partisan, nonprofit organization. In 2019 they handled about $18 million in revenue. In 2021, they received a $410K grant from the Gates Foundation. Back in 2022, SREB was a cheerful sock puppet for folks who really wanted to torpedo tenure and teacher pay in North Carolina.
Pillar #1
Use AI-infused tools to develop more cognitively demanding tasks that increase student engagement with creative problem-solving and innovative thinking.
AI draws on a broad spectrum of knowledge and has the power to analyze a wide range of resources not typically available in classrooms.
Use AI to streamline teacher administrative and planning work.
Use AI to support personalized learning.
Develop students as ethical and proficient AI users.
AI can also help organize thoughts and ideas into a coherent outline. AI can recommend logical sequences and suggest sections or headings to include by analyzing the key points a student wants to cover. AI can also offer templates, making it easier for students to create well-structured and focused outlines.
These are all things the writer should be doing. Why the guide thinks using AI to skip the "planning stages" is ethical, but using it in any other stages is not, is a mystery to me.
Students also need to develop "critical media literacy" because the AI is going to crank out well-polished turds, and it's the student's job to spot them. "Our product helps dress you, but sometimes it will punch you in the face. We are not going to fix it. It is your job to learn how to duck."
Cross-disciplinary learning-- use the AI in every class, for different stuff! Also, form a student-led AI ethics committee to help address concerns about students substituting AI for their own thinking.
Concerns? Bias, again. Data security-- which is, incidentally, also the teacher's responsibility. AI research might have ethical implications. Students also might be tempted to cheat-- the solution is for teachers to emphasize integrity. You know, just in case the subject of cheating and integrity has never ever come up in your classroom before. Deepfakes and hallucinations damage the trustworthiness of information, and that's why we are calling for safeguards, restrictions, and solutions from the industry. Ha! Just kidding. Teachers should emphasize that these are bad, and students should watch out for them.
Appendix
A couple of charts showing aptitudes and knowledge needed by teachers and admins. I'm not going to go through all of this. A typical example would be the "knowledge" item-- "Understand AI's potential and what it is and is not" and the is and is not part is absolutely important, and the guide absolutely avoids actually addressing what AI is and is not. That is a basic feature of this guide--it's not just that it doesn't give useful answers, but it fails to ask useful questions.
It wraps up with the Hess Cognitive Rigor Matrix. Whoopee. It's all just one more example of bad guidance for teachers, but good marketing for the techbros.
Sunday, June 8, 2025
ICYMI: Birthday Board Edition (6/8)
This week the Board of Directors here at the Institute celebrated their birthday. This involved some extended book store time and a day at Waldameer Park in Erie, an old amusement park that the Chief Marital Officer and I had not visited in many years. The board was both delighted and exhausted, and I got enough steps in that I believe I can just sit for the upcoming week. That's how that works, right?
Have some reading.
Diabolus Ex Machina

Exposed: University of Michigan Hired Undercover Spies to Target Students
Thursday, June 5, 2025
NM: Stride Caught Misbehaving Yet Again
One of its first big investors was Michael Milken. That investment came a decade after he pled guilty to six felonies in the “biggest fraud case in the securities industry,” ending his reign as the “junk bond king.” Milken was sentenced to ten years, served two, and was barred for life from the securities industry. In 1996, he had established Knowledge Universe, an organization he created with his brother Lowell and Larry Ellison, who both kicked in money for K12.
Packard was himself sued for misleading investors with overly positive public statements, and then selling 43% of his own K12 stock ahead of a bad news-fueled stock dip. Shortly thereafter, in 2014, he stepped down from leading K12 to start a new enterprise.
In 2016 K12 got into yet another round of trouble in California for lying about student enrollment, resulting in a $165 million settlement with then Attorney General Kamala Harris. K12 was repeatedly dropped in some states and cities for poor performance.
In 2020, they landed a big contract in Miami-Dade county (after a big lucrative contribution to an organization run by the superintendent); subsequently Wired magazine wrote a story about their "epic series of tech errors." K12 successfully defended itself from a lawsuit in Virginia based on charges they had greatly overstated their technological capabilities by arguing that such claims were simply advertising “puffery.”
In November of 2021, K12 announced that it would rebrand itself as Stride.
The New York Times had quoted Packard as calling lobbying a “core competency” of the company, and the company has spread plenty of money around doing just that. And despite all its troubles, Stride was still beloved on Wall Street for its ability to make money.
Q: Mr. Rhyu, are you a man of your word?
Rhyu: I’m not sure I understand that question.
Q: Do you do what you say you are going to do, sir?
Rhyu: Under what circumstances?
Q: Do you do what you say you are going to do, Mr. Rhyu?
Rhyu: That’s such a broad question. It’s hard for me to answer.
Is it hard to answer? Because I feel as if it's really easy to answer. It's one thing to offer the "correct" answer and not mean it, but it's a whole other level to pretend that you can't imagine what the correct answer might be.
So what happened in New Mexico?
Gallup-McKinley County Schools includes 4,957 square miles of territory, including some reservations. There are 12,518 students enrolled. 48% of the children in the district live below the poverty line.
So the district hired Stride to provide an online program, and that was not going well. According to the district's press release, the data was looking ugly:
* Graduation rates in GMCS's Stride-managed online program plunged from 55.79% in 2022 to just 27.67% in 2024.
* Student turnover reached an alarming 30%.
* New Mexico state math proficiency scores for Stride students dropped dramatically, falling to just 5.6%.
* Ghost enrollments and a lack of individualized instruction further compromised student learning.
At the special May 16 board meeting to terminate the contract, the board was feeling pretty cranky.
The district said that the company is failing to meet requirements outlined in their contract. “This is something we’ve literally been working on since the beginning of the year with Stride, and we just finally had a belly full of it and we’re ready to make a change,” said Chris Mortensen, President of the Gallup-McKinley Schools Board of Education.
The board voted unanimously not just to end the contract, but to seek damages. Stride filed a motion for a restraining order to keep the board from firing them. The court said no.
Mortenson has had plenty to say about the situation. From the district's press release:
GMCS School Board President Chris Mortensen stated, "Our students deserve educational providers that prioritize their academic success, not corporate profit margins. Putting profits above kids was damaging to our students, and we refuse to be complicit in that failure any longer."
Stride CEO James Rhyu has admitted to failing to meet New Mexico's legal requirements for teacher-student ratios, an issue that GMCS suspects was not isolated. "We have reason to believe that Stride has raised student-teacher ratios not just in New Mexico but nationwide," said Mortensen. "If true, this could have inflated Stride's annual profit margins by hundreds of millions of dollars. That would mean corporate revenues and stock prices benefited at the expense of students and in some cases, in defiance of the law."
"Gallup-McKinley County Schools students were used to prop up Stride's bottom line," said Mortensen. "This district, like many others, trusted Stride to deliver education. Instead, we got negligence cloaked in corporate branding."
The district is looking for another online school provider, and I wish them luck with that. Parents in the far-flung district liked the online option and want something to replace Stride. But finding a cyber-school company that will provide the oversight, transparency, and accountability that GMCS wants (not to mention the non-profiteering) is likely to be a challenge. Because if the 800-pound gorilla of cyber-school has to cheat to make a buck in your district, who else is going to do any better?
Of course, that's the Stride business model, so maybe there's hope. Maybe. Stride, for its part, can be expected to just keep grinding away, unchastened, searching for the next district that hasn't done enough homework to see through Stride's sales pitch.
Sunday, June 1, 2025
The Trouble With Public-Private Partnerships
From the beginning, we were clear that we weren't just looking to provide funding, we were looking to be a true partner sitting side by side with the McKeesport team to reimagine how the elementary school experience could be approached in a holistic way – one that serves the whole child, their family and the community.
Unfortunately, the current school board and district leadership did not uphold the written partnership agreement we had in place. When we sought a path forward, the school board president made it clear that there was ‘no page to get on.’ That response left no room for continued collaboration.
I could go digging for the nature of the disagreement, but I'm not sure I actually care who's right or wrong here because what jumps out at me is that the corporate partner yanked funding because they didn't approve of the choices made by district leaders and were disappointed that they, the private corporation whose primary business is selling sports equipment, did not have more say in how the school district was run.
I don't care if the Dick's folks are the rightest right people in the history of being right-- I am extremely uncomfortable with the notion that a private company should be able to buy a controlling interest in a school district. Even if Dick's is on the side of the angels here, this creates a system of control that is too easy to corrupt and which disenfranchises the voters of the district.
On top of that, Dick's apparently told Channel 11 that "it remains open to the possibility of future partnership opportunities under new leadership." In other words, their money is now an election issue in McKeesport. Will they sponsor ads saying, "Vote for this person and we'll give your school some more money"? Because that seems like a bad thing.
Most of the press coverage includes folks with all sorts of connections to the district saying that it was great that Dick's invested in the district and the money was a help and it sucks that now it's gone, and I certainly get that. But Dick's is disappointed that they didn't get to redesign "the elementary school experience," and that's just bananas.
Again, I haven't looked into the specifics. Maybe Dick's ideas are appallingly terrible, or just the kind of slop we often get from well-meaning amateurs. Maybe Dick's ideas were fabulous and the board and superintendent are dopes. The cure for that is not to have a private corporation come in and buy a say in how the district is run. The cure is to elect board members who aren't dopes. This kind of "partnership" is no way to run a public school system.
ICYMI: Summer Launch Edition (6/1)
It comes at different times in different areas, but for the Board of Directors and the Chief Marital Officer, summer vacation starts this week. It's a curious custom (which is not related to setting the young'uns free to work on the farm) but some traditions are hard to fight.
Here we go with this week's reading. Remember to share and amplify.
We Got a Date
Declining Dems for Education Reform (DFER) Seeks Salvation in MAGA Regime
Doctored Doom