The Invisible Hand: How Dark Money Is Inventing Prestige for Right-Wing Academics
Sunday, February 2, 2025
ICYMI: Groundhog Day Edition (2/2)
Friday, January 31, 2025
FL: Blaming NAEP
However, upon receipt of Florida’s 2024 National Assessment of Educational Progress scores, it is evident that the Biden Department of Education’s administration of what was the previously gold standard exam has major flaws in methodology and calls into question the validity of the results as they pertain to the educational landscape in 2024. This is why I sent incoming Secretary of Education Linda McMahon a letter outlining concerns and providing solutions so that together, we can make NAEP great again.
See, the problem is that over half a million Florida students are homeschooling or using their taxpayer-funded voucher to attend private school, so NAEP is only testing the students left behind in public school (this, somehow, is Biden's fault). Since 2022, Florida has gone from 165,000 voucher students to 524,000, and leaving them out has hurt the state score.
I'm not sure that Diaz fully grasps that he has argued here, in print, that Florida has made its public school system measurably worse.
He also argues that the scores are too heavily weighted toward urban, high-poverty, high-minority districts, which, again, serves as a sort of admission that the state hasn't served those districts well.
NAEP scores are used for a variety of poor purposes-- misNAEPery-- so I sympathize with Diaz. Who knows-- maybe he'll be able to interest McMahon in a little NAEP cookery. On the other hand, private and home schoolers may share some thoughts with him regarding how they feel about being compelled to take the federal Big Standardized Test.
Thursday, January 30, 2025
What Is School Choice Week About?
It's National School Choice Week, as you will have heard from every right-wing organization out there, including Congress and the White House. But why?
As Truthout reporters Alyssa Bowen, Ansev Demirhan, and Lisa Graves laid out in a recent article about National School Choice Week, this "school privatization PR stunt is a pet project" of some uber-rich folks like the Gleason and Koch families. It's a fine gig; Gleason heir Tracy Gleason pays herself over half a million bucks in salary just to run the foundation that runs this single week.
Donald Trump supported the week and the school choice shtick in his first go-round, and he's at it again. And if you've ever wondered why, exactly, folks on the far right are so pro-choice when it comes to education, Trump has done you the favor of providing an illuminating context. Because it's that context that explains much of the "why" behind "school choice."
The MAGA/Heritage Foundation vision of government is simple. Government should protect private property (from threats domestic and foreign) and it should support private enterprise and the free market (except when powerful private enterprises demand to be protected from the free market). Government should not be in the business of helping people or trying to make their lives better-- they should take care of that themselves. It very especially should not be in the business of trying to lift people above their proper station in life, particularly people who aren't straight christianist white men. Mind you, they have no objection to those people getting to a better place if they do it the Right Way (by their own bootstraps and following the rules laid out by the people at the top of the ladder).
Much of what the Trump regime is whining about points directly to their guiding principles. When they say that it's okay for immigrants to get to this country as long as they do it the right way, they're also explaining their rules for social mobility and a social safety net. It's okay for Those People to get food and health care and a house and supplemental income when they're thrown out of work--as long as they do it the right way. DEI is, for these folks, just another open border, allowing all sorts of people to get into spaces where they don't belong and have no legitimate right to be. People who are Right should get to make the rules, and people who are Wrong should not get to interfere with those who are Right.
In that context, is it any wonder that the same people who want to end social safety net programs and slam the door on DEI and stop the government from performing any sorts of functions outside of protecting property and enterprise--is it any wonder that these folks also want to dismantle public education? Since the days of Milton Friedman, it has been a far right dream to get government out of education.
Dismantle the system. Make everyone get their own kids an education, based on what best fits their proper place in society and what they are able to prove they deserve.
Except that people like the system, and "I don't want to pay to educate Those People's Children" is not a winning political message.
So don't call it dismantling. Call it freedom! Yes, your public school is falling down, but here's a voucher that you can use (at any school that will let you use it). If you'd like to send your kid to a really nice, expensive school, well, you shouldn't decide to be poor.
The Trumpian/Heritage vision of government seems to be a modern riff on feudalism, where the rich and powerful make the rules and clear away the Deep State, which seems best defined as folks who are inclined to follow the rule of law rather than the rule of "what I say goes," and where all the lower classes are forced to contribute to the church. A public education system aimed at providing a good education for all students, no matter the background, has no place in a feudal system.
Now granted-- school choice has collected an assortment of supporters, including people who really believe the free market will make schools better and even people who see choice as one tool to make the larger education system better. Plus, of course, opportunists who see a good chance to make a buck as well as christianists who really like the idea of making taxpayers help fund the church (which is what Those People would be doing if they weren't Wrong). But none of those people are driving the school choice bus.
The dismantlers have a whole long list, which we're seeing rolled out via executive order. Public education just happens to be on it, and "school choice" is the fig leaf they place over the dynamite they want to load around the public school's foundation.
Wednesday, January 29, 2025
PA: AI Cyber Charter Rejected--Hard
While a single deficiency would be grounds for denial, the Department has identified deficiencies in all five of the required criteria.
0 out of 5! That's a hard fail. Let's break it down.
Criterion 1: Unbound Academic has provided no evidence of sustainable support for the cyber charter school plan by teachers, parents or guardians, and students. The Applicant did not provide documentation or description of the curriculum framework which could have provided evidence that learning objectives and outcomes have been established for each course offering in the Application or during the November 7 Hearing. The Applicant also did not provide any information regarding the number of courses required for students, materials to be used, planned activities, or procedures for measurement of the objectives, nor did it adequately explain the amount of time required for students to be online in order to meet the course standards for offered grades.
During the November 7 Hearing, the Applicant shared that the teacher induction plan builds upon itself, and training would be based on an observed teacher’s needs, using assessment benchmarks along the way to determine future employability.
The Prices haven't hired or worked with teachers before--their private school uses AI and guides, supposedly. Looking at their application, I was a little fuzzy on whether they intend to hire actual certified teachers for Unbound. If that was the plan, there was no plan for onboarding them.
Criterion 4: Unbound Academic’s Application is non-compliant with requirements of Section 1747-A.
Artificial intelligence (AI) presents unique opportunities that educators across Pennsylvania are exploring through effective, safe, and ethical implementation. However, the artificial intelligence instructional model being proposed by this school is untested and fails to demonstrate how the tools, methods, and providers would ensure alignment to Pennsylvania academic standards. When questioned at the public November 7 Hearing, the Applicant stated that this model was used “in several private schools across Texas”, although the model has been used for Ukrainian refugees in Poland [both examples are other Price operations]. At the time of the November 7 Hearing, the Applicant had not been approved for a virtual charter school, so there is no data that supports the efficacy of this model.
Trump Ends "Book Ban Hoax"
Amidst all the other slashing and burning of the new regime, we find a press release from the U.S. Department of Education, "U.S. Department of Education Ends Biden’s Book Ban Hoax." Sure.
There are several things going on here, all worth noting.
On the surface, the action is mostly about dismissing 11 complaints filed with the Ed Department Office for Civil Rights, rejecting the notion that anyone can sue over the removal of certain books. If you're gay and your school district decides to eliminate every trace of other gay people from the district, it's an indefensible and hostile act, but is it a violation of your civil rights? That may be open to debate, but it barely scratches the surface of what's going on here.
The press release opens with a reference to "so-called 'book bans,'" underlining one of the book banners' main talking points: it's not really a ban because you can probably buy the book somewhere if you really want to, and unless every copy of the book in existence has been thrown into the sun and anyone who tries to reproduce it is jailed, it's not really a ban. By the book banners' definition of a book ban, no book has ever been banned.
But if instead of using a new definition of the word "ban," we stick with what native English speakers have generally understood the word to mean (a person in authority stands at the door and says "you can't bring that in here"), then book bans are what we have, from schools where the book has been barred from libraries and classrooms all the way up to Utah, where students are forbidden to bring even their own personal copy to school.
So, yes, these are book bans.
The announcement also covers the elimination of the "book ban coordinator" who was to handle all these various cases.
The release also repeatedly describes book bans as a process of removing "age-inappropriate books" or even "age-inappropriate materials."
That's a hell of a leap beyond the usual demand to remove a book because of "sexual content" or other Naughty Stuff. "Age-inappropriate" is a broad term that can be used to cover anything that authorities want to ban from schools. Anything you don't want children to hear about can be tagged "age-inappropriate."
All of this, of course, in the service of "the deeply rooted American principle that local control over public education best allows parents and teachers alike to assess the educational needs of their children and communities." Because the regime really believes in parental rights--unless those parents are LGBTQ, or have LGBTQ kids, or want their kids to learn about historic racism, or support diversity, equity, and inclusion, or oppose banning books from the school, or--well, they support only the parental right to agree with the administration. Otherwise, just hush. You don't need the right to disagree with the administration, just like your kid doesn't need the right to read anything that the People In Charge say they shouldn't read.
Tuesday, January 28, 2025
AI Is For The Ignorant
Well, here's a fun piece of research about AI and who is inclined to use it.
The title for this article in the Journal of Marketing-- "Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity"-- gives away the game, and the abstract tells us more than enough about what the research found.
You may think that familiarity with technology leads to more willingness to use it, but with AI, the relationship runs in the opposite direction.
Contrary to expectations revealed in four surveys, cross-country data and six additional studies find that people with lower AI literacy are typically more receptive to AI.
That linkage is explained simply enough. People who don't really understand what AI is or what it actually does "are more likely to perceive AI as magical and experience feelings of awe in the face of AI’s execution of tasks that seem to require uniquely human attributes."
The researchers--Stephanie Tully (USC Marshall School of Business), Chiara Longoni (Bocconi University), and Gil Appel (GW School of Business)--are all academics in the world of business and marketing, and while I wish they were using their power for Good here, that's not entirely the case.
Having determined that people with "lower AI literacy" are more likely to fork over money for AI products, they reach this conclusion:
These findings suggest that companies may benefit from shifting their marketing efforts and product development towards consumers with lower AI literacy. Additionally, efforts to demystify AI may inadvertently reduce its appeal, indicating that maintaining an aura of magic around AI could be beneficial for adoption.
To sell more of this non-magical product, make sure not to actually educate consumers. Emphasize the magic, and go after the low-information folks. Well, why not. It's a marketing approach that has worked in certain other areas of American life. In a piece about their own research, the authors suggest a tiny bit of nuance, but the idea is the same. If you show AI doing stuff that "only humans can do" without explaining too clearly how the illusion is created, you can successfully "develop and deploy" new AI-based products "without causing a loss of the awe that inspires many people to embrace this new technology." Gotta keep the customers just ignorant enough to make the sale.
And lots of AI fans are already on the case. Lord knows we've been subjected to an unending parade of lazy journalism of the "Wow! This computer can totally write limericks like a human" variety. For a recent example, Reid Hoffman, co-founder of LinkedIn, Microsoft board member, and early funder of OpenAI, unleashed a warm, fuzzy, magical woo-woo invocation of AI in the New York Times that is all magic and zero information.
Hoffman opens with an anecdote about someone asking ChatGPT "based on everything you know about me, draw a picture of what you think my current life looks like." This is Grade A magical AI puffery; ChatGPT does not "know" anything about you, nor does it have thoughts or an imagination to be used to create a visual image of your life. "Like any capable carnival mind reader," continues Hoffman, comparing computer software not just to a person, but to a magical person. And when ChatGPT gets something wrong, like putting a head of broccoli on your desk, Hoffman paints that "quirky charm" as a chance for the human to reflect and achieve a flash of epiphany.
But what Hoffman envisions is way more magical than that-- a world in which the AI knows you better than you know yourself, recording the details of your life and analyzing them for you.
Decades from now, as you try to remember exactly what sequence of events and life circumstances made you finally decide to go all-in on Bitcoin, your A.I. could develop an informed hypothesis based on a detailed record of your status updates, invites, DMs, and other potentially enduring ephemera that we’re often barely aware of as we create them, much less days, months or years after the fact.
When you’re trying to decide if it’s time to move to a new city, your A.I. will help you understand how your feelings about home have evolved through thousands of small moments — everything from frustrated tweets about your commute to subtle shifts in how often you’ve started clicking on job listings 100 miles away from your current residence.
The research trio suggested that the more AI imitates humanity, the better it sells to those low-information humans. Hoffman suggests that the AI can be more human than the user. But with science!
Do we lose something of our essential human nature if we start basing our decisions less on hunches, gut reactions, emotional immediacy, faulty mental shortcuts, fate, faith and mysticism? Or do we risk something even more fundamental by constraining or even dismissing our instinctive appetite for rationalism and enlightenment?
Software will make us more human than humans?
So imagine a world in which an A.I. knows your stress levels tend to drop more after playing World of Warcraft than after a walk in nature. Imagine a world in which an A.I. can analyze your reading patterns and alert you that you’re about to buy a book where there’s only a 10 percent chance you’ll get past Page 6.
Instead of functioning as a means of top-down compliance and control, A.I. can help us understand ourselves, act on our preferences and realize our aspirations.
I am reminded of Knewton, a big ed tech ball of whiz-bangery that was predicting it would collect so much information about students that it would be able to tell students what they should eat for breakfast on test day. It did not do that; instead it went out of business. Even though it did its very best to market itself via magic.
If I pretend that I think Hoffman's magical AI will ever exist, I still have other questions, not the least of which is why someone would listen to an AI saying "You should go play World of Warcraft" or "You won't be able to finish Ulysses" when people tend to ignore actual humans offering similar advice. And where do we land if Being Human is best demonstrated by software rather than actual humans? What would it do to humans to offload the business of managing and understanding their own lives?

We have a hint. Michael Gerlich (Head of Center for Strategic Corporate Foresight and Sustainability, SBS Swiss Business School) has published "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking"* and while there's a lot of scholaring going on here, the result is actually unsurprising.
Let's say you were really tired of walking everywhere, so you outsourced the walking to someone else, and you sat on the couch every waking hour. Can we predict what would happen to the muscles in your legs? Sure--when someone else bears the load, your own load-bearing members get weaker.
Gerlich finds the same holds true for outsourcing your thinking to AI. "The correlation between AI tool usage and critical thinking was found to be strongly negative." There are data and charts and academic talk, but the bottom line is that "cognitive offloading" damages critical thinking. That makes sense in several ways. Critical thinking is not a free-floating skill; you have to think about something, so content knowledge is necessary, and if you are using AI to know things and store your knowledge for you, your thinking isn't in play. Nor is it working when the AI writes topic sentences and spits out other work for you.
In the end, it's just like your high school English teacher told you-- if someone else does your homework for you, you won't learn anything.
You can sell the magic and try to preserve the mystery and maybe move a few more units of whatever AI widget you're marketing this week, but if you're selling something that people have to be ignorant to want, something whose whole purpose is to offload some human activity, then what are you doing? Freeing up more time for World of Warcraft?
If AI is going to be any use at all, it will not be because it hid behind a mask of faux-human magical baloney, or because it used an imitation of magic to capitalize on the ignorance of consumers, but because it can do something useful and be clear and honest about what it is actually, really doing.
*I found this article thanks to Audrey Watters