I apologize for the language, Mom. But some days.
I'm not sure anybody can pick the absolute worst AI company; it's like trying to pick the worst toxic waste dump. But this one is certainly a candidate. Here's the pitch for Companion's Einstein:
He logs into Canvas every day, watches lectures, reads essays, writes papers, participates in discussions, and submits your homework — automatically.
What the actual hell. The pitch is broken down into areas, so you know that Einstein can log into Canvas, watch videos, cover every subject, and work while you sleep-- everything. In the FAQ section, it promises that your professor will never know, and that it will in fact get better at meeting the course expectations (well, you know, except the expectation that a human student will learn by doing the work). The FAQ even answers the question, "What if I want to do an assignment myself?" You can tell it to skip that assignment, though you can of course set the bot to auto-submit everything.
But hey-- as the website says:
Stop stressing. Start acing.
Einstein does the busywork so you don't have to.
Today's most powerful AI systems can reason through PhD-level problems, write production code, and generate entire applications from a sentence. They are, by any meaningful measure, brilliant.
Narrator's voice: They cannot do those things.
Yet every conversation starts from zero. Bad advice carries no cost, misunderstood values get forgotten by next session, and a decision that derails your month goes unnoticed and unlearned. Nothing compounds—including the responsibility.
The point seems to be that Companion won't forget you, like those other goldfish-powered bots (though ChatGPT is among those that are now supposed to remember your other "interactions" to better mine your data-- er, to better meet your needs). But it just gets more and more bizarre--
Oh for crying out loud. I suppose an AI can be "bound to a human," though "bought by a human" seems more accurate. But "loyal"? Nope. Able to figure out a human's long-term interests and align itself to them? Bullshit. How do I know it's bullshit? Because humans can't figure out their own long-term best interests. How else do I know? Because it would not be in the long-term best interests of a human to ditch an entire course and dodge an education by having a bot fake it!
But hey-- the company promises that "your companion knows what you're working toward and how you think." This is also bullshit, because no program knows how any human thinks. It does not even "know" what "thinking" is. The pitch here is also that your companion has a "private virtual computer," so that anything a human with a computer can do, your companion can do. I don't even know what to make of that, other than it may be the most effort yet put into trying to anthropomorphize a computer program. "No, this bot isn't a computer! It's a little tiny person, sitting inside the computer, working on its own tiny little computer." I mean, damn-- how do I know that my companion isn't even logging onto its virtual computer, but has instead hired a companion of its own to do the work? I'm envisioning a series of ever-smaller digital Russian nesting dolls, each sitting at tinier and tinier computer desks.
An extension of you so you can be more of you.
Human morality rarely begins as an abstract love for all of humanity. It begins with someone specific. Your child. Your partner. Your team. Your friend. Through concrete responsibility, care expands to the rest of the world.
This may, in fact, be how the sociopaths of Silicon Valley go about developing a moral sense, though let me suggest that if loving other humans doesn't start until you have a partner and a child, you may be a very troubled human being. This goes right up there with the Sam Altman quote circulating today:
People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.
But Companion isn't just talking about the origins of morality for humans, because "AI should develop the same way." Here's the wrap-up:
A companion shaped by one human life over time develops something closer to genuine responsibility. It learns your boundaries by crossing them and being corrected, your values by watching which suggestions you take and which you ignore, what trust means by earning yours slowly over months.
We believe an AI that cares for one human life is more likely to care for humanity itself.
So while you may think that Companion Inc is just offering an AI bot that can take classes and cheat effectively for you, it is actually a program that will save the entire damned human race by teaching the bots to care about us. Letting Einstein take your class, do your homework, and write your papers will lead it to love you and care for you, and through you, all of humanity. That sounds wonderful, and if we could somehow get the tech overlords who design these bots to care about human beings half as much, the world would be a better place.
I came across Einstein thanks to a former student who is now a college English professor at one of those places where administration thinks teachers should Get With The Program because AI Is The Future and students are going to use this stuff anyway, so maybe take a few minutes to teach them about Using AI Ethically. Which is bullshit on bullshit. Look at this product, AI-friendly administrator, and tell me how it should be used ethically, because ethical use of Einstein strikes me as absolutely impossible. Unless, I guess, you believe that using Einstein will teach our Robot Overlords to love us and care for us in a deeply moral way. But I have my doubts that even a college administrator could wade through that much bullshit.