Saturday, March 21, 2020

AI Is Not Going To Drive Trucks (Or Your Classroom)

From Jalopnik, we get this report from the world of self-driving trucks. Mark it the gazillionth cautionary tale for folks who believe that AI will be able to take over critical human functions any time soon.

The article takes a look at Starsky Robotics, a company that was in the business of producing unmanned semis for public highways. Now it's just in the business of shutting down. The co-founder, Stefan Seltz-Axmacher, gave an interview to Automotive News that's behind a paywall, but Jalopnik shares some of the highlights.

Not as smart as it looks.
Starsky is closing up shop because they can't get backers, and they lost their backers because they insisted on putting an emphasis on safety rather than on cool new features:

But a problem emerged: that safety focus didn’t excite investors. Venture capitalists, Seltz-Axmacher said, had trouble grasping why the company expended massive resources preparing, validating and vetting his system, then preparing a backup system, before the initial unmanned test run. That work essentially didn’t matter when he went in search of more funding.

“There’s not a lot of Silicon Valley companies that have shipped safety-critical products,” he said. “They measured progress on interesting features.”

Seltz-Axmacher also notes that faith in self-driving vehicles is waning in the venture capitalist world, mostly because they've spent a lot of money and self-driving cars still aren't right around the corner, with a bundle of problems still unsolved. Seltz-Axmacher points at "edge cases," the rare-but-significant events that can happen in driving, like a deer or a small child darting into the road. And "teaching" the AI about these events gets exponentially harder and more expensive the rarer they are.

This is the flip side of our old issue, standardization. To make a measurement algorithm work, you have to set up a system that excludes all the edge events; one simple way to do that is to use multiple choice test questions. This is why AI still hasn't a hope in hell of actually assessing writing in any meaningful way-- because a written essay will often include edge elements. In fact, the better essays are better precisely because the writer has included an edge element, a piece of something that falls outside the boundaries of basic expectations.
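To make that concrete, here's a tiny sketch of my own (nothing from the article, and the answer key is invented) of what assessment looks like once standardization has squeezed out the edge events: with a fixed answer key, "scoring" is nothing but an exact-match lookup. An essay has no such key, which is exactly where those edge elements live.

```python
# A toy illustration (mine, not anyone's actual product): once every response
# is forced into a fixed answer space, "assessment" is just an exact-match
# lookup against a key. Nothing here understands the content of an answer.

ANSWER_KEY = {"Q1": "B", "Q2": "D", "Q3": "A"}  # hypothetical test

def score_multiple_choice(responses):
    # Standardization at work: anything that isn't a letter matching the key
    # simply scores zero, so edge cases can't even occur.
    return sum(1 for q, choice in responses.items() if ANSWER_KEY.get(q) == choice)

print(score_multiple_choice({"Q1": "B", "Q2": "C", "Q3": "A"}))  # prints 2

# There is no equivalent one-liner for "score this essay," because the thing
# that makes an essay good is precisely what falls outside any fixed key.
```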

Self-driving vehicles can't use a standardization solution, can't require all roads and drivers to follow the exact same set of rules, and so they have no choice but to find a way to "teach" the AI how to deal with the messy reality of human behavior on US streets. Well, they do have a choice-- they can just not do it, but that keeps ending in the deaths of bystanders.

Seltz-Axmacher goes into detail in a blog post which, again, offers parallels to the world of education. For instance, his painful realization that investors like safety far less than they like cool features. The equivalent in education is the desire to promote cool features, features that will help your product stand out in a crowded marketplace, over whether or not the product actually works, and works all the time.

He gets into other problems with the "AV (autonomous vehicle) space," but I'm particularly struck by this:

The biggest, however, is that supervised machine learning doesn’t live up to the hype. It isn’t actual artificial intelligence akin to C-3PO, it’s a sophisticated pattern-matching tool.

AI isn't really AI. It's just pattern recognition algorithms, which, yes, is exactly the kind of fake AI that ed tech folks keep trying to sell to schools.
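For the curious, here's a toy sketch of my own (not Starsky's system or anyone else's) of what "pattern matching" means in practice: a nearest-neighbor "driver" that has memorized a handful of situations and maps anything new onto the closest one it has seen. Hand it a genuine edge case and it confidently does the wrong thing, because matching isn't understanding.

```python
# A minimal sketch (my illustration, with made-up training data) of supervised
# learning as pattern matching: a 1-nearest-neighbor "driver" that has only
# seen a few situations. It doesn't reason about a novel edge case; it just
# matches it to whatever stored pattern is closest.

import math

# (distance_to_object_m, object_speed_mps) -> action seen in training data
TRAINING = [
    ((80.0, 0.0), "maintain_speed"),   # parked car far ahead
    ((15.0, 0.0), "brake"),            # stopped traffic close ahead
    ((40.0, 25.0), "maintain_speed"),  # vehicle moving with traffic
]

def act(situation):
    # Pure pattern matching: pick the action attached to the nearest known case.
    nearest = min(TRAINING, key=lambda ex: math.dist(ex[0], situation))
    return nearest[1]

# An "edge case": a deer 40 m away bolting across the road at 8 m/s. The
# nearest stored pattern is the vehicle moving with traffic, so the matcher
# cheerfully answers "maintain_speed" -- no understanding involved.
print(act((40.0, 8.0)))
```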

But wait-- didn't I read about Amazon having a whole fleet of driverless trucks doing its deliveries? Well, yes and no. Amazon is investing in self-driving vehicles, but its trucks are on the I-10; they are trained to handle one specific chunk of road. They have taken the standardization route.

Seltz-Axmacher notes one other thing about his former industry-- it's loaded with bullshit to the point that people expect it. Okay, I'm paraphrasing a bit. But he sadly notes that presenting real, modest achievements in a sector filled with people trumpeting dramatic features, including plenty that they can't actually deliver, is hard. That tracks. If there's one thing that the ed tech industry does consistently, it's over-promise and under-deliver.

This kind of story is important to file away some place handy for one other reason-- because sooner or later somebody, maybe your brother-in-law or a thinky tank guy or a secretary of education, is going to say, "Gee, AI is revolutionizing all these other sectors, why not education?" The answer, in part, is that AI isn't doing nearly as much revolutionizing of anything as you keep hearing it is. Always--always--examine the claims. Real life is complex and messy and filled with "edge" events; it still takes a human to navigate.

3 comments:

  1. I've been peeved for quite a while with the ever-present use of the term "AI" to refer to cleverly-programmed algorithms that aren't actually intelligent in any proper sense. The entire tech industry "achieved" AI simply by redefining the term, not by actually solving the problems of making a computer system think. So every time I hear AI used by anyone who isn't a science fiction writer, I know they're a dupe or a snake-oil salesman.

  2. It's an "Expert System" -- and to call it that you have to narrowly define the parameters (e.g. that one stretch of the I-10).

    Computers ARE really good at pattern matching (e.g. Deep Blue), but the controls have to be legendary.

  3. Great post, Peter! And the analogy to testing is profound! Well done!!!!

    I suspect that what will happen is more standardization of roadways to accommodate the dumb machines, and that will take quite a long time.
