Yes, this much salt
After five years and an investment of around $2.5 billion, Uber’s effort to build a self-driving car has produced this: a car that can’t drive more than half a mile without encountering a problem.
We're talking $2.5 billion-with-a-B spent, with nothing usable to show for it. Unfortunate for a program Uber has deemed "key to its path to profitability." Meanwhile, corporations gotta corporate-- a "self-driving" Uber killed a pedestrian in Tempe, Arizona back in 2018, and prosecutors have just decided that while Uber itself is off the hook, the "safety driver" will be charged with negligent homicide. She made the not-very-bright assumption that the car could do what its backers said it could do.
Meanwhile, Microsoft has partnered with OpenAI, the folks whose GPT-3 language emulator program is giving everyone except actual English speakers chills of excitement. Not everyone is delighted, but Microsoft seems to think this exclusive license will provide an "incredible opportunity" to expand their Azure platform, "democratize" AI technology, and pump up their AI at Scale initiative. There's a huge amount of hubris here; not only do they assert that the whole grand vision will start--start--with teaching computers human language, but they apparently believe they know how humans learn language-- it's "by understanding semantic meanings of words and how these words relate to other words to form sentences."
Who knew? Thinking, ideas, organization, even paragraphs and whole books-- just a waste of time. All that time I wasted as a teacher, when that's all there is to it. And hey-- Microsoft claims to have already come up with AI that reads a document (well, a Wikipedia article) and answers questions as well as a human-- did it two years ago, in fact.
And yet, here in the real world, AI still doesn't have any ability with language beyond the superficial, because computers--even the "AI" ones-- don't understand anything. They simply respond to surface patterns, which is why there are a dozen posts on this blog about how badly computers fail at simple read-and-assess tasks for human writing (here's the most recent, which, oddly enough, involves software semi-funded by Bill Gates-- and it sucks).
AI at Scale repeats a time-honored bit of computer puffery when talking about the shiny future, saying "that future is a lot closer than you might think." That's a lot of wiggly weasel-wording in a short phrase, and it remains the AI world's mantrariffic euphemism for "we don't have this figured out yet, but boy, just any day now, or maybe shortly after that, it will be awesome."
AI is still just a bunch of algorithms backed up with an immense capacity and infinite patience for cracking patterns, and whether it's city traffic or a simple paragraph, it's still not enough. Remember-- friends don't let friends fall for ed tech AI marketing nonsense.