Monday, August 10, 2020

Why Isn't AI More Widely Used?

That's the question that Wired asked last month, and it's worth considering, because a truckload of ed tech folks are "predicting" (aka "marketing") a future in which ed tech is awash in shiny Artificial Intelligence features that read students' minds and develop instantaneous, perfectly personalized instructional materials. Why is it, do you suppose, that AI is being thrust at education even as private industry is slow to embrace it?

The article looks at a study of data from a 2018 US Census survey, which found that only 2.8% of companies had adopted any form of "machine learning," the magical AI process by which computers are supposed to be able to teach themselves. The big advanced-tech winner was touchscreens, which are considerably more user-friendly than AI, and even those only clocked in at 5.9%, so I suspect that schools are ahead of the game on that one. The total share of companies using any kind of AI (a category that included voice recognition and self-driving vehicles) was a mere 8.9%.

Adoption was heavily tilted toward big companies, aka companies that can afford to buy shiny things that may or may not actually work, aka companies where the distance between those who buy the stuff and those who use the stuff is the greatest.

Another finding of the study is that, shockingly, many previous "estimates" of AI use were seriously overstated. For instance, consulting giant McKinsey (a company that has steamy dreams about computerized classrooms) claimed that 30% of executives were piloting some form of AI. Of course, to do that kind of survey, you have to talk only to companies that have "executives" and not just an owner or a boss.

Wired doesn't really have an answer. It offers a charming contrast between a big-time beer brewing corporation that uses an AI algorithm to monitor its filtration process and a small beer company where "We sit around tasting beer and thinking about what to make next."

But perhaps AI isn't more widespread because it doesn't work all that well. "AI" often just means "complicated algorithm," and that algorithm has been written by a human--most often, a human with a computer background and not a background in the field being affected.
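To make that concrete, here's a hypothetical sketch (my own illustration, not code from any actual product) of what a "smart personalization" feature can amount to under the hood: a pile of if-then rules with thresholds a programmer picked out of thin air.

```python
# A hypothetical sketch (not any vendor's actual code) of the kind of
# hand-written rule set that often ships under the "AI" label. Every
# threshold below was chosen by a programmer, not learned from data.

def recommend_next_lesson(quiz_score: float, minutes_on_task: float) -> str:
    """Pick the next 'personalized' lesson from fixed, human-chosen rules."""
    if quiz_score < 60:                  # cutoff picked by the developer
        return "remedial-review"
    if quiz_score < 85 and minutes_on_task > 30:
        return "guided-practice"         # assumes slow + middling = needs help
    return "advance-to-next-unit"

print(recommend_next_lesson(quiz_score=72, minutes_on_task=45))
# -> guided-practice
```

Every number in there is a human guess dressed up as machine intelligence.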

This leads to problems like the widely noted tendency of AI to be racist: the attitudes and biases of programmers are transferred directly into the programs that they write.
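For a toy illustration of how that transfer works (again, a made-up example, not anyone's real product), consider a name validator written by a programmer who assumed everyone's name looks like his own:

```python
# A hypothetical illustration of how a programmer's unexamined assumptions
# become the program's behavior. This validator assumes names look like the
# developer's own: ASCII letters only, one given name plus one surname.

import re

def is_valid_name(name: str) -> bool:
    # The regex itself is the bias: it was written, not learned.
    return bool(re.fullmatch(r"[A-Za-z]+ [A-Za-z]+", name))

print(is_valid_name("John Smith"))      # True
print(is_valid_name("José García"))     # False: accented letters rejected
print(is_valid_name("Thuy Nguyen Le"))  # False: three-part names rejected
```

Nobody trained anything here; the bias is simply the rule the programmer wrote down.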

So do their other failings. In her "Introduction to Artificial Intelligence in Education," Sarah Hampton offers some examples of what is or is not an AI program. In the "is" column we find Grammarly. The thing about Grammarly, though, is that this software, which is supposed to magically offer useful editorial guidance for your writing, well, it's not very good at its job. In fact, one question that Wired didn't ask was about the success of the companies that did use AI; consider this report that 1 in 4 AI projects fail. You can surf the net reading about AI failures all day, and some of them have really serious consequences (like being jailed because of a botched AI facial recognition match).

The answer to the Wired question is twofold. AI isn't more widely used because 1) it doesn't do anything that can't be done as well (often at lower cost) by actual expert humans, and 2) much of what it does, it doesn't do very well. Despite its many, many, many unfulfilled promises, AI continues to be boosted mostly by people who want to make money from it, not by the people who actually have to work with it. Yes, AI fans, including those in ed tech, will continue to make shiny promises based on what the program would do if it worked perfectly to achieve what its designers imagined in a perfect world, but that's a vision from some other world. In this world, AI still has little to offer the classroom teacher.
