
Tuesday, October 3, 2017

AI: Automated Intelligence

Remember when you could buy lots of food that was fried? But then we sort of collectively decided that "fried" was a synonym for "wildly unhealthy" and marketeers searched for a substitute. Now if you look around, you'll notice that the chicken being pitched to you is often not "fried," but "crispy."

It's a basic rule of marketing-- when you're having trouble moving a product, change the language you're using to describe it.

Once upon a time, automation looked like a cool thing. It was a machine word, a word that called up mighty metal limbs that could tirelessly repeat the same action with relentless accuracy. And it freed up humans, whose judgment was not needed. An automated process would just follow the same steps, do exactly what it was designed to do, over and over and over and over and over and over and over and over again. It seemed like a Great Thing.


But over time, the metallic bloom dropped from the tin rose. The very machininess of automation began to remind us how inhuman it all was. The mindless repetition more often conjured up images of tiny humans lost as cogs in the great machinery. If something new, different, outside the rules appeared, automation did not know how to respond, and it either chewed up the anomaly or chewed up itself. Rather than serve humans, automated systems demanded that humans adjust themselves to the machine, because automated systems could not exercise judgment or thought or wisdom or soul. We no longer welcomed the cold, hard, unbending embrace of the machine age; instead we were all inclined to rage against the machine.

Fortunately for systems-loving people, a new age dawned, and as the machine age passed away, machines lost popularity as a positive controlling metaphor. Now we would have computers, and though, in fact, automation had always involved some rudimentary sort of computer-like element, now we focused more on the computer and its unparalleled ability to take the steps it was designed to repeat, and repeat them over and over and over and over and over and over and over and over again. Artificial intelligence is a thing, and scientists are developing it, but in the meantime AI is being used as a catch-all marketing term for oh-so-many forms of automation.

But now we don't call it automation. We call it artificial intelligence.

It's still automation.

As Alexis Madrigal reminds us in The Atlantic, "Google and Facebook Failed Us" yet again during the Las Vegas shootings because their news coverage is not handled by human beings, but by automation. They call it artificial intelligence because it sounds better, but it's just a set of algorithms, a set of rules, a set of tasks to be performed over and over again, and when faced with unique circumstances that do not fit the rules, the automation screws up. For a while, junk news from the very fringes of intelligent thought popped up at the top of the feeds (that's why, for instance, so many people "heard" that the shooter was ISIS).

When Google responds to this by saying the bad results had algorithmically surfaced and they are going to make "algorithmic improvements," they are admitting that their software is a machine, not a brain. They are admitting that news management on the sites is automated-- not selected by any kind of intelligence, artificial or otherwise.

Automation.
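
For the sake of illustration (and only illustration), here is roughly what "algorithmically surfaced" can look like. None of this is Google's or Facebook's actual code; the signals, the weights, and the stories are all invented. The point is just that a fixed set of scoring rules will happily push junk to the top whenever the junk scores well on whatever the rules were told to count.

```python
# A made-up "trending stories" ranker. The signal names and weights are invented
# for this sketch; nothing here resembles any real company's system. Note what is
# missing: no step ever asks whether a story is true.

def score(story):
    # Hand-picked weights over engagement signals -- rules, not judgment.
    return (2.0 * story["clicks_per_minute"]
            + 1.5 * story["shares_per_minute"]
            + 1.0 * story["keyword_match"])

stories = [
    {"title": "Careful report, still being verified",
     "clicks_per_minute": 40, "shares_per_minute": 10, "keyword_match": 0.6},
    {"title": "Fringe site names a culprit (wrongly)",
     "clicks_per_minute": 300, "shares_per_minute": 120, "keyword_match": 0.9},
]

# The junk story "wins" because it generates more engagement, and the rules
# reward engagement. No intelligence, artificial or otherwise, intervenes.
for story in sorted(stories, key=score, reverse=True):
    print(round(score(story), 1), story["title"])
```

The machine just runs the rules it was given, over and over, until a human notices that something has gone wrong.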


Want to put your child in an automated classroom? An automated classroom, complete with automated teaching machines, sounded pretty cool at one time. But the inhumanity, the requirement that students adapt to the system, the system's inability to deal with human variables, an environment that runs on one set of unbending rules-- that does not sound cool.


But a classroom that utilizes software-based artificial intelligence? Well, now, that sounds mighty fine. Modern and smart.

And yet, in all but the rarest of cases, it's simply automation with a different name. A computer may allow for a faster, more complicated set of rules, but it's still just a machine following pre-set rules over and over to the twelfth power. We've digitized the metal bars, and we've hidden the tool marks of the men who built the machine, but it's still just a machine, chugging away. It's still the same weak teaching machine idea that has been promised as an educational game changer for decades.

It's still just automation. And your crispy chicken is still fried and unhealthy.

4 comments:

  1. AI has left the realm of following rules, and most work these days concentrates on training digital neural networks to recognize regularities. The programs learn from samples provided.

    One great example is using AI systems to identify cancers from biopsy samples. We have no idea why the computer is classifying a sample as cancer or not cancer, but the best systems can correctly identify 92% of the samples. Top pathologists can correctly identify 96% of the samples. A team of pathologists and the AI can correctly identify 99.5% of the samples. See the discussion here: https://www.livescience.com/55145-ai-boosts-cancer-screen-accuracy.html
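
    To give a bare-bones sense of what "learning from samples provided" means, here is a toy sketch. It uses a nearest-neighbor lookup rather than a real neural network, and made-up two-number "samples" rather than real biopsy slides, so it only illustrates the idea that the classification comes from labeled examples instead of hand-written rules.

    ```python
    # Toy "learn from labeled samples" classifier. Features and numbers are invented;
    # real systems train neural networks on digitized biopsy slides, not number pairs.

    def nearest_neighbor_label(sample, training_data):
        """Label a new sample by copying the label of the closest training example."""
        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        closest = min(training_data, key=lambda example: distance(example[0], sample))
        return closest[1]

    # The "samples provided": (features, label) pairs. Nobody wrote a rule for what
    # counts as malignant; the labeled examples carry that information.
    training_data = [
        ((0.2, 0.1), "benign"),
        ((0.3, 0.2), "benign"),
        ((0.8, 0.9), "malignant"),
        ((0.7, 0.8), "malignant"),
    ]

    print(nearest_neighbor_label((0.25, 0.15), training_data))  # -> benign
    print(nearest_neighbor_label((0.75, 0.85), training_data))  # -> malignant
    ```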

    Replies
    1. I read that dogs are also very good at identifying cancers.

    2. Rebecca,

      They might well be, and if so I am sure that the people who are told they have cancer when they do not, and the people told that they do not have cancer when in fact they do, would appreciate the increased accuracy of the diagnosis that a dog would bring. I am sure that they appreciate the increased accuracy of diagnoses when a pathologist is teamed with a trained AI system.

    3. Yes, using different forms of confirmation together is good.
