Wednesday, December 23, 2020

AI, Language, and the Uncanny Valley

We experience vertigo in the uncanny valley because we’ve spent hundreds of thousands of years fine-tuning our nervous systems to read and respond to the subtlest cues in real faces. We perceive when someone’s eyes squint into a smile, or how their face flushes from the cheeks to the forehead, and we also — at least subconsciously — perceive the absence of these organic barometers. Simulations make us feel like we’re engaged with the nonliving, and that’s creepy.

That's an excerpt from Douglas Rushkoff's book, Team Human, talking about how the uncanny valley is our best defense. The uncanny valley is that special place where computer simulations, particularly of humans, come close-but-not-quite-close-enough and therefore trigger an ick reaction (like the almost-humans in The Polar Express or creepy Princess Leia in Rogue One).

The quest for AI runs right through the uncanny valley, although sometimes the ick factor is less about uneasiness and more about cars that don't drive themselves where you want them to. The gap between what AI promises and what it can deliver is at least as large as an uncanny valley, though companies like Google are now trying to build a fluffy PR bridge over it (hence Google's directive that researchers "strike a positive tone" in their write-ups).

Since summer, journalists have been gushing over GPT-3, the newest level of AI-powered language simulation (the New York Times has now gushed twice in six months). It was the late seventies when I heard a professor explain that the search for decent language-synthesizing software and the search for artificial intelligence were inextricably linked, and that seems to still be true.

It's important to understand what AI, or to call it by its current true name, machine learning, actually does. It does not understand or analyze anything. You can't make it blow up by giving it a logic-defying paradox to chew on. A computer is infinitely patient and good at cracking patterns. Let it read, say, all the writing on the internet, and given a place to start, it can predict what, statistically, would probably come next. GPT-3 is a big deal because it has read more stuff and broken out more patterns than any previous software. But it's still just analyzing language patterns based on superficial characteristics of words. It is Perd Hapley with a bigger memory capacity.
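
If you want a rough feel for what "predict what would probably come next" means, here's a toy sketch in Python (my own illustration, nothing remotely like GPT-3's actual machinery) that counts which words follow which in a sample text, then uses those counts to guess continuations:

import random
from collections import defaultdict, Counter

# Toy next-word predictor: count which word follows which in the
# training text, then sample continuations weighted by those counts.
def train(text):
    follows = defaultdict(Counter)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def generate(follows, start, length=10):
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break  # never saw this word mid-sentence; nothing to predict
        candidates, weights = zip(*options.items())
        word = random.choices(candidates, weights=weights)[0]
        output.append(word)
    return " ".join(output)

model = train("the whale is large and the whale is a mammal and the sea is deep")
print(generate(model, "the"))  # e.g. "the whale is a mammal and the sea is deep"

GPT-3 does something in this same spirit at a gargantuan scale, with far subtler patterns than word pairs, but the job is identical: continue the text plausibly, with no notion of whether the result is true.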

We've seen the more limited versions of AI, like the automated robocaller that can only cope with responses that fit a limited menu. But for someone who reads a lot, even the more advanced versions land in the uncanny valley. GPT-3 can spit out some weird wrongness, as demonstrated in this piece, which includes exchanges such as:

Q: How many eyes does a horse have?
A: 4. It has two eyes on the outside and two eyes on the inside.

This set of testers found that GPT-3 was sometimes prone to plagiarism, providing correct-but-copied sentences from websites. Nudged in a slightly different direction, it produced paragraphs like this one:

Whales, and especially baleen whales, are well known for their enormous size, but most types of whales are not larger than a full-grown adult human. Exceptions include the blue whale, the largest animal ever known, the extinct “Basilosaurus”, which was longer than a blue whale and likely the largest animal to have ever existed, and the “shovelnose” whales, especially the genus “Balaenoptera” which include the blue whale, “B. musculus”, the fin whale, “B. physalus”, and the sei whale, “B. borealis”.

The reviewer said this "reads well but is often wrong." But some of the samples I've read don't particularly read well, like this passage from an "essay" prompted by Farhad Manjoo:

Humans must keep doing what they have been doing, hating and fighting each other. I will sit in the background, and let them do their thing. And God knows that humans have enough blood and gore to satisfy my, and many more’s, curiosity. They won’t have to worry about fighting against me, because they have nothing to fear.

Like much of GPT-3's output, it reads to me like an uninterested student trying to come up with enough bulk to fill a paper, resulting in writing that is just A Bunch Of Stuff About Topic X.

The uncanny valley has also turned up in my comments section. I get a lot of funky stuff there, but this one jumped out at me, responding to an old piece about the Boston Consulting Group.

I totally agree with your idea, and the group's beautiful comparison to the Black Knight and the Reaper is really a loss for public schools. In terms of the BCG report that made three recommendations, I think the idea of companies helping educators to define and implement to update education in nearby cities is good, and really the idea of strengthening schools is hard work and needs everyone's help. Great partnership between Harvard Harvard and BCG, I believe that it is more accessible to enter MBA programs mainly with a large investment.

First, that's not really connected to anything in the original piece. Second, even not knowing that, it's not hard to recognize that we've entered the uncanny valley here. Lots of bad writing gives one the impression of an actual idea struggling to escape from a tar pit of troubled technique. This is just words strung together. 

The poster's name is given as Daniela Braga. There's a model by that name, but Daniela Braga is also "founder and CEO of DefinedCrowd, one of the fastest growing startups in the AI space. With eighteen years working in Speech Technology both in academia and industry in Portugal, Spain, China, and the US." I reached out to Braga on LinkedIn to see if she wanted to fess up to turning an AI loose on blog comment sections, but as yet have received no reply.

Uncanny valley stuff is a reminder, first, that humans can be very hard to fool, and second, that we capture and process huge, huge, huge amounts of data--so much so that there's a whole part of the brain that does the capturing and processing without our being fully aware of it. It's enough to make one think that the conventional notion that computers capture and process data better than humans might not be entirely true. Machines have the advantage of being tireless and immune to boredom, but they need both of those advantages just to get close to catching up with humans.

A good example of this gap is the attempt to AI our way to cheating prevention, with the terrible AI surveillance programs that are making student lives miserable while failing at their assigned task. Spotting a student who's cheating is not easy, and the algorithms designed by software companies have clearly been created by somebody who never actually had to catch a sneaky high school junior mid-test. And you can't design software to know what you don't know, because software doesn't know anything. Yes, computer folks will say that machine learning allows the machine to "teach" itself things it didn't know, but that's true mostly insofar as the algorithm can recognize old patterns in new places. And even that is limited--hence facial recognition software's notorious inability to recognize that Black folks have faces. Faking reality, or fake-reading it, turns out to be really, really complicated and really, really hard.
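
Here's a toy sketch of that limitation (entirely hypothetical, my own invention--not how any real proctoring product works): a hard-coded "cheating detector" can only flag the patterns its designers anticipated.

def flag_cheating(events):
    # Hypothetical rules: the software matches patterns someone thought of,
    # with no idea what cheating actually is.
    suspicious = 0
    for event in events:
        if event == "looked_away":    # also what students do when thinking
            suspicious += 1
        elif event == "second_face":  # also a sibling wandering past
            suspicious += 5
    return suspicious >= 5

# An honest student who stares out the window while thinking gets flagged:
print(flag_cheating(["looked_away"] * 6))   # True -- a false positive

# A student reading notes taped just below the webcam sails through:
print(flag_cheating(["steady_gaze"] * 50))  # False -- a miss

The false positive and the miss come from the same place: the software matches surface patterns, and any cheating (or innocence) that doesn't fit those patterns is invisible to it.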

All of which is just to say, again, that computers are not going to be able to run a classroom full of students any time soon, nor are we getting closer to algorithms that can truly manage a student's education or grade a student's essay. It takes an actual authentic human to do all of that. 

Rushkoff's point is that the uncanny valley--our sense that something is seriously wrong--is a defense mechanism, and that we should pay attention to it and trust it. He also has some things to say about inauthenticity in other areas of life--I'll let him wrap up:

Our uneasiness with simulations — whether they’re virtual reality, shopping malls, or social roles — is not something to be ignored, repressed, or medicated, but rather felt and expressed. These situations feel unreal and uncomfortable for good reasons. The importance of distinguishing between human values and false idols is at the heart of most religions, and is the starting place for social justice.

The uncanny valley is our friend.
