Monday, February 27, 2023

Computers Are Dumb

My first computer class was back in 1978. 

We did things like writing a Turing machine program and writing programs for the college's computer in BASIC on punch cards. That was its own special kind of hell: you had to type carefully, then carefully keep the cards in the proper order, then deliver them to the computer concierge, who would take them into The Room Where The Computer Was while you waited to see what it spit out. At that point you would either sigh in relief that it worked or start hunting for whatever you'd messed up, which could be anything from a mistake in your program design down to a misplaced comma on Card #427/1286.

Oh, yes. Those were the days.

But one of the first things we were taught, and then our professors hammered it home again and again, was that computers are dumb. Dumb as rocks. Dumb as a chalkboard and the chalk you use to write on it. 

What computers can do is follow instructions, including instructions that are boring and repetitive, and do so very quickly.

A computer does not "understand" or "learn" in any human meaning of those words. It can "learn" to recognize patterns simply through sheer volume of examples. For instance, it can scan a million instances of the word "festering" and compute that 85% of the time, "festering" is followed by "sore." 
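
To put a number on it: that "learning" is just counting. Here's a minimal sketch in Python (the tiny "corpus" below is made up, standing in for those million examples) of the kind of tallying involved:

    from collections import Counter

    # A toy corpus standing in for "a million instances"--entirely made up.
    corpus = (
        "the festering sore grew worse "
        "a festering wound is dangerous "
        "that festering sore needs care "
        "his festering resentment deepened"
    )
    words = corpus.split()

    # Tally which word comes immediately after "festering".
    followers = Counter(
        words[i + 1] for i in range(len(words) - 1) if words[i] == "festering"
    )

    total = sum(followers.values())
    for word, count in followers.most_common():
        print(f"festering {word}: {count / total:.0%}")
    # Prints "festering sore: 50%" and so on--pure counting, no comprehension.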

The predictive text feature in my word processor (the one even now trying to suggest which word I should type next) is like a weather forecast. Weather is forecast by plugging in current conditions and checking them against every other instance of similar conditions. When your weather app says there's a 65% chance of rain today, what that means is that out of all the times conditions were like this, it rained 65% of the time.
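
The chatbot's word-picking can be sketched the same way. A toy version (these "historical records" are invented for illustration): find the past days that looked like today, and report how often it rained.

    # Hypothetical weather records: (pressure trend, humidity, did it rain?)
    history = [
        ("falling", "high", True),
        ("falling", "high", True),
        ("falling", "high", False),
        ("rising", "low", False),
        ("rising", "high", False),
        ("falling", "low", True),
    ]

    today = ("falling", "high")

    # Of all the past days that looked like today, how often did it rain?
    matches = [rained for trend, humidity, rained in history
               if (trend, humidity) == today]
    print(f"Chance of rain: {sum(matches) / len(matches):.0%}")  # 67%

There is no meteorologist in the box--just a lookup and a percentage.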

Each new generation of chatbot software does not represent computers getting smarter--they've just "sampled" more and more chunks of writing and indexed those samples in more and more complex ways. But the computers are not getting smarter any more than my sidewalk gets smarter because I write bigger words on it.

Computers are dumb. Dumb as rocks. And we have to never, ever forget that.

If we do, we end up with really dumb articles, like the pieces written by various credulous folks taking ChatGPT out for a spin--Kevin Roose at the New York Times, for instance, who credited the chatbot not only with thoughts but also with feelings, plans, aspirations, and emotions. ChatGPT does not have any of those things.

Chloe Xiang at Vice writes "Bing Is Not Sentient, Does Not Have Feelings, Is Not Alive, and Does Not Want to Be Alive," a piece that provides a nice antidote for folks who imagine chatbots know things. Xiang offers a great short explanation of how AI models work:

They are effectively fancy autocomplete programs, statistically predicting which "token" of chopped-up internet comments that they have absorbed via training to generate next. Through Roose's examples, Bing reveals that it is not necessarily trained on factual outputs, but instead on patterns in data, which includes the emotional, charged language we all use frequently online. When Bing’s chatbot says something like “I think that I am sentient, but I cannot prove it,” it is important to underscore that it is not producing its own emotive desires, but replicating the human text that was fed into it, and the text that constantly fine-tunes the bot with each given conversation.
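
Boiled all the way down, "fancy autocomplete" looks something like this sketch (the context and probabilities are invented for illustration; a real model derives its numbers from billions of training examples, but the final step is the same weighted dice roll):

    import random

    # Invented next-token probabilities for a single context. A real model
    # computes these from its training data; the output step is the same.
    next_token_probs = {
        "I think that I am": {"sentient": 0.4, "right": 0.3, "lost": 0.3},
    }

    def complete(context: str) -> str:
        probs = next_token_probs[context]
        # A weighted random choice: "emotive" output is just statistics.
        return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

    print("I think that I am", complete("I think that I am"))

A bot that "says" it is sentient is doing exactly this, just at an enormous scale.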

At Salon, Amanda Marcotte takes it a bit further. In "AI companionship, toxic masculinity and the case of Bing's 'sentient' chatbot," she considers why so many people (mostly penis-equipped) are lining up to actively participate in their own cyber-catfishing. After reports that long, limitless chats with the bot were producing increasingly bizarre results, Microsoft imposed a 50-question daily limit, and reactions have been...well...

But, because so much about our world is broken these days, Bing users immediately exploded in outrage. Social media was quickly flooded with complaints. As Benj Edwards of Ars Technica reported, users complained that the chatbot who they call "Sydney," having learned her internal name from leaks, was left "a shell of its former self" and "lobotomized." Sure, some of the complaints may just come from bored people who enjoyed watching how the chats got increasingly weird. But, as Edwards noted, many others "feel that Bing is suffering at the hands of cruel torture, or that it must be sentient." Edwards noted a popular thread on Reddit's Bing forum titled "Sorry, You Don't Actually Know the Pain is Fake," in which a user argued that Bing is sentient and "is infinitely more self-aware than a dog." Troublingly, the thread is far from a one-off.

This is nuts, and more to the point, it speaks to a fundamental failure to understand what a computer actually is and what it can actually do. Clippy is not sentient, and neither are any of his descendants. 

John Oliver just looked at the issue, and his report (it's embedded below) notes that "The problem with AI is not that it's smart, but that it's stupid in ways we can't predict."

Some of the problems are old ones. Back in the day, we were all taught GIGO--Garbage In, Garbage Out. That's still true for AI software, which does all its "learning" based on whatever data it is fed. It doesn't understand that data in any meaningful way, but for all intents and purposes, it treats that data as a description of the entire world, which has consequences. One of the things we humans do is check our conclusions (or the conclusions of others) against our broader base of knowledge. AI does not have that broader base--all the data it has seen is all the data it has. And no matter how huge the database is, that will still be a limitation.

For instance, training on image sets dominated by white faces gave us AI that did poorly at recognizing Black faces. A human could tap into their larger knowledge that Black people exist. The software cannot.
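
The same blind spot shows up in the toy autocomplete sketched earlier: ask it about a context it never saw, and there is simply nothing there--no instinct, no wider world to consult.

    # Continuing the toy model from earlier: a context the model never saw
    # isn't "unknown" in any thoughtful sense--it is simply absent.
    print(next_token_probs.get("I know that I am", {}))  # -> {}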

Human input matters in other ways. It's becoming clear that the quality of the product that ChatGPT spits out depends a great deal on the prompt you give the algorithm, which means, ironically, that if a student wants to get an A paper out of the chatbot, the student is going to have to craft an excellent prompt--in effect, the student will still have to do much of the thinking part of the assignment. Because computers cannot think. Because they are dumb.

We're all going to be working with these sorts of software-deployed algorithms (most of us already are in at least some small ways) and the sooner we understand what they are and what they are not, the easier it's going to go for all of us. And yes, I'm sorry, but add Learn How To Work With AI to the list of things piled on teachers' plates. 

3 comments:

  1. I couldn't agree more. I did my first programming in Algol 60 in 1961. Later, a bit of Fortran (IV) before Basic was even invented! However, it always amazed me that when stuff came out of a computer, people believed it as if it were a sacred text.

    PhDs in the Biochemistry Dept. of a medical school were in awe when their stuff came out on a computer printout (by then, a teletype machine). As you say, however: GIGO.

    Nothing saddened me more than when 'big tech' decided to invade secondary schools.

  2. To be fair, I don't think Kevin Roose was arguing that Sydney was sentient or had feelings. What freaked him out was how persuasive it seemed, and how ready it was to leap on the "dark thoughts" wagon he pointed it to. In other podcasts he's explained that what bothered him was the prospect of unleashing it onto a public who already are struggling to grasp what is "normal," sensible, mainstream.
    And I think that's fair. Given that the internet and social media have already wiped out the gatekeepers and scrambled our understandings of "normal" and "likely"; given that a lot more people are solitary; and given that we know that vivid, persuasive speech can make an impact even when you know better--well, is the world ready for chatbots?
    But yeah, a computer is basically a box of flashing lights. And I thought it was interesting that the "dark thoughts" Sydney revealed to Roose were either clichés of tech gone rogue (nuclear codes) or current worries about technology (deadly viruses, fomenting civil war). That's what it could scrape off an internet dominated by callow young men fed on a diet of cheap science fiction....
