Friday, July 2, 2021

Language-Generating AI Still Lacks I

You may remember that last year a piece of language-simulating AI software appeared, touted as the next big thing: OpenAI rolled out GPT-3. The claims were huge. It can write poetry. Various writers wrote pieces about how realistic its output was. It can write computer programs--well, actually, that part was less unbelievable. But the other claims were already looking somewhat shaky, including some linguistic trips into a verbal uncanny valley.

Unfortunately, it turns out that it also makes racist jokes and backs up white supremacy. OpenAI signaled some of those issues back in May of 2020.

Since then, more problems. GPT-3 was being used to create child porn. It was, as Wired recently put it, "foulmouthed and toxic." This is not a new problem; you may recall Tay, the Microsoft AI chatbot that had to be shut down because it turned racist and abusive.

There's an unsettling message in all of this that I have rarely seen acknowledged. AI language software works by sampling huge amounts of human language and imitating the patterns that it sees. If these language AI programs are essentially distilling all the human language that's fed to them, what does that say about all of our human communications? When you boil down every sentence written in English, do you get a grimy, ugly, abusive residue of slime? And if so, does that mean that slime trail is the undercarriage of all our communication?
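To make "imitating the patterns" concrete: strip away the scale and the neural network, and the core move is counting which words tend to follow which in the training text, then generating new text by sampling from those counts. Here is a toy sketch in Python (my own illustration, nobody's production system) of that idea:

    import random
    from collections import defaultdict

    # Learn which word tends to follow which in the sample text.
    def train_bigrams(text):
        follows = defaultdict(list)
        words = text.split()
        for current, nxt in zip(words, words[1:]):
            follows[current].append(nxt)
        return follows

    # Generate by repeatedly sampling a word that followed the previous one.
    def generate(follows, start, length=12):
        word = start
        output = [word]
        for _ in range(length):
            options = follows.get(word)
            if not options:  # no observed continuation; stop
                break
            word = random.choice(options)
            output.append(word)
        return " ".join(output)

    sample = "the cat sat on the mat and the dog sat on the rug"
    model = train_bigrams(sample)
    print(generate(model, "the"))

Everything that toy model can say comes straight out of whatever text it was fed. GPT-3 is unimaginably bigger and cleverer about the statistics, but the same principle holds--and so does the same problem.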

Computer whizzes aren't asking those questions--they're asking more immediate practical questions like "How do I get this bot to be less racist?" 

That's the subject of a recent Wired article--"The efforts to make text-based AI less racist and terrible"--an article we in education should all be reading, if for no other reason than to remember that, among its many shortcomings, language-generating AI is racist and terrible.

Here are some of the attempts being made, according to that piece.

OpenAI researchers are going to fix GPT-3 by "feeding the program roughly 100 encyclopedia-like samples of writing by human professionals on topics like history and technology but also abuse, violence, and injustice." So, a big diet of bland, boring writing, some of it focused on problem topics, so that its sample base is tilted toward boring stuff, I guess. That may work--it has been tried, with some marginal success, to offset GPT-3's anti-Muslim bias.
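For the curious, here is roughly what that kind of fix looks like in code--a hedged sketch that uses an openly available model (GPT-2, via the Hugging Face transformers library) as a stand-in for GPT-3, and a couple of invented "curated" passages as stand-ins for the hundred professional samples:

    import torch
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Hypothetical stand-ins for the ~100 "encyclopedia-like" curated samples.
    curated_texts = [
        "People of every faith deserve to be treated with dignity and respect.",
        "Violence causes lasting harm; communities work to prevent it and to heal.",
    ]

    class CuratedDataset(torch.utils.data.Dataset):
        def __init__(self, texts):
            self.enc = tokenizer(texts, truncation=True, padding="max_length",
                                 max_length=128, return_tensors="pt")
        def __len__(self):
            return self.enc["input_ids"].shape[0]
        def __getitem__(self, idx):
            item = {k: v[idx] for k, v in self.enc.items()}
            labels = item["input_ids"].clone()
            labels[item["attention_mask"] == 0] = -100  # ignore padding in the loss
            item["labels"] = labels
            return item

    # Continue training the model for a little while on the curated diet.
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="values-tuned", num_train_epochs=1,
                               per_device_train_batch_size=1),
        train_dataset=CuratedDataset(curated_texts),
    )
    trainer.train()

The bet is that a small, carefully written diet can nudge a model that was trained on the whole messy internet.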

Another approach is to give GPT-3 more toxic text, and then when it spews it back, label the bad examples as "bad" so that it can learn.
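In practice, that kind of labeling often ends up powering a separate filter that screens what the model says before anyone sees it. Here is a toy sketch (scikit-learn, with invented examples--an illustration of the general idea, not the researchers' actual setup):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled examples: 1 = toxic ("bad"), 0 = acceptable.
    texts = ["you people are worthless", "have a nice day",
             "I hate that whole group", "the weather is lovely today"]
    labels = [1, 0, 1, 0]

    # Train a simple classifier on the labeled examples.
    toxicity_filter = make_pipeline(TfidfVectorizer(), LogisticRegression())
    toxicity_filter.fit(texts, labels)

    # Screen a candidate piece of generated text before showing it.
    candidate = "you people are awful"
    if toxicity_filter.predict([candidate])[0] == 1:
        print("blocked: flagged as toxic")
    else:
        print(candidate)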

All of this underlines the issue behind AI language generation, which is that there is no actual intelligence there--just a prodigious ability to fake language behavior based on a huge bank of samples. Every advance in this field, including GPT-3, has mostly been about figuring out how to get the software to handle more samples. Wired talked to UC Berkeley psychology professor Alison Gopnik, who studies human language acquisition in order to apply those lessons to computers.

Children, she said, are the best learners, and the way kids learn language stems largely from their knowledge of, and interaction with, the world around them. Large language models, by contrast, have no connection to the world, which makes their output less grounded in reality.

Wired also collected the most awesome quote on the subject from Gopnik:

The definition of bullshitting is you talk a lot and it kind of sounds plausible, but there's no common sense behind it.

This would include many of the folks trying to sell schools and teachers super-duper software "powered by" or "incorporating" or "driven by" AI. It's still a machine, it still doesn't actually know anything, and it still serves as a dark mirror for some of our worst linguistic behaviors. 

1 comment:

  1. Anything AI-related in education is hair-raising. As a side note, I wonder if anyone has done any studies on the influence of the lyrics of the songs that kids listen to frequently. That would be a treasure trove of concepts and vocabulary to ponder.
