Friday, August 23, 2019

Artificial Intelligence and Magical Thinking (HAL Knows How You Feel)

From the moment you read the title, you know this article from Inside Higher Ed by Ray Schroeder is going to be a corker-- Affective Artificial Intelligence: Better Understanding and Responding to Students.

Schroeder opens with "As a longtime professor of communication, I am fascinated with the cognitive characteristics of artificial intelligence as they relate to human communication," and that's a touch misleading. While he was an associate professor of communication back in the early 80s and a professor in a television production unit at the University of Illinois up until the late 90s, I think it might be a little disingenuous of him to skip over his work since then. He ran the University's center for online learning until 2013, when he became the associate vice chancellor for online learning. 2013 was also the year he became a founding director of the National Council of Online Education, a group that is "dedicated to advancing quality online learning at the institutional level." They are "powered by UPCEA, the association for professional, continuing, and online education."

"Dave, are you sad, or just gassy?"
In short, Schroeder is writing not as a professor with some academic curiosity about AI, but as a guy whose professional life for the past two decades has been centered on promoting and advocating for computer-driven instruction. That would have been appropriate to mention here, but IHE didn't even give Schroeder a bio blurb at the end of his piece.

So here's the set-up:

One of the challenges in person-to-person communication is recognizing and responding to subtle verbal and nonverbal expressions of emotion. Too often, we fail to pick up on the importance of inflections, word choices, word emphases and body language that reveal emotions, depth of feelings and less obvious intent. I have known many of my colleagues who were insensitive to the cues; they often missed nonverbal cues that were obvious to other more perceptive people.

There's even a link to back up the notion that nonverbal communication is complicated. So now we're ready for the pitch:

And that brings me to just how artificial intelligence may soon enhance communication between and among students and instructors. AI in many fields now applies affective communication algorithms that help to respond to humans. Customer service chat bots can sense when a client is angry or upset, advertising research can use AI to measure emotional responses of viewers and a mental health app can measure nuances of voice to identify anxiety and mood changes over the phone.

Sigh. This continues to be a big dream, most often associated with the quest for computerized SEL instruction. Various companies have claimed they can tell how we're feeling, using everything from face-reading software to measuring how long students take to click on an answer. And yes-- Amazon has been training Alexa to read the stress in your voice. None of these has worked particularly well. And maybe I'm on the phone with the wrong service chatbots, but despite Schroeder's claim, they can't understand anything that falls outside a certain range of response, let alone read my emotional state.

Schroeder assures us that computers can analyze lots of data, including vocal inflections and micro-expressions, and so far we're still within the realm of standard techno-over-promising. But then stuff gets weird.

Too often we fail to put ourselves in the position of others in order to understand motivations, concerns and responses. Mikko Alasaarela posits that humans are bad at our current emotional intelligence reasonings: “We don’t try to understand their reasoning if it goes against our worldview. We don’t want to challenge our biases or prejudices. Online, the situation is much worse. We draw hasty and often mistaken conclusions from comments by people we don’t know at all and lash [out] at them if we believe their point goes against our biases.”

Well, sure. If, for instance, we're heavily invested in computer tech, we might be inclined to ignore evidence that we've put our faith in some magical thinking. However, some of us are way worse at this than others. But for his next leap, Schroeder needs to establish that all humans are bad at understanding other humans. He is, of course, particularly interested in one application of this AI mindreading-- online classes:

Too often, I fear, we miss the true intent, the real motivation, the true meaning of posts in discussion boards and synchronous voice and video discussions. The ability of AI algorithms to tease out these motivations and meanings could provide a much greater depth of understanding (and misunderstanding) in the communication of learners.

All those misunderstandings on Twitter or message boards and even video will be swept away, because AI will be there to say, "Well, her mood when she posted that was angry and anxious, and what she really meant to say was..." Schroeder quotes Sophie Kleber quoting Annette Zimmerman saying, "By 2022, your personal device will know more about your emotional state than your own family." He cites the recent Ohio State study that showed computers beating humans at certain types of emotion recognition under lab conditions and using photos instead of live people (he does nod at the nightmare application of this tech--more effective marketing). This is some magical baloney here, but we can still raise the baloney bar. Go back to that last paragraph:

Too often, I fear, we miss the true intent, the real motivation, the true meaning of posts in discussion boards and synchronous voice and video discussions.

So AI can see past everything, straight to the truth. Schroeder may be missing the more important applications of his still-imaginary AI. It could be used to read Hamlet or Ulysses or that confusing note my one ex-girlfriend left me, and it will be able to tell us all The Truth! To think of how many students have struggled through "The Love Song of J. Alfred Prufrock," when we could just have the AI tell us the true intent, the real motivation, the true meaning of the text.

No, no, no, you say. The AI has to read the face of the source human, and those writers are all dead (well, except for my ex-girlfriend, but she wasn't looking at a webcam when she wrote the note). Okay, fine. We just get authors to compose all future works in front of a computer-linked camera, and there will never be any mystery again. We'll know the true meaning of it all, the true motivation behind the writing. I suppose with singer-songwriters, it would be good enough to let the AI watch a performance. Call up Don McLean and Carly Simon-- we can finally uncover the truth of "American Pie" and "You're So Vain."

Even if we stick to academics, it's hard to know where this could lead. Should a professor write an article or essay in front of a computer cam, and should the article then be accompanied by the AI explication-- or should the AI response to the work be published instead of the article? If the scholar just thinks about what he wants to write, will the AI write the full article for him? Can we just fire the professor and replace him by asking the AI, which knows him so well, "What would Dr. Superfluous say in this situation?"

All right, I'll calm down. But Schroeder's crazy-pants predictions aren't done yet.

With AI mediating our communication, we can look to a future of deeper communication that acknowledges human feelings and emotions. This will be able to enhance our communication in online classes even beyond the quality of face-to-face communication in campus-based classes. Algorithms that enable better “reading” of emotions behind written, auditory and visual communication are already at work in other industries. 

Yes, with software assistance, our human communication will finally include feelings and emotions! Dang. Maybe Schroeder hangs around with too many geeky flat-affect computer programmers, but as someone who worked with teenagers for thirty-nine years and someone who has a widespread and varied family and someone who is, you know, a human being living on Planet Earth, I would have to say that feelings and emotions are widely involved and acknowledged.

As to the assertion that online classes will actually have better quality communication than real live classes-- well, if I made my living pushing the online stuff, I might want to believe that, too. But I don't. Sure, higher education is a slightly different animal than K-12, but in the classroom, human relationships matter. Otherwise we would just ship each student a crate of books and say, "Go learn this stuff."

The working world has always included people who are bad at interacting with and understanding other carbon-based life forms. But the kinds of crutches and tools developed to help them seem, because of that very problem, hard for them to use well. Like the guy who went to a training where they told him that when talking to someone, he should insert their name into the sentence to connect better-- he just ends up seeming like a creepy bot. The idea that a professor could communicate better with students if he had software to explain the students to him--even if the software could actually do it--seems equally fraught.

Schroeder does end the piece with a sentence that acknowledges the huge privacy concerns of such a system. He doesn't acknowledge the oddness of his central thesis-- that we need computers to explain humans to other humans. Here's hoping the readers of IHE ignored him.
