Well, here's a fun piece of research about AI and who is inclined to use it.
The title for this article in the Journal of Marketing-- "Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity"-- gives away the game, and the abstract tells us more than enough about what the research found.
You may think that familiarity with technology leads to more willingness to use it, but AI runs in the opposite direction.
Contrary to expectations revealed in four surveys, cross-country data and six additional studies find that people with lower AI literacy are typically more receptive to AI.
That linkage is explained simply enough. People who don't really understand what AI is or what it actually does "are more likely to perceive AI as magical and experience feelings of awe in the face of AI’s execution of tasks that seem to require uniquely human attributes."
The researchers-- Stephanie Tully (USC Marshall School of Business), Chiara Longoni (Bocconi University), and Gil Appel (GW School of Business)-- are all academics in the world of business and marketing, and while I wish they were using their power for Good here, that's not entirely the case.
Having determined that people with "lower AI literacy" are more likely to fork over money for AI products, they reach this conclusion:
These findings suggest that companies may benefit from shifting their marketing efforts and product development towards consumers with lower AI literacy. Additionally, efforts to demystify AI may inadvertently reduce its appeal, indicating that maintaining an aura of magic around AI could be beneficial for adoption.
To sell more of this non-magical product, make sure not to actually educate consumers. Emphasize the magic, and go after the low-information folks. Well, why not. It's a marketing approach that has worked in certain other areas of American life. In a piece about their own research, the authors suggest a tiny bit of nuance, but the idea is the same. If you show AI doing stuff that "only humans can do" without explaining too clearly how the illusion is created, you can successfully "develop and deploy" new AI-based products "without causing a loss of the awe that inspires many people to embrace this new technology." Gotta keep the customers just ignorant enough to make the sale.
And lord knows lots of AI fans are already on the case. We've been subjected to an unending parade of lazy journalism of the "Wow! This computer can totally write limericks like a human" variety. For a recent example, Reid Hoffman, co-founder of LinkedIn, Microsoft board member, and early funder of OpenAI, unleashed a warm, fuzzy, woo-woo invocation of AI in the New York Times that is all magic and zero information.
Hoffman opens with an anecdote about someone asking ChatGPT "based on everything you know about me, draw a picture of what you think my current life looks like." This is Grade A magical AI puffery; ChatGPT does not "know" anything about you, nor does it have thoughts or an imagination to be used to create a visual image of your life. "Like any capable carnival mind reader," continues Hoffman, comparing computer software not just to a person, but to a magical person. And when ChatGPT gets something wrong, like putting a head of broccoli on your desk, Hoffman paints that "quirky charm" as a chance for the human to reflect and achieve a flash of epiphany.
But what Hoffman envisions is way more magical than that-- a world in which the AI knows you better than you know yourself, one that could record the details of your life and analyze them for you.
Decades from now, as you try to remember exactly what sequence of events and life circumstances made you finally decide to go all-in on Bitcoin, your A.I. could develop an informed hypothesis based on a detailed record of your status updates, invites, DMs, and other potentially enduring ephemera that we’re often barely aware of as we create them, much less days, months or years after the fact.
When you’re trying to decide if it’s time to move to a new city, your A.I. will help you understand how your feelings about home have evolved through thousands of small moments — everything from frustrated tweets about your commute to subtle shifts in how often you’ve started clicking on job listings 100 miles away from your current residence.
The research trio suggested that the more AI imitates humanity, the better it sells to those low-information humans. Hoffman suggests that the AI can be more human than the user. But with science!
Do we lose something of our essential human nature if we start basing our decisions less on hunches, gut reactions, emotional immediacy, faulty mental shortcuts, fate, faith and mysticism? Or do we risk something even more fundamental by constraining or even dismissing our instinctive appetite for rationalism and enlightenment?
Software will make us more human than humans?
So imagine a world in which an A.I. knows your stress levels tend to drop more after playing World of Warcraft than after a walk in nature. Imagine a world in which an A.I. can analyze your reading patterns and alert you that you’re about to buy a book where there’s only a 10 percent chance you’ll get past Page 6.
Instead of functioning as a means of top-down compliance and control, A.I. can help us understand ourselves, act on our preferences and realize our aspirations.
I am reminded of Knewton, a big ed tech ball of whiz-bangery that was predicting it would collect so much information about students that it would be able to tell students what they should eat for breakfast on test day. It did not do that; instead, it went out of business, even though it did its very best to market itself via magic.
If I pretend that I think Hoffman's magical AI will ever exist, I still have other questions, not the least of which is why someone would listen to an AI saying "You should go play World of Warcraft" or "You won't be able to finish Ulysses" when people tend to ignore actual humans offering similar advice. And where do we land if Being Human is best demonstrated by software rather than actual humans? What would it do to humans to offload the business of managing and understanding their own lives?

We have a hint. Michael Gerlich (Head of Center for Strategic Corporate Foresight and Sustainability, SBS Swiss Business School) has published "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking"* and while there's a lot of scholaring going on here, the result is actually unsurprising.
Let's say you were really tired of walking everywhere, so you outsourced the walking to someone else, and you sat on the couch every waking hour. Can we predict what would happen to the muscles in your legs? Sure--when someone else bears the load, your own load-bearing members get weaker.
Gerlich finds the same holds true for outsourcing your thinking to AI. "The correlation between AI tool usage and critical thinking was found to be strongly negative." There are data and charts and academic talk, but bottom line is that "cognitive offloading" damages critical thinking. That makes sense several ways. Critical thinking is not a free-floating skill; you have to think about something, so content knowledge is necessary, and if you are using AI to know things and store your knowledge for you, your thinking isn't in play. Nor is it working when the AI writes topic sentences and spits out other work for you.
In the end, it's just like your high school English teacher told you-- if someone else does your homework for you, you won't learn anything.
You can sell the magic and try to preserve the mystery and maybe move a few more units of whatever AI widget you're marketing this week, but if you're selling something that people have to be ignorant to want, something that offloads some essentially human activity, then what are you doing? Freeing up more time for World of Warcraft?
If AI is going to be any use at all, it will not be because it hid behind a mask of faux-human magical baloney or used an imitation of magic to capitalize on the ignorance of consumers, but because it can do something useful and be clear and honest about what it is actually doing.
*I found this article thanks to Audrey Watters