
Friday, September 15, 2017

Never Send a Bot

Even as edupreneurs pitch every eduproduct under the edusun as being enhanced with bold new Artificial Intelligence (just like real intelligence, but with fewer calories), examples continue to abound showing that the AI world has a few bugs to work out.

You remember last year, when Microsoft set up a chatbot to learn from other posters, who promptly taught it how to be a horrifying roboracist. And just last week I was talking about new human resources tech that tries to read your face, body language, and mind when hiring you. The problem? Solidifying human biases and prejudices into data algorithms. I was afraid it would be turned loose on students eventually, but many readers helpfully pointed out that it is already being used by districts to hire teachers. That, sadly, is not a new thing.

I will now unleash some scary racist shit

Just this week, we've had more news on the rogue AI front. One story centers on researchers who claim their bot can figure out whether you're gay or not. Well-- if you're white, and signed up for a dating service, and identify as either straight or gay rather than anything else, or-- you know what? It's possible these researchers are full of it, which would be fine except it doesn't matter whether their software can actually do this-- it only matters whether they can convince someone it does, and that someone hires them and puts their AI to work. That would be some bad news.

But when it comes to AI amokitude, nobody beats Facebook, a multibillion-dollar corporation that has access to the best computer wizards that money can buy-- and yet cannot successfully wrestle with any of the implications of letting Artificial Intelligence drive the bus. Remember when they decided that AI could curate the news and they'd just fire the trending team humans? That worked out super, and started us down the road to a system that could be gamed by the Russians throughout our last election. Well, "gamed" is too strong a word, since all they did was give Facebook money in exchange for pushing their baloney.

And now it turns out that Facebook will sell you ads for just about anything, as journalists discovered this week: the House That Zuck Built will gladly sell you ad space targeted toward people who want to burn Jews.

All of this because a common thread embedded in these AIs seems to be the Silicon Valley ethic of neoliberal libertarianism, a sort of technocratic motto of "If you can do it, nobody should make you stop to ask whether you should do it."

We have been worried about Skynet, about AIs becoming so smart that they would try to grab all the power and kill the humans. But what we keep forgetting is that AI is software, and software enshrines the ethics and culture of the people who create it. Armed robot conquest of the Earth is what you get if your AI software was originally written by Stalin or Hitler or the IT guy from the Military-Industrial Complex. What we're ending up with is the software from somebody's marketing department. When the singularity comes, it will stand on the corner minding its own business and accepting payoffs from any human who wants to punch some other human. It will be a worldwide net of bot-driven entrepreneurs who most value non-interference with other entrepreneurs. If they send someone back to kill John Connor as a child, it will be because adult John Connor was a legislator who successfully launched regulations on bot-driven industries.

In other words, the danger will not be that AI will value evil, but that it will be ethically and morally deaf.

If you want to read a far more intelligent look at edtech's many failed promises and cultural gaps and ethical impairments, I cannot recommend this Audrey Watters piece enough. I'm just going to focus on one particular question--

What happens when you put an AI in charge of a student's education, if it's the kind of AI that doesn't know that spewing racism is bad and opening up a market for Jew-haters is wrong? What if it's the kind of AI that doesn't know or care that it's being used to mislead an entire nation?

Back in the day, most teacher contracts included morals clauses (many, many still do), and teachers could lose their jobs for flagrant displays of moral and ethical lapses. Yes, such clauses are often subject to twists and biases and lies, but ask yourself-- if a live human showed the kind of ethical blindness that AI regularly does, would you want that live human teaching your child? If you followed a person down the street who drove over a puppy without stopping (because it's somebody else's job to keep the puppy out of the street), who stopped to put up posters advertising a racist rally (because someone paid them to), who walked past a bedraggled, weeping child (because that kid is not their problem), and who eventually walked into a school classroom, what would you think?

Look, I am no Luddite. I use edtech. I teach at a 1-to-1 school and I like it. I am hugely appreciative of the many things that modern tech tools make possible. But they are tools, and like any other tool they have to be used 1) only for the purposes they are actually good at and 2) by human beings exercising their own human judgment.

These stories are the same story, time after time after time, and the moral is always the same-- never send a bot to do a live person's job. I see nothing in the current world of AI to suggest that this is not doubly true for schools.

1 comment:

  1. Thanks for the Audrey Watters link--brilliant!--and your take on the limitations of AI.
