This week Neil Selwyn (Monash University) turned up on the website of the British Educational Research Association with "Robots in the Classroom? Preparing for the automation of teaching,"
a title which poses a question and then skips over the really long discussion that question ought to prompt.
Fifty years after Stanley Kubrick introduced cinemagoers to HAL9000, the prospect of a robot-infused world still feels more science fiction than social fact. Yet robots are steadily beginning to impact on the nature of contemporary work. Industries such as circuit-board manufacturing and underground mining now rely on automated, mechanised robots. Elsewhere, intelligent systems are prompting forecasts of the ‘end of the professions’ and declining need for human doctors, lawyers and accountants. High-tech automation is now a real proposition across many sectors of work and employment.
One notable exception to this trend is education.
Selwyn notes that "it is generally assumed" that teaching will continue to be done by humans. That assumption, he suggests, is tied to the "continued dominance of mass schooling." Yet advances in artificial intelligence, robotics and machine learning should change that. Why are people pushing classroom robots? Well...
There is clearly growing corporate impatience to reform what are perceived as outdated and inefficient school systems. Calls for the automation of classroom teaching are often driven by desires to ‘reboot’ 20th century school systems that business interests suspect are no longer fit for purpose.
Without discussing exactly which "purpose" we're talking about, Selwyn connects this push to "growing political disgruntlement" with the teaching profession. Robot teachers would help disrupt unions and the profession as a whole.
He notes the wide range of AI products out there, and their ability to "capture over a million data-points per user," without questioning whether that might also be a motivator for business, as well as a huge threat to the privacy of the children themselves.
But it's the non-response of the profession that seems to bother him. Why, given the momentum of the robot onslaught, is there not "greater consternation throughout education"? Why is education as a sector not spending more time preparing for the arrival of our robot overlords?
There are plenty of reasons not to get excited about the robots. For one, the industry is questioning its own developers. A recent sciencemag.com piece focuses on Ali Rahimi, a Google researcher who got a huge ovation at an AI conference for charging that machine learning algorithms have become a form of "alchemy." Researchers can't reproduce results, and many developers don't seem to know which parts of their programs are actually doing the work.
Despite this inability to actually produce Chief Instructor Robot, many school leaders are moving from a bunch of algorithm-driven learning software to even more of it. In Clearwater County, Idaho, for example, parents are complaining that the Summit software has moved from being a curriculum supplement to being the actual curriculum. In response, the school board president touted the promise of Personalized [sic] Learning, which leans even more heavily on an algorithm-driven mass instruction software setup.
There are many reasons to resist the Robo-Teachers-- the lack of data privacy, the biases embedded in any software by its programmers, the programmers' lack of actual education expertise, the lack of human contact and interaction for students. But there are problems even simpler than that.
I've just returned from a visit to my daughter and her family in Seattle. They have a Google Home system (similar to the Alexa) and it sort of works, except when it doesn't. I watched my grandchildren and their parents ask for the same song many times over the course of the visit, and there was no predicting what Google would actually do in return (on the plus side, I've now heard the German version of the Gummi Bear song). My wife and I used the GPS programs in our phones, which as usual worked well except when they didn't (first it couldn't find the grocery store, then it couldn't find us). And I'll just refer you to the story of my ex-wife's mail.
In other words, on top of the philosophical objections to robot teachers, we also have to consider whether they could actually do the job well. All the empirical evidence says no. Like the sort-of mostly self-driving kind of cars, most of our computer-based, algorithm-powered tech works only as long as there's a human buffer between the tech and the world-- because the tech doesn't work well enough on its own to be trusted. And that means it doesn't work well enough to be entrusted with the running of a classroom occupied by tiny humans-- or even a single tiny human.
But here-- since Selwyn opened with a Kubrickian HAL9000 reference, it seems only appropriate to end with this: