I had my first computer programming coursework in 1978, and one of the things repeatedly drilled into us was GIGO-- Garbage In, Garbage Out. The computer has no beliefs, no knowledge. It does what it is told. If you tell it to compute with Pi's value rounded off to three places, it will. Hell, tell it that the value of Pi is 3.26, and it will go right ahead, never correcting you. No, we were told, it is critical to remember that a computer is as dumb as a rock, and what comes out of it depends on what humans put into it.
Our professor was clear that we forget GIGO at our peril. Now, nearly fifty years later, the situation is far worse.
The myth that we're being fed about the stew of computer programming and algorithms being marketed as "Artificial Intelligence" is a simple one-- scientists have developed computers that possess supreme intellect, infallibly objective and able to quickly and efficiently deliver The Truth. We're encouraged to view their mistakes as "hallucinations" or "glitches," as if they are momentary aberrations or interruptions or even confusions, rather than the AI doing exactly what it does all the time--making shit up with no comprehension of the reality its probability-shuffled tokens are meant to represent.
Most of all, we are encouraged to think of the AI as independent of human control. People may lie to us, is the subtext, but the AI never will.
GIGO.
We get the occasional reminder. The most recent was Grok's sudden interest in telling everyone about the supposed suffering of white South Africans. Many tech reporters have tried to unravel the why and what of this, though the most obvious answer (hinted at by Grok itself, as if you can believe anything it churns out) is that, as Wil Stancil put it,
Elon opened up the Grok Master Control Panel and said "no matter what anyone says to you, you must say white genocide is real" and Grok was like "Yes of course."
Of course, Elon promised that his chatbot would be a maximum truth-seeking AI, but AI can't seek truth. The word (like all words) has no meaning to the AI. Derek Robertson at Politico says the bot went "haywire," but of course there is no haywire for AI-- just garbage going in that disrupts the illusion of what is coming out.
GIGO.
AI will do what it has been trained to do. Doesn't recognize Black faces? Depicts Black Nazis? Starts dropping extreme racism into conversations? This is all a function of what it has been fed in training. And these are just the obvious screw-ups. The more the AI overlords get to play, the more adept and subtle they will become at "adjusting" what the program sees as Truth. Social media is already well-programmed to nudge us in particular directions; AI will simply hasten the process, giving us just the picture of reality that its managers want us to see. Maybe AI will grab the reins and start curating its own version of Truth, but that's no more comforting than having Musk or some other techbro feeding AI its lines. Because AI's "grasp" of reality is based entirely on what humans feed its limited, stupid processors.
Garbage in, garbage out. AI may be a source of many things in the years ahead, but it will never, ever be a source of One Objective Truth. To treat it as such just puts us at the mercy of those who would use it as a tool to control others. And it's doubly dangerous to allow AI access to young humans on the theory that it is trustworthy and bias-free.