Sunday, January 12, 2025

AI Can't Imagine Future Humanity

It's a minor throwaway article, but it is a fine example of how people who aren't paying close attention both accept and perpetuate huge misconceptions about what AI is or can do.

The headline of the story, which originally ran on Tom's Guide but was picked up by MSN (which is itself a bad sign), is "I used AI to imagine humanity in 50,000 years — here’s how it went." The piece is by Ryan Morrison, the AI Editor for the tech-centered website. The first three paragraphs tell us how far into the weeds we are headed.

I’ve always been a daydreamer, leaving my mind to ponder the possibilities of what could be to come. With the help of artificial intelligence tools, I can turn those ponderings into something I can actually see and even interact with.

Recently I found myself talking about space travel with ChatGPT, asking it about timelines and the impact terraforming a smaller world like Mars might have on human physiology. This later led to me having ChatGPT outline how it perceived humanity over 5,000, 10,000 and 50,000 years of evolution.

I also had it come up with ways humans might change if left isolated on different terraformed planets such as Mars, with its lower gravity or even the moons of the gas giants. I then used Freepik’s impressive Mystic 2.5 image model to bring them to life.

Morrison goes on to talk about how he offered ChatGPT different parameters and asked some other pointed questions. It all seems built around the notion that when he asks ChatGPT these questions, ChatGPT goes and looks at all the scientific research surrounding Mars and human physiology and gravity and whatnot and works up a series of theories based on a rational consideration of all the pertinent science. "Well, what if human civilization splits with no contact for a few millennia?" he asks, and ChatGPT strokes its chin and says, "Well, let me consult some sources and run some numbers."

The article is laced with references to his "conversation" with ChatGPT. The program went on to "outline" how it "perceived" human change. He asked it to "imagine" humans in 50,000 years. 

Of course, ChatGPT doesn't do any of that. The stochastic parrot strings together an assortment of probable words in a probable string, given the prompt and given whatever training it has on sources that string together words similar to the prompt words.
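If you want to see that mechanism stripped to its bones, here is a minimal sketch in Python. The vocabulary and the probabilities are invented for illustration only; a real model learns distributions over tens of thousands of tokens, but the basic move is the same: pick a likely next word, then do it again.

```python
import random

# Toy next-word probabilities, invented for illustration.
# A real LLM learns distributions like these from its training text.
next_word_probs = {
    "humans": {"will": 0.5, "might": 0.3, "evolve": 0.2},
    "will":   {"evolve": 0.6, "adapt": 0.4},
    "might":  {"evolve": 0.5, "adapt": 0.5},
    "evolve": {"slowly": 0.7, "rapidly": 0.3},
    "adapt":  {"slowly": 0.6, "rapidly": 0.4},
}

def continue_text(prompt_word, length=4):
    """Repeatedly pick a likely next word. No reasoning, no sources."""
    words = [prompt_word]
    for _ in range(length):
        options = next_word_probs.get(words[-1])
        if options is None:
            break  # nothing probable left to say
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("humans"))  # e.g. "humans will evolve slowly"
```

Note what is absent from the toy: at no step does anything consult a source, weigh evidence, or imagine. Each word is a weighted draw conditioned on the words that came before it.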

People like to think of "artificial intelligence" as the equivalent of some really smart, extraordinarily well-read professorial type, or perhaps any of the artificial personalities we know from popular fiction. People who have AI products to sell like that picture of AI very much, but Artificial General Intelligence like that is not here yet, and may never get here. GPT-5 is also not here.

In the meantime, people who really ought to know better keep pretending that Large Language Models like ChatGPT are something far more advanced (and useful) than they actually are. But here's Morrison, saying his website bio has been written for him by ChatGPT, a "silicon-based life form."

This is the kind of stuff that trickles down to the general public and teachers and administrators and leads them to put all sorts of faith in "AI" that it does not deserve and cannot live up to. If we're going to have conversations about AI's proper place in the classroom, they will have to be based on reality and not marketing puffery and the imagination of over-excited commentators.


