AI floods sci-fi magazines as experts dismiss myths of machine consciousness
Key topics:
SF magazines halt submissions due to flood of AI-written stories
Experts denounce AI’s lack of creativity, morals, and true intelligence
Article rejects myths of sentient machines, calling AI a mere tool
By Tom Learmont
There’s panic on Wall Street, where three major SF (science fiction) magazines are based. Fantasy and Science Fiction, Asimov’s Science Fiction, and Clarkesworld have all slammed their doors on submissions, thanks to the flood unleashed by ChatGPT late last year. And Elon Musk is so terrified of Big Brother sentient software that he’d like to blast off for Mars. The AI chatbot has been hyped as:
being conscious, sentient and self-aware;
ready and willing to take over the world and enslave the human race;
able to put human artists, architects, composers and authors out of business.
The roots of these superannuated myths may be found in Golden Age SF stories, where the notion of machine intelligence began long ago, in the age of mechanical adding machines. Here are some early tropes from the Fifties.
The human race wipes itself out and robots build an intergalactic empire.
All the computers in the galaxy are linked. When asked “Does God exist?” the answer comes in a femtosecond: “He does now.”
An amateur pianist with a sentient robot servant sees a virtuoso playing and thinks: If only I had those hands. She finds them on her night stand, neatly severed.
A hard-working midlist author is replaced by A.I. software and remarks, “Give me the courage to watch my children starve.”
Along with sentience, mental illness can afflict a fictional machine – such as HAL, the computer that went mad in Stanley Kubrick’s film 2001: A Space Odyssey.
Fortunately there’s an answer to these AI nightmares in Isaac Asimov’s Three Laws of Robotics:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Why do people fall for the AI myth? It’s a case of divide and conquer. Non-readers believe stories about machine-scribbled literature. Non-architects accept the notion of a building designed by algorithms. The daily commuter on the Gautrain who has a Tretchikoff print on her dining room wall may assume that a consortium of Intel chips can dream up a canvas to rival Las Meninas by Velázquez. Alas, AI cannot dream. Its output is cut and pasted, topped and tailed, assembled from data skimmed off the frothy upper levels of the internet – by the same vast engine that trawls the slimy floor of the web. The labour-saving algorithms have their uses, sure, as a great time-saver for scientific research, number crunching and pattern recognition. But does even the souped-up new GPT-4 have any provision for Asimov’s three robotics laws?
Seeking clarity on what some detractors call “Artificial Ignorance”, The New York Times recently published an essay by two linguistic heavyweights, Professors Noam Chomsky (MIT) and Ian Roberts (Cambridge University), along with AI boffin Jeffrey Watumull. These experts offered faint praise, writing that the software might be helpful in suggesting rhymes for light verse, but was encoded with “ineradicable defects”. They lamented that “…so much money and attention should be concentrated on so little a thing – something so trivial when contrasted with the human mind”.
Dr Watumull describes how he asked ChatGPT about moral responsibility. The evasive answer he got made him think of the banality of evil. The chatbot “refused to take a stand on anything, pleaded not merely ignorance but lack of intelligence, and ultimately offered a ‘just following orders’ defence.” In other words, like a Nazi concentration camp guard in the war crimes dock at Nuremberg. The three experts sign off their joint essay with these words: “Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.”
AI works with data cut and pasted from online sources. The neural network has its uses for powerful scientific searches, comparisons, pattern finding and number crunching. But in fields where creativity counts ab initio, forget it. The bot will tell you E = mc² because it got that from Einstein, not because it dreamed up the equation unaided. AI very rarely shows us something we didn’t know or never suspected. It believes what it’s told – even by compulsive liars. A neural network may sound sexy, but it can mislead the innocent with alarming speed.
A chatbot is a tool, just like an Acheulean hand axe or a vacuum cleaner. As such, it has no morals, no personality, no emotions, no common sense, no judgment, no senses or artistic taste of its own. No joy or sadness. No children. It’s an electronic idiot savant – a “wise idiot”. No more than a literal-minded, brainwashed box of bits and bytes that puts on a cleverly programmed act. This impressive performance may vary from amusing to creepy. But the AI fantasy is debunked by an indisputable fact: the human race is still far from solving the huge mystery of consciousness. Google any neuroscientist you like for confirmation of that claim. So the notion of wiring up an electronic replica of a self-aware human – let alone superhuman – brain is best left to SF. One of Asimov’s positronic brains might be good enough.
Real creatives know a lack of originality when they see it in their own fields. Which brings us back to the SF mags on Wall Street. Their predicament has nothing to do with malice towards human writers: the magazines are targeted because they pay generous rates per word. ChatGPT, launched late last year, has unleashed a tsunami of crud upon their heads, and they’re not coping with the flood of short story submissions written by bots.
The New York Times quotes Sheree Renée Thomas, editor of The Magazine of Fantasy & Science Fiction: “There are very strange glitches and things that make it obvious that it’s robotic. It’s just sad that we have to even waste time on it.” Neil Clarke, the editor of Clarkesworld, was landed with some 500 machine-written short story submissions in February. He was quoted as saying the writing was “bad in spectacular ways.”
At Asimov’s Science Fiction, editor Sheila Williams told The Times: “The people doing this by and large don’t have any real concept of how to tell a story, and neither does any kind of AI. You don’t have to finish the first sentence to know it’s not going to be a readable story.”
Despite these reactions, ads are now appearing on the freelance site Upwork, seeking fiction editors to work on GPT-4 texts. And the Wall Street editors’ dismay is shared by educators who are devising workaround strategies for machine-written high school and college essays. You can’t hand a diploma to a bot on graduation day.
The protoplasmic brain that authored this essay originated in Scotland, built by a husband-and-wife team and programmed at the University of Edinburgh. It has two simple messages: software is a tool to be mastered, and there is no ghost in the machine.

