The coming reckoning: AI’s rise and the chaos it leaves behind - Ivo Vegter
Key topics:
The meaning of "AI" has evolved from simple logic systems to marketing hype.
Generative AI creates “AI slop,” recycling and distorting human-made content.
AI’s rise threatens truth, creativity, and public discourse through automation.
By Ivo Vegter*
Truly disruptive technologies are relatively rare, despite what marketers will tell you. AI is worthy of the adjective.
As someone who once studied computer science, I get a little annoyed by the shallow grasp most people have of what AI is, and what counts as AI.
Since the artificial intelligence (AI) hype reached its present deafening roar with the launch of ChatGPT back in November of 2022, almost everyone claims to be selling AI-powered somethings, even if they were calling it something else before.
The evolving meaning of “AI”
The term AI itself is vaguely defined. Back in the 1950s, when it was coined, it referred to symbolic logic or heuristic programming techniques.
By the 1970s, it was applied to expert systems, which are essentially rules-based decision systems. A sufficiently complex if-then tree can look to a passive observer exactly like an “expert”, or “artificial intelligence”. Today, these systems are sold (and taught) as “knowledge-based AI”.
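To show just how mechanical such an "expert" is, here is a toy sketch in Python. The loan-approval rules are invented for illustration, but any sufficiently elaborate version of this is what gets sold as knowledge-based AI:

```python
# A toy "expert system": an if-then rule tree dressed up as expertise.
# The rules and the loan-approval domain are invented for illustration.

def assess_loan(income, debt, years_employed):
    """Walk a fixed decision tree and return a verdict with a reason."""
    if income <= 0:
        return "decline", "no verifiable income"
    if debt / income > 0.5:
        return "decline", "debt exceeds half of income"
    if years_employed < 2:
        return "refer", "short employment history needs human review"
    return "approve", "meets all rules"

verdict, reason = assess_loan(income=40_000, debt=12_000, years_employed=5)
print(f"{verdict}: {reason}")  # approve: meets all rules
```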
In the 1980s and 1990s, “AI” enveloped automation, neural networks, statistical modelling, pattern detection and handwriting recognition, robotics, rudimentary speech recognition, grammar checkers, and early machine learning systems, all of which were based on matrix algebra, and all of which are today sold as “AI vision”, “voice AI”, “AI assistants” or some other flavour of AI.
Statistical models became more sophisticated in the 2000s. Business analytics and business intelligence systems were all the rage. Online services like Last.fm, Amazon and YouTube began to deploy recommendation systems. Retailers notoriously learnt to predict a pregnancy before the family even knew. Bots became common in chat rooms online.
Today, they’re called “predictive AI” or “AI-powered personalisation” or “AI chatbots” or some such marketing drivel.
As neural networks became more sophisticated and, critically, the machines on which they ran became more powerful, techniques dating back to the 1980s, like convolutional neural networks, came down in cost and became widely adopted.
Natural language processing was now possible on fairly modest computers, and speech generation became passably tolerable.
Because such techniques could learn from patterns in data that humans wouldn’t notice without poring over it for hundreds or thousands of years, this was called “deep learning”, but it was still just matrix algebra.
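To make the "matrix algebra" point concrete, here is what a single neural network layer amounts to. The weights below are made up; a trained network simply has billions of them, fitted to data:

```python
# A neural network "layer" is a matrix multiplication plus a bias,
# followed by a simple nonlinearity. The weights here are made up;
# in a trained network they would be fitted to data.
import numpy as np

x = np.array([0.5, -1.2, 3.0])         # input features
W = np.array([[0.2, -0.4, 0.1],        # weight matrix (2 outputs, 3 inputs)
              [0.7,  0.3, -0.5]])
b = np.array([0.1, -0.2])              # bias vector

z = W @ x + b                          # the matrix algebra
activation = np.maximum(z, 0)          # ReLU: keep positives, zero the rest
print(activation)                      # [0.98, 0.0]
```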
AI everywhere
Then, this decade, a lot of these techniques were refined and combined into systems that could model and synthesise patterns in training data, leading to the generative AI explosion that has taken the world by storm.
Now, almost everything is "AI". Social networks deploy "AI-assisted moderation", they say. No, they don't. They use the same Bayesian spam filters that have been used for over 30 years. These filters have become more sophisticated, and look almost magical, but they're still just statistical models.
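For the sceptics, here is a miniature version of that Bayesian filtering. The word counts are invented; a real filter learns them from millions of labelled messages, but the arithmetic is no deeper than this:

```python
# A miniature naive Bayes classifier, the statistical workhorse behind
# spam filtering. The word counts below are invented for illustration.
import math

# how often each word appeared in spam vs. legitimate mail
spam_counts = {"free": 60, "winner": 40, "meeting": 5}
ham_counts  = {"free": 10, "winner": 2,  "meeting": 50}
spam_total, ham_total = 200, 200  # total words seen in each class

def log_odds_spam(message):
    """Sum log-probability ratios per word; positive means 'looks like spam'."""
    score = 0.0
    for word in message.lower().split():
        p_spam = (spam_counts.get(word, 0) + 1) / (spam_total + 2)  # +1 smoothing
        p_ham  = (ham_counts.get(word, 0) + 1) / (ham_total + 2)
        score += math.log(p_spam / p_ham)
    return score

print(log_odds_spam("free winner"))       # clearly positive: spam-like
print(log_odds_spam("meeting tomorrow"))  # negative: looks legitimate
```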
Self-driving cars are “AI”, they say. In reality, they’re just sophisticated pattern-recognition and process control systems.
All this stuff falls under the very broad “AI” banner, but involves a wide variety of algorithms and techniques. None of this stuff is actually “intelligence”.
AI systems are just expert systems, automation systems, heuristic systems, decision systems, data analysis systems, statistical modelling systems, or whatever, with a shiny new label that makes them more marketable.
Generative AI is just a massive system of (you guessed it) matrix algebra that does frequency analysis on training data, which it scrapes from the internet, and reconstructs the sort of thing that people generally say on a topic.
Only recently have we had processors (originally built for the matrix calculations and transformations needed to display 3D graphics) powerful enough to make it look fairly convincing.
Statistical sequencing machines
One can nudge an AI chatbot one way or another or impose categorical restrictions using “prompt engineering”, but what it produces is no more than a collage of statistically likely elements, be they words, pixels or sounds, in a sequence.
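Here is a toy statistical sequencing machine, a bigram model, to make the point. The "training corpus" is a few invented sentences; a large language model does the same trick with trillions of words and vastly richer statistics:

```python
# A bigram "language model": count which word follows which, then
# sample sequences weighted by those frequencies. Real large language
# models are vastly more sophisticated, but the principle is the same:
# emit statistically likely continuations.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# count successors for each word
followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        if word not in followers:
            break
        word = random.choice(followers[word])  # frequency-weighted sample
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```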
Large language models have become fairly sophisticated, and incorporate “reasoning” abilities that are based on the symbolic logic, heuristics and expert systems of yore, to solve awkward problems such as the inability to do basic arithmetic or deduce obvious conclusions from given premises.
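One way to picture that bolted-on "reasoning" is as a router that hands awkward questions to a deterministic solver instead of the statistical machinery. The sketch below is my own simplification, not any vendor's actual architecture:

```python
# A cartoon of hybrid "reasoning": route questions that pure statistical
# generation handles badly (like arithmetic) to an exact, rule-based
# solver. This routing scheme is a simplification for illustration,
# not any actual product's architecture.
import re

def answer(question):
    # if the question is plain arithmetic, compute it exactly
    match = re.fullmatch(r"\s*(\d+)\s*([+\-*])\s*(\d+)\s*", question)
    if match:
        a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
        return str({"+": a + b, "-": a - b, "*": a * b}[op])
    # otherwise, fall back to the statistical text generator
    return "(statistically likely words about: " + question + ")"

print(answer("127 * 46"))  # 5842, computed, not guessed
print(answer("why is the sky blue"))
```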
All this might look like intelligence, but it is just a statistical sequencing machine that creates a synthetic replica of what intelligence looks like. Surprisingly often, it proves to be wrong, stupid or even dishonest. It cannot be used as a reliable source, or even a reliable summary of sources. It is no more than an unreliable tool that is sometimes useful.
Yet all of this has blown up an AI investment bubble that makes the dot com bubble look like a hiccough. Some companies and investors are going to get very badly burnt when their billion- and trillion-dollar bets evaporate into vaporware.
Truly disruptive
I don’t mean to sound dismissive of AI, because I’m not. Although it is really just an evolutionary development of principles and algorithms that we simply didn’t have the computing power to deploy at sufficient scale before, it is also truly disruptive.
And by “disruptive” I mean the unpredictable, astounding-and-terrifying-all-at-once kind of chaos-mongering that comes around maybe once in a generation. The world wide web and social media were “disruptive” technologies in the same sense.
As a disruptive technology, AI will have both amazing benefits, and awful drawbacks. We’re already seeing a lot of both.
On one hand, it is a tool that, in the right hands, can make a lot of work far more efficient. This is good. This is progress. That is what disruptive technologies have always done: automated drudgery and made workers of all kinds more efficient.
I think people who rely on AI for work ought to be very cautious about how they do so, but that it can be of tremendous assistance is not in doubt.
Sowing chaos
On the other hand, AI is sowing a great deal of chaos, and not of the good kind.
In just three short years since OpenAI unleashed ChatGPT upon the world, generative AI has become so good at “creating” content that our recommendation feeds are already inundated with what has become known as “AI slop”.
I put “creating” in scare quotes, because it never creates anything truly new. Generative AI is, at heart, a massively parallel automated plagiarism engine that consumes and digests all the human-created content it can find, and spews it out in slightly modified (and often incorrect) form.
This has some scary consequences.
Consider what happens with all the AI slop that is being published online. It becomes yet more fodder for AI models to consume and regurgitate.
Monster
At first, large language models were hyper-efficient thieves and summarisers of human intellectual property. That was dangerous enough, since the mere fact of being human-generated does not make content correct.
By now, however, AI is becoming a monster that eats its own vomit, digests it, and excretes it as “facts” that it serves up to unsuspecting humans.
Search engines all have AI summaries at the top of their search results. Instead of going to sources created by humans, who hopefully followed some sort of protocol for establishing and verifying facts, many people will be satisfied with the AI slop presented to them.
Not only does this deprive the human generators of knowledge of income – in much the same way that search engines and social media deprived the journalism industry of income, and with much the same consequences – but it reinforces the vicious cycle of AI-generated misinformation.
Once the AI models have reconsumed their own slop, via human writers who simply accept what the models told them, whatever errors those models made will become myth, widely presumed to be true, because how can a hundred or a thousand websites written by actual humans all be wrong?
AI arbiter
So-called AI moderation (which is really just sophisticated Bayesian statistics, perhaps backed up by a large language model) will become the arbiter of what is and isn’t acceptable speech online.
Platform companies long ago gave up the fight to be treated as neutral communication channels that aren't responsible for the content users create.
Partly because of over-zealous censors and regulators, and partly because the platforms have all deployed algorithms (sorry, “AI”) that decide what content to amplify for whom, they have become publishers, and not mere conduits for other people’s content.
That means they must implement content moderation, which cannot be done without automating 99% of it. Which means AI systems will determine what we may and may not say online; what is and is not factual; what will and won’t offend.
Even assuming AI systems will learn to do what even human moderators never did – to recognise humour, hyperbole, metaphors, sarcasm and satire – this will ultimately automate the formation of public opinion. And because it is based on matrix algebra with weights derived from existing online content, opinions that deviate from the majority views will be at a disadvantage.
If the majority believes in homeopathy, or thinks pharmaceutical companies are inherently evil, or thinks nobody deserves a billion dollars, or thinks birds aren’t real, or thinks socialism is dandy, or follows QAnon, or believes in a particular deity, or believes propaganda designed to destabilise the free world, then this will determine how online content is filtered and moderated.
Artists and writers
Generative AI will also harm artists, musicians and writers. They’ve already had to watch talentless idiots lipsync and autotune, and even burp and fart, their way to wealth and fame. They will now have to cope with the fact that much of their audience will tolerate consuming AI slop, because, hey, it’s funny to watch historical figures step on rakes or spout modern slogans.
Most people instinctively sour on something they at first appreciated once they learn it is AI-generated, but that doesn't really diminish the threat.
Advertisers, who pay for most of this stuff, aren't that picky about how people get to see their messaging. They care that people get to see their advertising, and will gladly fund torrents of algorithm-moderated AI slop if people can be convinced to mindlessly consume it.
Politics and science
There’s a threat to politics and science, too. With generative AI becoming ever-more realistic and convincing, the problem of “deep fakes” is only going to get worse.
The internet is already full of politicians made to say things they never said, public events that never happened, sciency videos that are plain wrong, or lurid tales that only vaguely resemble historical fact, if at all.
Create fake slop that feeds into people’s confirmation bias, or morbid fascination, or desire to be surprised – in short, that activates their dopamine system – and that fake slop enters the mainstream of information.
Soon, it could become hard to tell fact from fiction even if you’re an expert. The regular Jane or Joe who just scrolls for shits and giggles will have no hope of learning much of value, and every chance of becoming a redistributor of AI sewerage.
AI rights
The other day, I came across a software engineer who thinks people who use “bigoted language” against AI chatbots are just as bad as people who express hateful beliefs about people. As if AI systems have feelings that can be hurt, rights that can be violated, or merit being protected from prejudice and discrimination.
What we call AI today is a tremendously powerful and occasionally useful toolset, for sure. It has the potential to make us much more productive. That in itself is disruptive, since people will have to adjust to changing dynamics in the workplace as AI systems start doing some of the work people had to do before.
The far bigger disruption, however, is to be found in the gulf between people who literally want to give AI human rights, and people who recognise the manifold dangers AI slop poses to creativity, journalism, science, knowledge, and politics.
In the late 14th century, the prescient reverend Thomas Wimbledon warned, “Þe day of streyt rekenyng shal come, þat is þe day of doom.”
In other words, AI is going to cause trouble. I don’t know exactly how, but it will.
*Ivo Vegter is a freelance journalist, columnist and speaker who loves debunking myths and misconceptions, and addresses topics from the perspective of individual liberty and free markets.
This article was first published by Daily Friend and is republished with permission