By John Thornhill for The Financial Times*
Early radio was blamed for mysterious supernatural phenomena, such as talking radiators and stoves, according to newspaper reports of the time. English doctors once feared that excessive cycling would overtax the nervous system and produce anxious, worn “bicycle faces”. Teachers lamented that replacing the slide rule with the electronic calculator would erode our understanding of mathematical concepts. Besides, what would happen when the batteries ran out?
All these examples of technophobia are taken from the Pessimists Archive, a wonderful collection of “fears about old things when they were new”. Anyone concerned about the rise of artificial intelligence today should rummage around this digital record. It is striking how many of our contemporary fears echo previous concerns about the growing supremacy of machines and human obsolescence. It is also reassuring to see how many of these moral panics proved spectacularly wrong, appearing almost comical with hindsight.
Of course, just because the doomsters were often wrong about the evils of past technologies does not mean that the pessimists are wrong about AI today. But we should at least focus on whether, or in what significant ways, the latest AI differs from what came before. There would certainly be a lot less fuss about AI if we were to demystify the field and rename it computational statistics, as some technologists suggest. And, as the Pessimists Archive makes clear, futurists tend to overemphasise the speed of adoption of most technologies and underemphasise the scope for adaptation. They can tell us what technologies can do in theory, but not how they will be used in practice.
One argument for AI being different — and a legitimate worry — has been made by the philosopher Daniel Dennett, who has long taken an interest in the field. In an essay in The Atlantic, he argues that for the first time in history AI can be used to create “counterfeit people” who pass for real people in our digital world. These deepfakes, as others have called them, could be controlled by powerful people, corporations and governments; Dennett calls them the most dangerous artefacts humanity has created, capable of being deployed at scale to distract and confuse, eroding rational debate. “Creating counterfeit digital people risks destroying our civilisation,” Dennett writes. “Democracy depends on the informed (not misinformed) consent of the governed.”
Dennett may, or may not, be overly alarmist about the threat of this technology. But he acknowledges that technology can also provide a solution. Just as we have largely solved the problem of counterfeit money, so we can postpone, if not extinguish, the ominous threat of counterfeit people. Most banknotes today contain high-tech watermarks, such as the EURion constellation, a pattern of embedded symbols that blocks colour photocopiers from reproducing legal currencies. Computer scientists are already developing similar watermarking techniques to flag AI-generated deepfake content.
“Watermarking is technically possible but not necessarily the most useful route,” the founder of one generative AI company told me last week. The emphasis should be as much on how deepfakes are distributed as on how they are created. In other words, we also need to focus on the equivalents of the photocopier manufacturers: the social media companies. That strengthens the case for ensuring that user accounts are verifiable and can be held accountable for their output.
Other technologists accept Dennett’s argument that the novelty of AI is that it blurs the line between machines and humans. But that can also be a good thing. So many of the problems of our computerised society stem from the fact that machines are inflexible, says Neil Lawrence, professor of machine learning at Cambridge University. Most computers are deterministic machines that can only tackle problems quantitatively, leaving little room for ambiguity, doubt or nuance. But the latest generative AI models are probabilistic machines trained on all human knowledge on the internet, and are therefore more deeply embedded in human culture.
That raises the possibility that machines can increasingly be used to address problems qualitatively, as humans do. “Humans have adapted to all previous technologies. But this technology can adapt to us,” says Lawrence, who makes that case in a forthcoming book on AI.
When it comes to developing healthcare chatbots or self-driving cars, machine adaptability may be a particularly useful attribute. We should listen to the optimists, too.
*John Thornhill is the founder of Sifted, an FT-backed site about European start-ups.