Algorithmic government and the hidden fallacies shaping AI policy
Key topics:
How logical fallacies shape AI governance debates
How rhetorical devices distort tech policy and public trust
How bias in AI systems undermines fairness and decision-making
By Eugene Yiga
Logical fallacies are errors in reasoning that undermine an argument's validity. Often rooted in cognitive biases, they can mislead the listener or reader and lead to false conclusions. Nor are these fallacies merely academic curiosities: they have practical implications, shaping decision-making, public opinion, and policy debates. That's why recognising and understanding them is so important for critical thinking and effective communication.
One person deeply engaged in the study of logical fallacies, especially in the context of technology and governance, is Till Straube. As a postdoctoral researcher in the Department of Human Geography at Goethe University Frankfurt, he explores how these classical errors influence and manifest within algorithmic governance, with the goal of helping individuals avoid being swayed by arguments that sound persuasive but lack a logical foundation. Indeed, his work highlights the intersection of technology, policy, and societal impact. In doing so, it shows that the same fallacies that affect traditional arguments are also prevalent in discussions about digital technologies and their regulation.
For example, the “slippery slope” fallacy is a common rhetorical device Straube identifies in discussions about algorithmic governance. It involves suggesting that a minor starting point will lead to a chain of related events culminating in a significant (usually negative) effect. In the context of algorithms, this might manifest as the fear that incremental increases in surveillance capabilities could lead inexorably to a dystopian state of constant monitoring.
Another instance is the idea of “moving the goalposts”. This occurs when the criteria for proving a claim are continually adjusted to exclude evidence that might disprove it. In algorithmic contexts, this can happen when the success metrics for AI systems are constantly changed to paint a favourable picture of their effectiveness, regardless of actual performance or impact.
A third example Straube offers is the fallacy of “appeal to a better world”. This involves making unsubstantiated claims about the societal benefits of a technology. Proponents of certain AI technologies might argue, without sufficient evidence, that their implementation will lead to a more efficient, safe, or fair society, emphasising potential benefits while downplaying possible risks and failures.
Finally, “redrawing the blackbox” is a fallacy where the complexities or failures of part of a system are isolated or minimised to suggest that the rest of the system is sound. In algorithmic governance, this might involve claims that issues with AI are limited to problems with data inputs, rather than with the algorithms themselves, thus maintaining a veneer of technological infallibility.
Rhetorical devices in AI discourse
Straube’s examination of rhetorical devices reveals how persuasive language influences public opinion and policy on AI, often masking the nuanced realities of these technologies.
“Solutionism” is one such rhetorical device, where there is a presumption that technology inherently provides solutions, often bypassing deeper analysis of the problems it seeks to solve. This rhetoric can mislead policymakers into adopting technologies without fully understanding their implications or whether they truly address the issues at hand. A prime example is the adoption of blockchain for various public services without addressing its scalability and environmental costs, which may not solve underlying inefficiencies in government systems.
“Human in the loop” is another rhetorical device that suggests human oversight can always counteract any AI system’s faults, offering a false reassurance of control and accountability. This can be misleading, as seen in the use of automated decision-making in parole systems, where human oversight is presented as a safeguard yet often fails to mitigate biases encoded in the AI algorithms.
Lastly, the "appeal to dystopia" uses exaggerated fears of a technology-led authoritarian future to argue against AI developments. While it raises valid concerns, this approach can also obscure practical discussions on how to integrate AI technologies responsibly. For instance, discussions around facial recognition technology often evoke dystopian scenarios that overshadow constructive debates on regulation and ethical use, preventing a balanced understanding of the technology's benefits and risks.
The real-world impacts of AI fallacies
Indeed, Straube’s research on the misapplications of AI technologies, particularly in predictive policing and facial recognition, underscores the significant risks associated with these tools. These technologies, often lauded for their objectivity, rely heavily on historical data that is not only incomplete but also inherently biased. This reliance perpetuates existing societal inequalities by encoding and amplifying biases within the algorithms themselves.
For instance, predictive policing models can disproportionately target specific communities based on past arrest data, which reflects historical policing prejudices rather than objective indicators of future crime. Similarly, facial recognition technologies have been found to have higher error rates for people of colour, leading to higher rates of misidentification and false accusations. These issues highlight the need for a deeper understanding and more rigorous evaluation of how data shapes algorithmic governance and the consequences of its flawed application.
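The feedback loop described above can be sketched in a toy simulation. All numbers and district names here are hypothetical, chosen only to illustrate the dynamic: two districts have identical underlying crime rates, but one starts with more recorded arrests, so a model that allocates patrols in proportion to past arrests keeps sending officers there, generating still more arrests.

```python
import random

random.seed(42)

# Two districts with the SAME underlying crime rate, but district A
# starts with more recorded arrests due to historically heavier patrols.
# (All values are illustrative assumptions, not real data.)
true_crime_rate = {"A": 0.10, "B": 0.10}
arrests = {"A": 50, "B": 10}  # biased historical record

PATROLS_PER_DAY = 100

for day in range(365):
    total = arrests["A"] + arrests["B"]
    for district in ("A", "B"):
        # "Predictive" allocation: patrol in proportion to past arrests.
        patrols = round(PATROLS_PER_DAY * arrests[district] / total)
        # Arrests can only occur where patrols are sent, even though
        # the true crime rate is identical in both districts.
        for _ in range(patrols):
            if random.random() < true_crime_rate[district]:
                arrests[district] += 1

print(arrests)  # district A accumulates far more arrests despite equal crime
```

The point of the sketch is that the disparity is self-reinforcing: the model's output looks like evidence of more crime in district A, when it is really an artefact of where the data was collected.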
Building on the examination of rhetorical devices, Straube’s work extends to a critical analysis of the broader debate on AI, all the while contrasting the arguments of both proponents and critics. By identifying common fallacies that each side employs, he emphasises how these distort the true potential and risks associated with AI technologies.
Doomers and boomers
Proponents often gloss over AI's limitations and ethical dilemmas with overly optimistic narratives, while critics may focus excessively on worst-case scenarios, potentially stifling innovation. Straube, meanwhile, advocates for a more nuanced discussion that transcends these simplistic arguments. By dissecting these fallacies, he encourages a more balanced discourse that weighs both the beneficial applications and the ethical considerations of AI, fostering the deeper understanding essential for responsible development and implementation.
Ultimately, Straube's insights into logical fallacies reveal their significant impact on AI policy and public trust. Misleading or oversimplified arguments can produce policies that either overestimate AI's benefits or underestimate its risks, shaping societal acceptance and the broader regulatory landscape. Enhancing public discourse therefore requires promoting media literacy that includes an understanding of AI and its implications.
Educational initiatives should focus on demystifying AI technologies, explaining their operational mechanisms, and discussing their ethical dimensions. Such efforts would enable the public and policymakers to engage in more informed debates, thereby fostering policies that genuinely reflect the complexities of algorithmic governance.