AI’s triumph over Kasparov: Shaping the future or fueling our fears?

The triumph of AI: an extraordinary journey from Kasparov's defeat to shaping an uncharted ethical frontier.

AI emerged victorious in the iconic clash between World Chess Champion Garry Kasparov and IBM's Deep Blue, marking a pivotal moment in history. The match showcased AI's potential to outwit even intellectual giants. The journey of artificial intelligence has been one of uncertainty, hope, and fear, prompting questions about its trajectory and our preparedness. From Alan Turing's Enigma-cracking Bombe to the AI winter and the resurgence of deep learning, the evolution of AI has been a rollercoaster. Deep Blue's triumph brought AI into the mainstream, foreshadowing its integration into daily life via tech giants like Google and Amazon. Yet, with AI's rise comes unease. Can we trust machines to align with our ethical codes? The "Black Box Problem" underscores the need for transparency in AI decision-making. Regulations attempt to address this, but the core issue may lie in our data's biases. Ultimately, AI's direction is shaped by us – governments, tech companies, and consumers. To ensure rational outcomes, we must minimise biases in the data we feed AI. The path ahead is clear: AI's future lies in our hands. The article was first published on FirstRand Perspectives.

Artificial intelligence vs the world

By Josh Gordon

"Deep Blue versus Garry Kasparov" was a pair of six-game chess matches between World Chess Champion Garry Kasparov and IBM's supercomputer Deep Blue. Kasparov, considered by many the best chess player of all time, was ranked No. 1 for 225 consecutive months from 1984 until his retirement in 2005. At age 22, he became the youngest undisputed World Chess Champion, defeating the then-champion Anatoly Karpov.

Deep Blue beat Kasparov two games to one, with three games drawn by mutual consent: a signal that Artificial Intelligence (AI) could outsmart one of humanity's great intellectual champions. AI's capricious history has been shrouded in uncertainty, hope and fear. Collectively, humanity has asked, "Where is AI taking us?" and "Are we prepared?"

Innovation has long been regarded as an invention born out of necessity. This was the case at Bletchley Park, the secret headquarters of British codebreakers during the Second World War. Alan Turing, one of the great mathematicians and cryptanalysts of his time, was tasked by the British military to decipher an unbreakable Nazi code encrypted by the infamous Enigma machine. Turing's own cryptanalytic machine, Bombe, effectively automated and optimised the trial of various codebreaking possibilities and consequently cracked the Enigma code – saving thousands of lives.

Although Bombe could not store or retrieve data (two critical functions of modern computers), the history of AI starts with these early manifestations of digital electronic computers.

Following a prosperous and auspicious period of AI research from the mid-1950s to the late 1960s, an era dubbed the "AI Winter" ensued. This era marked a steady decline in public interest and investment in AI research among business and academic communities. Feelings of pessimism toward AI did not wane until multi-layered neural networks (capable of deep learning) emerged, thanks in no small part to a group of devoted and dogged Canadian researchers.

At its most conceptual level, a neural network learns by processing numerous inputs and adjusting its internal parameters according to how close its output comes to a desired result. This is what makes machine learning so powerful: the ability to self-correct without continuous human intervention.
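The learning loop described above can be sketched in a few lines of Python. This is a minimal, illustrative example, not how Deep Blue or any production system works: a single artificial "neuron" repeatedly compares its output to a desired result and nudges its parameters to shrink the error. All names and data here are hypothetical.

```python
# A single neuron learning y = w*x + b by self-correction:
# predict, measure the error against the target, adjust, repeat.

def train(samples, targets, epochs=2000, lr=0.1):
    w, b = 0.0, 0.0  # parameters, adjusted on every pass
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            y = w * x + b        # the neuron's current prediction
            error = y - t        # how far the output is from the desired result
            w -= lr * error * x  # self-correct: nudge parameters against the error
            b -= lr * error
    return w, b

# Learn the mapping y = 2x + 1 from a handful of examples
w, b = train([0, 1, 2, 3], [1, 3, 5, 7])
```

After a few thousand passes the parameters converge close to the values that generated the data, without a human ever telling the neuron what they should be; stacking many such units in layers is what gives deep learning its power.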

In the late 1990s and early 2000s, big business and mainstream media began to take AI seriously. The exhibition chess match between Deep Blue and Kasparov captured the imagination of over 70 million television viewers – arguably the first time laymen acknowledged the potential for computers to mimic human intelligence.

Following Deep Blue's win, AI research rose like a phoenix from the ashes. This culminated in the development of Watson, a computer that beat two Jeopardy! champions in an exhibition game show. AI was no longer limited to number crunching; it could understand complicated contexts in the real world.

Major advances in AI took place under our collective noses. Vast strides in AI are prominent in the technology and software we use daily on smartphones and laptops. Big-tech firms such as Google, Apple, Facebook and Amazon make avoiding AI in our daily lives almost impossible. As Big-tech ushered AI in through the front door, uncertainty and fear crept in through the back.

Much like our reliance on our phones and the Internet for various daily tasks, the Oracle of Delphi was a respected source of advice for ancient Greeks. The oracle resided in the temple of Apollo, built around a sacred spring considered the centre of the world. Perched above a crevice where the divine spirit was said to radiate, the oracle would deliver messages from the Gods. Her advice was taken as absolute truth, and no major decision regarding war or life would be taken without consultation. Yet, instead of supernatural events, historians have found evidence to suggest that the divine spirit emitted through the crevice was a combination of gases capable of inducing a high. In all likelihood, ancient Greeks based their most important decisions on the word of an intoxicated mystic.

The oracle we consult today requires less of an introduction: reaching into one's pocket is far more convenient than a trip up a mountain. There is no need to travel far for consultation; we simply input large amounts of data, coded in a way that makes sense to our medium, and wait for a response. Unlike the ancient Greeks, however, the layman does not fully understand how conclusions are reached. Can we assume that AI machines will always make decisions that align with our established formal and informal ethical codes? Centrally, can we trust the machines we have created?

This is the crux of our discomfort.

When individuals do not understand basic principles of causality, intent and meaning, their actions are labelled insane. We would not let such individuals make decisions about war or home loans. Yet as AI is incorporated into these critical decisions, it has yet to be demonstrated that it will adhere to the social constructs that govern our lives.

As a consequence of AI's "Black Box Problem" (an AI system whose inputs and inner workings are not visible to the user or other interested parties), the sentiment that interrogating AI about how it reaches its decisions should be a legal right is gaining momentum. Regulations such as the Equal Credit Opportunity Act in the USA and the General Data Protection Regulation (GDPR) in the EU attempt to address these questions of trust.

However, are we misdiagnosing the source of our fear? Does the source of mistrust sit with AI itself? Or is AI just an algorithmic extension of the data we feed it? Similar to the fumes inhaled by Delphi all those years ago, any bias inherent in our data will manifest in the conclusions reached by AI.

Artificial Intelligence will always be governed by the data humans collect, process and use. It is therefore vital that we minimise the biases in that data. Only then will AI be able to reach sober, rational conclusions. For now, we are left to conclude that AI is headed in whatever direction governments, big tech and we, the consumers, guide it.

BizNews
www.biznews.com