With a PhD in Mathematical Statistics, Prof Mark Nasila leads the Artificial Intelligence project at FNB, Africa’s most innovative bank. Given his passion for the field, Nasila has kept a close eye on the past week’s drama around the firing and re-hiring of Sam Altman, rockstar of the global AI industry and co-founder of OpenAI, the world’s hottest start-up and the company that created ChatGPT. In this wide-ranging episode of UNDICTATED, Nasila also shares insights on how South Africa is performing in the AI revolution (poorly) and what can be done to leapfrog the nation’s slow start. He spoke to BizNews editor Alec Hogg.
Watch here
Relevant timestamps from the interview
- 00:09 – Introductions
- 01:04 – Prof Mark Nasila on his background
- 02:37 – How he got drawn into the field of artificial intelligence
- 05:09 – On the fear that AI will take jobs away
- 08:05 – On the situation surrounding Sam Altman
- 14:57 – Where is SA in terms of AI
- 18:20 – What would the priority be to get us onto the fast track in this global change
- 21:37 – Conclusions
Listen here
Edited transcript of the interview between Alec Hogg and Prof. Mark Nasila
Alec Hogg: In this episode of UNDICTATED, we’re delving into the world of artificial intelligence, currently in the headlines due to recent developments at OpenAI, the company behind ChatGPT. Its co-founder and CEO, Sam Altman, faced removal, then returned, and ultimately ousted the board. This unfolding saga is deemed a tipping point for the global AI landscape.
The significance of these events cannot be overstated. Joining us is South African AI expert Prof. Mark Nasila, Chief Data and Analytics Officer at FNB. Mark, great to have you with us. It’s impressive to see an AI expert in a banking organisation. How long have you held this position?
Prof. Mark Nasila: Thank you, Alec, for having me. I’m honoured to be here. My role involves modernising the risk function by harnessing data analytics and AI capabilities. Our journey began with leveraging algorithms for proactive risk identification. In 2019, we adopted artificial intelligence to further modernise risk functions, developing our first risk assessment models. These models expedite decision-making for investigators and due diligence specialists. We also incorporated generative AI in 2019, allowing us to create forensic reports efficiently and streamline decision-making processes.
Alec Hogg: It’s indeed a new world. How did you gravitate toward this field, especially considering your background with a PhD in mathematical statistics and probability, which is no small feat?
Prof. Mark Nasila: My passion for numbers and mathematics dates back to high school. However, my interest in leveraging data and algorithms to solve problems intensified during my time in the banking industry. I pursued a PhD focused on predicting financial crime as a statistical rare event. AI, rooted in algorithms and data, became a natural extension. Joining the Singularity Executive Program in San Francisco further fuelled my interest. Learning from figures like Peter Diamandis, Ray Kurzweil, and others provided insights into the transformative power of AI.
Alec Hogg: That’s fascinating. I had a similar exposure to the Singularity faculty in 2014, which reshaped my views on investing and exponential companies. Peter Diamandis and the entire faculty, including David Roberts, left a lasting impact.
Prof. Mark Nasila: Absolutely, and due to that interest, I became a faculty member for Singularity, focusing on data science, artificial intelligence, and data strategies. It’s been a learning curve, exploring how emerging technologies are redefining various sectors—agriculture, medicine, energy, the future of work, smart cities—all undergoing significant re-imagination today.
Alec Hogg: Many people express concerns about artificial intelligence, fearing job displacement. However, there’s another side to the coin—the potential creation of entirely new industries.
Prof. Mark Nasila: Indeed, it’s a valid concern. AI has matured significantly as a capability, solving complex problems across various sectors. We’ve witnessed breakthroughs like self-driving cars from companies like Tesla and advancements in medicine and finance. Yet, with this progress comes the responsibility to anticipate and address unforeseen consequences. Unethical and irresponsible use of AI, as seen in discriminatory job interviews and biased financial practices, raises critical concerns. AI’s exponential problem-solving ability is matched by its potential for widespread harm due to rapid decision-making. There’s also the risk of creating societal inequality if only a privileged few can afford and control AI. It’s crucial for organisations to approach AI responsibly, focusing on empowering employees, customers, and economies while considering its impact on society.
Alec Hogg: Your point about inequality is intriguing. Looking at the positive side, tools like ChatGPT open up education to anyone with questions. However, the recent upheavals at OpenAI, particularly the dismissal and subsequent reinstatement of CEO Sam Altman, raise ethical questions. Could you shed light on why this is significant and the role of Sam Altman in OpenAI?
Prof. Mark Nasila: The recent events have been quite dramatic, capturing global attention. Over the past year, ChatGPT gained prominence, showcasing the maturity of large language models with the potential to reshape every industry. This capability, directly accessible to consumers without AI expertise, is a game-changer. However, concerns arise during the experimental phase, especially regarding ethics. Sam Altman’s dismissal raised questions about transparency in ethical considerations and the development of large language models. Additionally, there’s a growing debate about the possibility of reaching artificial general intelligence, where machines can perform human tasks without supervision. This potential state poses risks and opportunities, presenting challenges in terms of ethical use, potential misuse, and the readiness of society to handle such advancements.
At its current stage, ChatGPT remains experimental, with a focus on addressing ethical concerns before transitioning into a product that enhances efficiency and effectiveness. The concerns raised during Altman’s firing emphasise the need for responsible development and use of AI, given its potential for both positive and negative impacts.
Alec Hogg: What were the driving forces behind these recent events? Did Sam Altman take too many risks, as suggested by the former OpenAI board?
Prof. Mark Nasila: The accounts circulating on social media and in articles imply that Altman’s lack of transparency regarding the progression toward artificial general intelligence, safety measures in ChatGPT and large language models, and non-compliance with the board’s expectations led to the upheaval. OpenAI, originally a nonprofit organisation aimed at solving societal problems, faced challenges aligning with its intended purpose. However, the truth remains elusive due to undisclosed behind-the-scenes actions, with Microsoft’s significant investment adding a layer of complexity. Speculation abounds, and only a select few possess the actual truth.
Alec Hogg: Despite the internal turmoil, the attention on ChatGPT and OpenAI underscores their global significance.
Prof. Mark Nasila: For a long time, scientists struggled to model language akin to human expression. ChatGPT’s breakthrough has profound implications for various industries, potentially reshaping or even creating new ones. The tool’s development, utilising data from the web, raises ethical concerns, with instances of output hallucinations and potential misuse to generate undesirable information. The global impact of ChatGPT is heightened by the comprehensive nature of language and the need for responsible control. The tool is noteworthy for its ability both to enlighten and, if not properly regulated, to lead to disaster.
Alec Hogg: How does South Africa fare in the realm of artificial intelligence, especially in comparison to the rest of the world?
Prof. Mark Nasila: South Africa lags behind in artificial intelligence, posing a considerable challenge. Projections indicate a global value of $16 trillion by 2030, with China and America expected to claim over 70% of this amount. These countries have established clear strategies for AI and generative AI, focusing on execution, manufacturing, and technology development. In contrast, South Africa lacks a cohesive strategy, despite having numerous startups and institutions engaged in AI-related activities. The value of AI is realised when it is implemented at scale, modernising platforms and reimagining industries. Establishing a strategy that encompasses data, technology, and ethics is essential for South Africa to make meaningful progress in the AI landscape. Currently, the absence of a specific ethical framework places the country at a disadvantage compared to regions like America and Europe, where frameworks for responsible AI have been established to build trust in the technology. Collaboration across sectors is crucial to formulating a measurable strategy that outlines South Africa’s AI objectives.
Alec Hogg: It seems like a substantial challenge for a country grappling with foundational issues, as we are well aware. Nonetheless, it’s an exciting time for humanity and for us in this country as we look ahead. Mark, if you were appointed as the Chief Information Officer for South Africa, Inc., what would be your top priority to accelerate our progress in this global shift?
Prof. Mark Nasila: While acknowledging our current lag in the field, I remain optimistic about South Africa and the African continent. Despite challenges, the issues we face today present ripe opportunities for artificial intelligence and other emerging technologies. Take, for instance, our energy problems: AI can improve efficiency, enable better resource management, and reduce energy losses. In the health industry, managing hospital capacity and fostering a healthier nation are significant AI challenges. The education sector holds vast opportunities to democratise education through technology, equipping the next generation with valuable skills.
If appointed to the cabinet, my first priority would be to advocate for the development of a realistic strategy, one that instils hope and empowers the average person to understand and embrace artificial intelligence. Literacy programmes would be essential to ensure widespread trust in AI, making it valuable for everyone. Additionally, I would identify specific areas that align with our identity, focusing on industries where AI adoption can solve problems, create products and services, and position us globally. It’s crucial not to attempt to be a hero in every aspect but rather to carve a niche in unique industries, offering something distinctive through AI. My concern is that we might end up as mere consumers, missing out on the opportunity to develop and apply AI-related skills, as seen in the engineering field. Having demonstrated success in leveraging AI in the banking industry, I believe the same approach can be applied across various sectors.
Alec Hogg: A hopeful message to conclude this episode of UNDICTATED. Professor Mark Nasila, Chief Data and Analytics Officer at FNB, and I’m Alec Hogg from BizNews.com.