In a realm once dominated by calculators and spreadsheets, the accounting world is embracing a new era of innovation. Generative artificial intelligence, led by models like ChatGPT-4, promises to revolutionize the industry, streamlining tasks and enhancing accuracy. As businesses eagerly adopt these technologies, educators and regulators face the challenge of ensuring their reliability and ethical use. Yet, with proper training and oversight, these AI advancements could usher in a new chapter of efficiency and opportunity for accountants worldwide.
By The Editors
From computers to data analytics, technology has helped accountants be more accurate and productive. Now — just as businesses bemoan a lack of qualified workers — generative artificial intelligence models such as ChatGPT-4 look poised to revolutionize the industry.
Companies are moving fast to experiment. But educators and regulators will need to adapt quickly to ensure the new large language models can be trusted to tackle the delicate task of tracking and verifying corporate finances.
Even before ChatGPT made headlines in late 2022, there was evidence that audit firms that invested in artificial intelligence were able to lower fees, reduce audit restatements and employ fewer junior accountants. Then, last year, research showed that ChatGPT-4 (with some extra training and access to a calculator) scored an average of 85.1% on all the major accounting certification exams, exceeding the minimum passing grade. That compares with 53.1% for ChatGPT-3.5.
So far, corporate finance departments and their external auditors are using large language models to handle modest tasks such as drafting memos and summarizing meetings. Some are training bespoke versions of the software to simplify internal accounting processes. But the toughest challenge remains: making the technology reliable enough to take over the internal and external audits that verify those processes are sound. Because such audits must stand up to regulatory scrutiny, companies are moving cautiously.
The technology’s promise is tantalizing. Goldman Sachs Group Inc. estimates that generative AI could lift global gross domestic product by 7% and boost productivity by 1.5 percentage points over a decade. A study at German energy company Uniper SE found that using ChatGPT, with some modifications to protect data security, reduced the time needed to handle parts of internal audits — including identifying potential risks, preparing interviews and writing reports — by 50% to 80%, with more accurate outputs.
Before that promise can be realized, the industry needs to invest more time and money in refining its own generative AI models. That will require two things.
First, education. Industry certification programs need to ensure that aspiring accountants are well versed in the benefits and risks of large language models. Just as employers expect accountants to be comfortable using calculators and computer software, they’ll need them to understand best practices for AI governance. These should include principles like protecting a client’s data security and privacy, using effective prompt language, breaking tasks into subtasks, verifying the accuracy of results, and understanding the tool’s limitations.
Second, oversight. The Public Company Accounting Oversight Board and the Securities and Exchange Commission will need to be assured that the technology is being deployed in a way that enhances, rather than undermines, control over financial statement and audit quality.
The oversight board remains cautious. A proposal last year would require auditors to show they can verify the reliability of any technology, including data analytics. But some of the board’s members also say that generative AI will become essential in saving auditors time on routine tasks and can improve their capacity for fraud detection and risk assessment.
At the SEC, Chair Gary Gensler has warned that overreliance on models by financial services companies could create systemic risk and that AI models can produce false and deceptive results. He is correct, and caution is warranted.
But over time, and with the right training and proper safeguards, generative AI should be able to do more of the jobs that CPAs do today without putting large numbers of them out of work. After all, huge advances in technology over the past 30 years haven't reduced the number of accountants; instead, they've enabled them to do more. In a similar way, accountants equipped with AI are likely to focus on higher-level tasks, such as shaping corporate strategy, communicating with audit committees and tackling ever-increasing regulatory demands. Everyone (in theory) benefits.
Long the definition of a staid profession, bookkeeping may now be on the cusp of a significant breakthrough. It should be welcomed, with an accountant’s prudence.
Read also:
- 🔒 FTX lesson no.1: Don’t fall asleep in accounting class
- Artificial Intelligence in finance: Sure AI can write, but can it pick stocks?
- 🔒 Masterclass from Martin Wolf: The threat and promise of artificial intelligence
© 2024 Bloomberg L.P.