In this exclusive FT interview with Stanford professor and AI expert Erik Brynjolfsson, he explores the transformative potential of generative AI on productivity, jobs, and societal structures. Reflecting on historical parallels, Brynjolfsson compares generative AI to electricity, emphasising its pervasive influence, rapid advancements, and the emergence of complementary innovations. As the technology gains momentum, Brynjolfsson anticipates substantial productivity growth in the 2020s, challenging official projections and raising critical questions about responsible governance and ethical use to shape a future marked by shared prosperity and technological augmentation.
By Tej Parikh
The Stanford professor on what generative AI will mean for productivity, jobs and the society of the future
*This is part of a series featuring conversations between top FT commentators and leading economists
The potential of generative artificial intelligence dominated discussions at Davos this month. Business leaders and policymakers are wondering if last year’s hype over large language models (LLMs), which fuelled a stock market rally, will actually be matched by productivity gains. This year — as the technology is increasingly adopted and commercialised — we will start to see its impact on our economies, societies and institutions.
Erik Brynjolfsson, a professor, author and senior fellow at the Stanford Institute for Human-Centered AI, is an authority on gen-AI’s potential impact on productivity. He was among the first researchers to measure the productivity contributions of information technology. One of his mentors was Robert Solow, the Nobel Prize winner, who passed away last month.
In this interview he discusses Solow’s influence on his work, how gen AI ranks relative to historic technologies, and his concepts of the productivity “J-curve” and the “Turing trap”.
Tej Parikh: New technology always creates a buzz. How excited should we be about generative AI?
Erik Brynjolfsson: I’ve always been pretty bullish on the potential of AI and I think foundation models or LLMs have exceeded even the most optimistic projections of most AI researchers, and certainly economists.
TP: What type of productivity gains are we talking about?
EB: I’m optimistic the technologies will affect a large number of tasks. A big percentage of the work that is done in a modern economy is amenable to being augmented by LLMs and generative AI. The effects on those tasks have been significant — double-digit productivity gains within just a few months in some of the cases I’ve studied. Multiply the large percentage of affected tasks by sizeable productivity gains for each one and you get a big total economic impact. I’m betting that productivity growth is maybe significantly higher in the 2020s than the Congressional Budget Office is projecting. They projected 1.4 per cent average per year. I think it could be twice that — closer to 3 per cent — maybe more.
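The back-of-envelope arithmetic behind that claim can be sketched in a few lines. The share and gain figures below are illustrative assumptions, not numbers from the interview:

```python
# Sketch of the aggregate impact Brynjolfsson describes: multiply the share
# of economic tasks affected by the per-task productivity gain.
# Both numbers below are hypothetical placeholders for illustration.

share_of_tasks_affected = 0.30   # assumed: 30% of tasks amenable to gen AI
per_task_gain = 0.15             # assumed: 15% productivity gain on those tasks

aggregate_gain = share_of_tasks_affected * per_task_gain
print(f"Economy-wide productivity uplift: {aggregate_gain:.1%}")  # → 4.5%
```

Even modest assumptions of this kind, applied economy-wide, are what make the difference between the CBO's 1.4 per cent and the 3 per cent he suggests.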
TP: In a historical context, how should we be thinking about gen-AI? Alphabet chief executive Sundar Pichai compared it to fire or electricity.
EB: I’m not sure about fire, but I do think electricity is not a bad comparison. General purpose technologies, or GPTs (although that acronym has kind of been stolen from economists), have three important characteristics. They are pervasive, affecting most sectors of the economy; second, they improve rapidly; and third, they spawn complementary innovations. Electricity certainly fits the bill. So did computing technology, and now gen-AI.
But, it is even more pervasive, more rapidly improving, and perhaps most importantly, has been spawning even more complementary innovations than the earlier technologies. So while most GPTs take decades to fully play out — in the case of electricity, the biggest effects happened 30 to 40 years after it was first introduced into American factories — for generative AI, I expect the productivity effects on the economy to be much faster. In specific applications, we found that it is just a matter of a few months.
TP: Is that based on randomised control trials at the worker-level?
EB: Yes, I did a study with Lindsay Raymond and Danielle Li where we looked at the introduction of an LLM tool to help customer service agents. There was an average of about a 14 per cent productivity gain within just a few months. The least experienced agents had a 34 per cent productivity gain — far more than the more experienced agents. We also saw improvements in customer satisfaction. Even the customer service representatives themselves seemed to be happier. They were less likely to quit, leading to less turnover. So, the stockholders, the customers and the employees all seem to be better off within just a few months.
There have been other studies that have looked at roles from software coding to management consulting, and different kinds of writing tasks. And each of them found a very similar pattern where 1) there were often double-digit gains in productivity, 2) the less skilled workers typically benefited more, and 3) the gains showed up within just a few months.
TP: Public sector applications are promising too, particularly in healthcare. But what matters now for seeing a productivity uplift, both in the public and private sector, is effective adoption.
EB: I think it is going to happen faster. Traditionally GPTs take decades to play out. I would not be surprised if there is a lot of business process change and new skills develop. But the good news with gen AI is that you can get a lot of the benefits even without that. We already have the infrastructure; the internet allows us to deliver these tools very quickly. ChatGPT reached 100mn users within 60 days. We also know how to use things like ChatGPT or other LLMs — you do not need to learn special coding languages or obscure skills. It is basically English. You could get better at prompt engineering, but people can get benefits almost immediately.
TP: Robert Solow, a mentor of yours — who sadly passed away last month — did pioneering research on productivity and technology. Everyone knows his quote: “You can see the computer age everywhere but in the productivity statistics.” Might this so-called “Solow paradox” apply to gen AI too?
EB: With his encouragement, I wrote my first paper as a PhD student laying out a set of explanations for why the IT age was not showing up in the productivity statistics. The first was that digital technologies often create a lot of benefits in ways that are not well captured in the data. In particular, digital products often have zero price. Gross domestic product measures all the things that are bought and sold, with a few exceptions; if something has zero price, it is not captured. Second, like other GPTs, to get the full productivity benefits you need to change your work processes and often reskill the workforce, and that could take years or even decades.
TP: The second factor refers to the productivity “J-curve” right?
EB: Yes, not only does it take a while for things to show up, you could even have an initial negative effect. Initially, companies will invest a lot of time and effort in redesigning business processes but none of that instantly turns into greater output. So mechanically you have more input without greater output. That actually lowers your productivity. Later you start to harvest those investments and then you have higher productivity — hence the J-shape of productivity gains with time.
There was pretty high productivity growth in most of the postwar period up until the 1970s. Then there was a slowdown until about 1995. That was the period when Solow made his remark about the productivity paradox. And then from 1995 through about 2005, we had a surge in productivity, which was in part due to the adoption of the internet and also enterprise resource planning systems in places like Walmart. Then it kind of petered out up until the past year or two, a period I sometimes call the “modern productivity paradox”.
TP: How might the “J-curve” for gen AI look different from that of other technologies, such as say computing?
EB: That’s a great question. We’ve seen something like a J-curve with earlier general purpose technologies like the steam engine, electricity and early computers. It seems that the cycle times are getting faster, from decades down to years. My reading of the evidence is that it will happen faster with AI than it did with some of the earlier technologies. So the dip in the curve is being compressed, or even made shallower.
TP: And that is down to the complementarities with existing infrastructure or preceding technologies, lower costs and the ease of use of generative AI.
EB: Exactly. Plus maybe add another one. AI is perhaps the most general of all general purpose technologies because it is going after intelligence. If we can “solve intelligence”, we can use that to solve a lot of other problems in the world.
TP: The capital investment required from developers, in terms of computing power and data, is significant. We are seeing progress there, but the complementary spending by potential adopters in things such as IT and training seems to be lagging behind.
EB: We are still in the early stages of the gen AI revolution. While developers like OpenAI and Google are making significant investments in computing power to build ever-larger models, most users are just in the exploration and early deployment stages. Of course, many of them will be relying on cloud services so the investments will show up there. As we can see from Nvidia’s sales and market cap, demand is strong and is likely to grow significantly for computing power. Likewise, the leading companies are now developing aggressive game plans for investments in software and training so they can implement gen AI solutions.
TP: We’ve focused on the potential productivity gains, but what should we make of gen-AI’s counter-productive aspects?
EB: Yes, I’d put that into two categories. One is that people can use them in destructive ways: to create misinformation, viruses or weapons, and to mount cyber attacks and phishing attacks. It makes “bad guys” more productive. It could also be used for zero-sum activities, such as targeted marketing that shifts around economic rents.
The second is that to use it effectively, you need to learn new techniques and new norms. Just as with earlier technologies: the introduction of the railroads required standardised time zones, and industrialisation introduced assembly lines and new ways of co-ordinating work in factories. We will have to come up with new ways to manage information overload. All of us are going to have to develop defences and new norms to keep us from being overwhelmed and distracted.
TP: What about the issue of hallucinations?
EB: We need to learn where the technology is effective and where it is not. So for misinformation, hallucination and distractions — those are all things that these latest technologies will sometimes create. We will have to find ways of coping with them.
Take specifically the case of hallucination: part of it is about knowing what kinds of tasks they are suitable for. So if you are trying to creatively brainstorm ideas for a new design or ad campaign, maybe it is a feature more than a bug. If you want to have the exact reference to an article or a piece of data, then it can be problematic. I do think that the technologies are getting better, and the rates of hallucination are going down. More importantly, I think you can combine it with other technologies like retrieval augmented generation, where you match an LLM with a more reliable database to get the correct version of the data item as opposed to the hallucinated one.
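The retrieval-augmented generation approach he mentions can be sketched roughly as follows. The document store, keyword-overlap scoring and prompt format are toy assumptions for illustration, not a real RAG system:

```python
# Minimal sketch of retrieval-augmented generation (RAG): before asking the
# model, fetch the most relevant record from a trusted store and include it
# in the prompt, so answers are grounded in real data rather than hallucinated.

documents = {
    "solow-1987": "Robert Solow remarked that the computer age was visible "
                  "everywhere but in the productivity statistics.",
    "jcurve": "The productivity J-curve: output can dip while firms invest "
              "in process redesign, then rise later.",
}

def retrieve(query: str) -> str:
    """Return the stored document sharing the most words with the query."""
    q_words = set(query.lower().split())
    best_id = max(documents,
                  key=lambda d: len(q_words & set(documents[d].lower().split())))
    return documents[best_id]

def build_prompt(query: str) -> str:
    """Ground the LLM prompt in the retrieved source text."""
    context = retrieve(query)
    return f"Answer using only this source:\n{context}\n\nQuestion: {query}"

print(build_prompt("What did Solow say about the productivity statistics?"))
```

Production systems replace the keyword overlap with vector embeddings, but the design choice is the same: the model is asked to answer from a retrieved, verifiable source rather than from its own parameters alone.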
TP: The other kind of anti-productivity effects, at least in the short term, is the disruption it will necessarily cause, particularly to labour markets.
EB: There is going to be massive economic disruption. Companies are going to be born and destroyed, as will occupations. Depending on how we use the technology, we can use it in a way that is more likely to create widely shared prosperity, or more concentration of wealth and power.
In particular, if the technologies are mainly used to imitate humans, mimic the tasks the same way we do them and replace humans with machines, it is likely to lead to lower wages and more concentration of wealth and power as capital substitutes for labour. But, if we use the technology mainly to augment our skills, to do new things, then it is more likely to lead to widely shared prosperity and higher wages. And that second path of higher wages has historically been the more common one.
All of us, on average, have wages that are many times higher than they were in the 1800s because technology has mostly augmented our ability to do different tasks. That made one hour of labour more valuable today than one hour of labour was 100 or 200 years ago. That said, there are also areas where the value of labour has gone down for certain tasks. But I encourage technologists to think hard about how they can use the technology not just to imitate or mimic humans, but to augment and complement people.
TP: This is a problem you call the “Turing trap” . . .
EB: Yes, the Turing Test is the iconic idea that Alan Turing came up with in 1950, that we could make a machine that was so powerful that it could not be distinguished from a human. I think that has inspired a generation of researchers. But now that we are essentially reaching that goal, I think it’s becoming apparent that it was the wrong goal all along and that we should be thinking how to augment humans and extend our capabilities.
And I should say that it’s not just a matter of leading to more widely shared prosperity. Augmenting also leads to a higher ceiling. In many areas, machines can do a lot better. How lame would it have been if Henry Ford had tried to make a vehicle with legs that could run as fast as humans? Even in the intellectual sphere, Google can search billions of documents much more quickly than I can. And an LLM can synthesise hundreds of thousands of books much more quickly than I can. Even animals do better than humans in certain areas: a bat can figure out where things are based on sound. Research has shown chimpanzees have better short-term memory.
In a few years, I think it’ll become apparent that humans have a narrow kind of intelligence and that true general intelligence will have a much broader set of capabilities. So when it comes to tech, let’s pursue this broader set of capabilities that goes well beyond what humans have.
TP: Essentially a better “division of labour” between humans and technology. And so, the upshot of the “Turing trap” is that we may have missed out on some productivity gains by focusing on technologies that emulate rather than augment human capabilities.
EB: Yes. Go back to the call centre example. Some companies have tried to make bots that would answer the phone at a contact centre and handle your questions for you. Most of us find them very frustrating because the machines just aren’t that good. But if you use the machine to augment the human, as a team they do much better. Some types of questions are very common, like how do I reset my password? And then there are some that are very obscure. Machines are great at the common questions: they have training data and give a very clear answer for those. But they are not very good at the one-shot questions. That’s where we humans do better.
TP: Resistance is also an important factor. Even if gen AI can advance human society, it requires a tipping point of enough beneficiaries to carry the technology into adoption. Today, with plenty of influential knowledge-based and professional services roles at risk of, at the very least, disruption, how might that play out?
EB: If you are a CEO or a government leader and you are pushing for machines that replace the workers, well, guess what? A lot of workers are not going to help with your vision. But, if you are working to have the machine augment people, make their jobs more pleasant and productive, and allow them to do things they could never do before, you’re going to get a lot more buy-in. If we want people to buy in, we should make something that is a win-win. Aim for something that not only makes the economic pie bigger but also gives everyone a share of that growing pie.
TP: What do we need to see in terms of governance changes then to ensure we can better channel the benefits of gen AI?
EB: Our understanding of the skills, organisations and institutions needed is not advancing nearly as fast as the technology is. The Digital Economy Lab, which I direct, is premised on the idea that we need to rapidly invest more in understanding the economic, organisational, cultural, political and ethical side of AI. Given the “Turing trap”, I’m concerned about cases where technology is designed to replace human labour. Current tax policy and investment credits encourage capital over labour.
For instance, marginal tax rates on capital are about half of what they are on labour in the US and in many other countries. That biases entrepreneurs towards trying to find capital-heavy solutions instead of ones that involve labour. I do not see a public policy reason for having that kind of a bias. That is just one example of where the policy needs to take into account what it is doing and how it is shaping the trajectory of technological advances.
TP: The other aspect is minimising the harms, including on humans. We are capable of changing our environment faster than we evolve to adapt to it — and with gen AI, things like information overload, fake news and other social effects can be problematic.
EB: We evolved in a world, at least 200,000 years ago, that was very different. I think we need to beef up our ability to rapidly sense and respond to harmful changes in our environment. We didn’t invent the fire extinguisher until after people had been burnt a few times. We didn’t invent seat belts until after some number of crashes, and sadly I’m sure there will be bad outcomes before we adapt to gen-AI. But let’s invest in the ability to act quickly when we see those dangers.
TP: One possible approach is a Cern-style compact between developers, scientists and governments, so we understand how gen AI works and impacts on us in a controlled manner.
EB: Yes. We do not have enough data on how technology is affecting society, so I would invest more in data gathering and analysis. I’m not saying we’ll always make the right decisions, but at least we’ll have a fighting chance if we have a better understanding of what some of the effects are earlier on.
TP: The other side of this is how powerful gen AI businesses can become. Monopolies could stymie innovation. Open-source platforms may help level the playing field, but the ability to obtain training data and develop tools still points to network benefits.
EB: There are two powerful trends going on in opposite directions. One is driving towards more concentration. Scaling laws mean that bigger systems tend to be more powerful. If they have more computing power, more data and more parameters, they perform better. And this is the reason that companies like OpenAI, Google and Anthropic are spending billions of dollars to build gigantic systems. It is hard for smaller companies to keep up.
On the other side, open-source and much smaller systems have been able to get close to the frontier models. Often they have local data that the bigger models do not have access to, and that can be more important than raw power. On that second path, with local data and rapid iteration, where models can be trained in days or even hours instead of months, the economic impact can be larger.
I’m uncertain which of the two trends will ultimately dominate. Part of it will depend on our policy, antitrust, choices by executives and rules about data ownership.
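The “scaling laws” behind the first trend are commonly summarised as a power law: loss falls smoothly as parameters, data and compute grow. A toy sketch follows, with made-up constants that are loosely in the spirit of published scaling-law work but not real measured values:

```python
# Toy illustration of a neural scaling law: loss shrinks as a power of
# model size. The exponent and constant are hypothetical placeholders.

def loss(params: float, alpha: float = 0.08, scale_const: float = 8.8e13) -> float:
    """Toy scaling curve: loss ~ (scale_const / params) ** alpha."""
    return (scale_const / params) ** alpha

# Bigger models -> lower loss, but with diminishing returns per extra parameter.
for n in (1e9, 1e10, 1e11):  # 1B, 10B, 100B parameters
    print(f"{n:.0e} params -> loss {loss(n):.3f}")
```

The smooth, predictable shape of such curves is why the frontier labs keep spending billions on ever-larger systems: each tenfold increase in scale buys a reliable, if shrinking, improvement.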
TP: So what do you think the world will look like at the end of 2024, in effect gen-AI’s second year?
EB: On the productivity side, I think 2024 is going to be a year of harvesting a lot of the capabilities. Almost every CEO is being asked by their board of directors: “what is your game plan for generative AI?” Many of them are implementing an approach based on what we call task-based analysis, where you look at tens of thousands of tasks, rank them based on which ones gen AI can help with the most, and then put in place working production systems for coding, customer service, writing, sales support and other areas. In 2024 many of those cases will deliver the double-digit productivity gains that we saw in research.
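The task-based analysis he describes boils down to a scoring-and-ranking exercise. The tasks and exposure scores below are hypothetical placeholders, not data from his research:

```python
# Sketch of a task-based analysis: score each task on how much gen AI could
# help, then rank to prioritise deployment. All scores here are assumptions.

tasks = [
    ("drafting customer-service replies", 0.9),
    ("writing boilerplate code",          0.8),
    ("sales-support email copy",          0.7),
    ("on-site equipment repair",          0.1),
]

# Highest gen-AI exposure first: these are the candidates for early pilots.
ranked = sorted(tasks, key=lambda t: t[1], reverse=True)
for name, score in ranked[:3]:
    print(f"{score:.1f}  {name}")
```

In practice firms run this over thousands of occupational tasks rather than four, but the prioritisation logic is the same.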
TP: And what about the decade ahead: what is a best- and worst-case scenario for how the world might look with gen AI?
EB: The pessimistic scenario is not so much about stagnation. It’s more that the nefarious uses of it will catch us off guard, whether that is weaponising them or through information warfare and abuse. The unknown unknowns. We are in uncharted territory where we have technologies that are much more powerful than before, and history suggests that often the biggest effects are ones that no one anticipated. On the optimistic side, we could start seeing significant productivity growth in the 2020s. We will see new kinds of creative work, scientific progress, industrial designs and new products and services being invented.
TP: A key determinant of which of those two paths we will be closer to are the laws, norms and institutions we develop around gen AI. How optimistic are you that we can get that right?
EB: I would not assume that everything is automatically going to work out fine. I think these are super-powerful technologies and we should have our eyes open, and be quite careful about how we use them. If they’re used right, this could be the best decade in human history. If they’re used badly, it could be one of the worst. So there’s a real premium on smart governance, managers and policymakers really paying attention to this technology. We should not fly blind.
The above transcript has been edited for brevity and clarity
© 2024 The Financial Times Ltd. All rights reserved.