The Beautiful Bottleneck
Why the economic impact of AI is wildly exaggerated
"Dawud had been struggling with his identity for many years," Davinci said. "Growing up in a conservative village in Pakistan," he continued, "he was taught from an early age that anything other than heterosexuality was a sin."
This is how the story started, unfurled word by word in crisp, elegant prose. I was stunned, and a little bit anxious. I sent the story to a friend, a writer whose work has been longlisted for the Commonwealth Prize. In the email, I provided a clue to Davinci's identity. He wrote back with one word. "Damn."
The year was 2022, and Davinci was in fact text-davinci-003, the precursor to what became known as ChatGPT. Reading the text today, it looks formulaic. The plot is circular, the writing trite. It's the assembly-line prose we now expect from ChatGPT. But back then, it seemed magical – a general-purpose magic that could recast the world into something new.
The potential of generative AI seemed unbounded for consumers, but also for business. PwC, a professional services firm, estimated that AI could add $15.7 trillion to the global economy by 2030. That's like adding another China and India. McKinsey estimated the total economic impact of AI in the range of $17 to $26 trillion. The investment bank Goldman Sachs predicted that generative AI could increase global GDP by seven percent over 10 years – the equivalent of adding Turkey, the Netherlands, Saudi Arabia, Indonesia, Spain and Mexico to the world economy.
We're now two years into the horizon – where is all this growth?
In his paper "The Simple Macroeconomics of AI", Nobel Prize-winning economist Daron Acemoglu argues that these economic projections, mostly by consultants and bankers, are wildly exaggerated. Acemoglu analyzed the expected productivity impact of AI at the task level, and then derived the GDP contribution using aggregated wages for the impacted occupations. For example, while programmers are highly exposed to AI, motorcycle mechanics are not. When he crunched the numbers, he found that the GDP growth from AI is likely to be around 1.1 percent over 10 years. But GDP doesn't tell the whole story – what matters more for consumer welfare is Total Factor Productivity (TFP), which gauges how efficiently labour and capital are used. Here the impact of AI is even more muted. Once we factor in tasks that are hard for AI to learn, the TFP growth drops to about 0.55 percent. Still material, but not the stuff of newspaper headlines.
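Acemoglu's task-level accounting reduces to a simple weighted sum: for each occupation, multiply its share of the total wage bill by the fraction of its tasks exposed to AI and by the cost savings on those tasks, then add everything up. The sketch below uses made-up occupations and numbers purely for illustration – none of these figures come from the paper:

```python
# Toy sketch of a task-based productivity estimate, in the spirit of
# Acemoglu's "The Simple Macroeconomics of AI". All numbers below are
# illustrative assumptions, not the paper's actual inputs.

# occupation -> (wage-bill share, fraction of tasks exposed to AI,
#                cost savings on those exposed tasks)
occupations = {
    "programmers":          (0.03, 0.60, 0.25),
    "copywriters":          (0.01, 0.70, 0.30),
    "physiotherapists":     (0.02, 0.10, 0.10),
    "motorcycle mechanics": (0.01, 0.05, 0.05),
}

# Aggregate gain = sum over occupations of
#   wage share x task exposure x cost savings on exposed tasks
gain = sum(share * exposure * savings
           for share, exposure, savings in occupations.values())

print(f"Aggregate productivity gain: {gain:.2%}")  # -> 0.68% for these inputs
```

The point of the exercise is that even generous exposure and savings assumptions get diluted twice – first by the wage share of the affected occupations, then by the fraction of their tasks AI can actually do – which is why headline-grabbing trillions shrink to single-digit fractions of a percent.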
AI is a General Purpose Technology like electricity or the steam engine. These technologies have widespread impact across industries, and they rewire the economy. But it takes time. Edison invented the first practical light bulb in 1879. By 1881, he had built electricity generating stations in London and New York – and yet, by the early 1900s most factories still relied on steam power. The typical factory built in the 1800s was organized around a giant steam engine, propelled by steam from boilers. Coming out of the engine was a central shaft that ran through multiple floors, providing rotary power to machines. Electrifying factories didn't just mean replacing the engine; it meant a new way of working. That's why it took decades for the impact of electricity to start showing up in GDP numbers.
While ChatGPT set a record for the fastest-growing consumer application after it reached 100 million users within two months of launch, consumer diffusion doesn't always translate to economic impact. The pattern of 'revolutionary technology' not translating to productivity improvement is so persistent that there is a name for it. It's called the Solow Paradox, after Nobel Prize-winning economist Robert Solow, who observed that despite firms' significant IT investments in the 1970s and '80s, worker productivity did not materially improve. In a 1987 book review for the New York Times, Solow famously quipped, "You can see the computer age everywhere but in the productivity statistics."
In a 2017 working paper published by the National Bureau of Economic Research, Stanford professor Erik Brynjolfsson and two co-authors outline four explanations for the productivity paradox: false hopes, mismeasurement, redistribution and implementation lags. They argue that implementation lags are the biggest contributor to the paradox. Technology is developing so rapidly, they say, that techno-optimists like investors and commentators underestimate the time it takes for technology to make an aggregate impact in the real world. It takes time for new technology to become widely accessible (econ-speak: building capital stock), and new technology needs other investments and innovations (econ-speak: complements) to unlock benefits. These complements "await discovery over time, and the required path may be lengthy and arduous."
We can already see the implementation lags for AI, casting doubt on the optimistic medium-term projections.
Automating work end to end is very hard, as it's constrained by what machines cannot yet do. Take a chain of physiotherapy clinics implementing AI. They start with marketing, getting ChatGPT to write copy. The clinic lets the part-time copywriter go, but still needs a human marketer for coordination. Now she's a bottleneck. They fire the marketer, and get their admin assistant to 'step up'. Now he's a bottleneck. These bottlenecks cascade until there's one final chokepoint: the physiotherapists. That's the bottlenecking effect, and as Matt Clancy, an economist and research fellow at Open Philanthropy writes, "there are going to be a billion little bottlenecks that will persistently slow the rate at which AGI takes over tasks."
The second problem is what software engineers call edge cases. Edge cases occur when systems encounter unexpected scenarios – like a child darting into the street on a rainy night. The more complex the system, the harder these edge cases become. And the physical world is pretty darn complex. That's why fully autonomous self-driving cars have always been five years out. Elon Musk expected the first fully autonomous Tesla by 2018. Executives from Ford, GM and VW made similar predictions about their cars. And it wasn't just the business types. In 2015, Anthony Foxx, the US Secretary of Transportation under Obama, predicted that "the driverless car in a decade is certain... we'll see it in use all over the world in 10 years."
The third challenge is politics and power. We already have the technology to boost economic growth and solve some of the world's thorniest issues. Obesity, the housing crisis, traffic congestion, climate change – you name a big problem, and the solution probably exists. If anything, AI will accelerate solution discovery. But productivity gains don't come from theoretical solutions – they emerge from implementation, often in the physical world. Over 70 percent of Canadian GDP comes from industries that either produce physical goods (e.g., manufacturing, construction, mining) or provide services requiring humans and physical infrastructure (e.g., healthcare, education, transportation). These industries move slowly, in part because the stakes are higher and in part because politics gets in the way of good policy. AI won't change that – at least not in the near term.
In his critique of Acemoglu's paper, economist Tyler Cowen notes that "a lot of the benefits of AI will come from 'new goods'." For example, we now have near-perfect on-demand translation, virtual companions who are "always here to listen and talk", and personalized tutors that adapt to each learner. These things didn't exist until recently, and Cowen argues that the gains they provide are much higher than incremental productivity improvements.
The other wild card for AI is science. Futurists like Ray Kurzweil anticipate a dramatic acceleration in scientific discovery. They envision that AI would formulate its own hypotheses, conduct experiments autonomously, synthesize knowledge and iterate – progressively unlocking the mysteries of the universe. Adherents of this view point to how deep learning has discovered new materials and drugs. Outside of think-tanks like the World Economic Forum or the recently shuttered Future of Humanity Institute, though, the mood in the scientific community is more measured – but optimistic.
Terence Tao, one of the world's leading mathematicians and winner of the Fields Medal, likened science to the flow of water, with taps trickling new hypotheses, such as drug candidates, into the scientific funnel. With AI, we now have a firehose that pumps 100 times more liquid – but only a tiny amount is potable. We therefore need a filtration unit. Part of this filtration can be algorithmic, but much of it requires the messy mammals called humans.
And so, we have come full circle. While AI can produce an endless torrent of words, images, or drug candidates – it’s human experience that imbues these creations with value. That beautiful bottleneck is not going away anytime soon.


