#EconomicPolicy
#EconomicPolicy79

Contacts:

Joshua Gans: joshua.gans@gmail.com

web: joshuagans.com
twitter: @joshgans

pronouns: he/him

VoxTalks Economics Live from the 79th Economic Policy Panel Meeting

Learning about potential harms from artificial intelligence

A fresh perspective on the optimal rate of adopting new technologies

Some potential harms from artificial intelligence (AI), such as whether workers will be replaced, are hard to assess without real market deployment. In such circumstances, as Joshua Gans explains in a new study prepared for the journal Economic Policy, slowing the pace of AI research and implementation doesn’t make sense.

His argument is tempered, however, by whether an AI rollout is reversible. If it is – that is, if we can ‘un-deploy’ AI or fix its ills down the track – then real-world learning is actionable. If it is irreversible – that is, the cat is out of the bag – then real-world learning is much harder to act on, and it would be good to find more reliable ‘lab-learning’ options. Of course, if no such options exist, then depending on how worried you are, you may just have to take a risk on AI.

Policy-makers need to think carefully about all of this. Yes, AI may involve harms. But the case for a pause is a case for learning about those harms during the pause. If you can’t do that, then unless you are really worried, there is no option but to push forward.

Gans begins by noting that in 2023, a cadre of AI luminaries posted an open letter calling for a pause in the release of large-scale AI models. Their concern was the potential for many harms that people have speculated might result from AI – from misinformation to wiping out humanity.

What would a pause achieve? Presumably, time to assess those risks: a basic application of the ‘precautionary principle’.

At least insofar as the non-existential risks are concerned, an AER: Insights paper by Daron Acemoglu and Todd Lensman lays out the case for slowing down the rate of an AI roll-out. Specifically, by waiting, information about potential harms can surface first, so that adoption decisions are made without incurring those harms later on.

The logic of their model is sound, but it rests on a particular assumption about how we will learn about the harms of AI: that they can be surfaced without deploying the AI itself. This is what Gans refers to in his study as ‘lab learning’.

But can the harms of AI really be assessed without deployment? Gans argues that this is unlikely. Harms such as the costs of misinformation, disruption to education and even whether workers will be replaced are hard to assess without real market deployment. Thus, if we want to learn about the harms of AI, we have to try it out in the real world – outside the lab.

What Gans shows is that when harms can only be assessed through real-world learning, the precautionary argument is turned on its head. Now there is no reason to delay, and a new reason to accelerate AI adoption: doing so delivers any productivity benefits earlier and also surfaces harms more quickly.
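To see the logic in numbers, here is a deliberately stylized two-period sketch. It is a minimal illustration, not the model in either paper, and the values of b, h and p below are invented for the example. It compares three options: deploy now and learn from deployment (assuming the rollout can be reversed once a harm is observed), pause while ‘lab learning’ reveals whether the harm is real, and pause when nothing can be learned without deployment.

```python
# Stylized two-period illustration (assumed numbers; not Gans's or
# Acemoglu-Lensman's actual model).
b = 1.0   # per-period productivity benefit of deploying AI (assumed)
h = 4.0   # per-period harm if AI turns out to be harmful (assumed)
p = 0.3   # prior probability that AI is harmful (assumed)

def deploy_now_with_deployment_learning():
    # Period 1: deploy without knowing; collect the benefit, bear the expected harm.
    period1 = b - p * h
    # Deployment reveals whether the harm is real, so in period 2 AI is kept
    # only if it turned out harmless (probability 1 - p).
    period2 = (1 - p) * b
    return period1 + period2

def pause_with_lab_learning():
    # Period 1: pause; no benefit, no harm, but lab work reveals the harm.
    period1 = 0.0
    # Period 2: deploy only if the lab showed AI is harmless.
    period2 = (1 - p) * b
    return period1 + period2

def pause_without_lab_learning():
    # Period 1: pause, but nothing is learned because harms only show up in use.
    period1 = 0.0
    # Period 2: deploy facing the same prior uncertainty as before.
    period2 = b - p * h
    return period1 + period2

print("Deploy now, learn by deploying:", deploy_now_with_deployment_learning())
print("Pause, learn in the lab:       ", pause_with_lab_learning())
print("Pause, learn nothing:          ", pause_without_lab_learning())
```

With these assumed numbers, pausing is the best option when lab learning works, but it is the worst option when harms only surface through deployment; in that case deploying now dominates, which is the reversal described above.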


How Learning About Harms Impacts the Optimal Rate of Artificial Intelligence Adoption

Author:

Joshua Gans (University of Toronto)