AI monopolies

New report warns of the economic and social threats from dominant firms in generative artificial intelligence

The market for the most advanced models of generative artificial intelligence (AI) may become extremely concentrated, due to the high costs of computational resources and the vast quantities of data required for training. That is one of the central findings of a new study by Anton Korinek and Jai Vipra, prepared for the journal Economic Policy.

The researchers are also concerned that generative AI firms will face increasingly strong incentives to integrate vertically with providers of AI building blocks – such as microchips and data – and with providers of consumer products that use AI. They conclude that in the absence of clear antitrust rules and other regulatory actions, market concentration in generative AI could lead to systemic risks and stark inequality.

One major barrier to entry in the generative AI market is the immense computational power required to train the models. For example, a research team at Epoch estimates that Google’s DeepMind spent an astonishing $650 million to train its Gemini model. What’s more, they estimate that the cost of training the most cutting-edge frontier AI models is doubling every six months.

This computational intensity helps to explain the skyrocketing market capitalisation of firms such as Nvidia, which provide the specialised hardware required for AI training. In addition, it puts the development of the most capable AI models out of reach for all but the best-resourced technology companies.

Generative AI also requires vast amounts of data for training. But late entrants to the market are running out of freely available online data – and many websites now take measures to block AI companies from using their content for training purposes.

It’s estimated that 79% of major US news sites have already blocked OpenAI’s web crawlers. This dynamic gives Big Tech platforms like Google, Microsoft and Meta – which already control huge proprietary datasets – a competitive advantage as they can use their data troves to feed their generative AI while newer entrants face restrictions.

The researchers recommend particular antitrust scrutiny of vertical integration, including acquisitions of start-ups by Big Tech (recall Microsoft’s recent controversial investment in French AI newcomer, Mistral). As generative AI is deployed across more diverse economic applications and comes to resemble an essential service like electricity, non-discrimination requirements will be needed so that private monopoly providers cannot arbitrarily determine who has access to the technology and who doesn’t.

Since a distorted market structure blunts market discipline, the researchers also recommend a level regulatory playing field between AI, non-AI software and human service providers, especially in terms of liability and service standards. They further call for data and corporate governance measures that counteract the concentration of these resources.

An important warning in the new study concerns regulatory capture: AI monopolies becoming so powerful that they can steer the trajectory of regulation (and deregulation) to their own benefit. The cost of not heeding the warning could be a future in which one or two generative AI providers control a significant portion of the entire economy. This could give rise to stark inequality and render society vulnerable to errors and attacks at a single point of failure.


Market Concentration Implications of Foundation Models: The Invisible Hand of ChatGPT

Authors:

Anton Korinek (University of Virginia)
Jai Vipra (Centre for Applied Law and Technology Research, ALTR)