TechTalk / Viewpoint
Pre-empting generative AI monopoly paramount
The emergence of powerful new tools like ChatGPT represents a major breakthrough in artificial intelligence while highlighting the need for regulatory intervention. To protect the public interest, policymakers must prevent this nascent market from becoming dominated by a handful of giant private companies
Diane Coyle 6 Feb 2023

ChatGPT, the new artificial intelligence (AI) chatbot developed by the San Francisco-based research laboratory OpenAI, has taken the world by storm. Already hailed as a milestone in the evolution of so-called large language models (LLMs), the world’s most famous generative AI raises important questions about who controls this nascent market and whether these powerful technologies serve the public interest.

Since its release last November, ChatGPT has become a global sensation, attracting millions of users and allegedly killing the student essay. It can answer questions in conversational English (along with some other languages) and perform other tasks, such as writing computer code.

The answers that ChatGPT provides are fluent and compelling. Despite its facility for language, however, it can sometimes make mistakes or generate factual falsehoods, a phenomenon known among AI researchers as “hallucination”. The fear of fabricated references has recently led several scientific journals to ban or restrict the use of ChatGPT and similar tools in academic papers. But while the chatbot might struggle with fact checking, it is seemingly less prone to error when it comes to programming and can easily write efficient and elegant code.

For all its flaws, ChatGPT obviously represents a major technological breakthrough, which is why Microsoft recently announced a “multi-year, multi-billion-dollar investment” in OpenAI, reportedly amounting to US$10 billion, on top of the US$1 billion it had already committed to the company. Originally a nonprofit, OpenAI is now a for-profit corporation valued at US$29 billion. While it has pledged to cap its profits, the cap is a loose one: its structure limits investors’ returns to 10,000%.

ChatGPT is powered by GPT-3, a powerful LLM trained on vast amounts of text to generate natural-sounding, human-like answers. While it is currently the world’s most celebrated generative AI, other Big Tech companies such as Google and Meta have been developing their own versions. It is still unclear how these chatbots will be monetized, but a paid version of ChatGPT is reportedly forthcoming, with OpenAI projecting US$1 billion in revenues by 2024.
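At their core, LLMs like GPT-3 learn statistical patterns of which words tend to follow which in their training text, then generate new text by repeatedly sampling a likely next word. A toy bigram model, deliberately simplified and in no way OpenAI’s actual architecture, sketches that generate-by-sampling idea:

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram "language model" counts which word
# follows which in a tiny corpus, then generates text by sampling the
# next word in proportion to those counts. Real LLMs learn vastly
# richer patterns with neural networks, but the core loop -- predict
# the next token, append it, repeat -- is the same.

def train_bigram(corpus: str) -> dict:
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts: dict, start: str, length: int = 8, seed: int = 0) -> str:
    """Sample a continuation, weighting next words by observed frequency."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # no observed continuation; stop generating
            break
        words, weights = zip(*followers.items())
        out.append(rng.choices(words, weights=list(weights))[0])
    return " ".join(out)

corpus = "the model reads text and the model writes text and the model learns"
model = train_bigram(corpus)
print(generate(model, "the"))
```

Scaled up from one preceding word to thousands of tokens of context, and from frequency counts to billions of learned parameters, this next-token loop is what produces ChatGPT’s fluent answers; it also explains hallucination, since the model optimizes for plausible continuations rather than verified facts.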

To be sure, bad actors could abuse these tools for various illicit schemes, such as sophisticated online scams or writing malware. But the technology’s prospective applications, from coding to protein discovery, offer cause for optimism. McKinsey, for example, estimates that 50% to 60% of companies have already incorporated AI-powered tools like chatbots into their operations. By expanding the use of LLMs, companies could improve efficiency and productivity.

But the massive, immensely costly, and rapidly increasing computing power needed to train and maintain generative AI tools represents a substantial barrier to entry that could lead to market concentration. The potential for monopolization, together with the risk of abuse, underscores the urgent need for policymakers to consider the implications of this technological breakthrough.

Fortunately, competition authorities in the United States and elsewhere seem to be aware of these risks. Late last year, the United Kingdom’s communications regulator, Ofcom, launched an investigation into the cloud computing market, on which all large AI models rely, while the US Federal Trade Commission is currently investigating Amazon Web Services (AWS), which, along with Google and Microsoft Azure, dominates that market. These investigations could have far-reaching implications for AI-powered services, which rely on enormous economies of scale.

But it is not clear what, if anything, policymakers should do. On one hand, if regulators do nothing, the generative-AI market could end up dominated by one or two companies, like every digital market before it. On the other hand, the emergence of open-source LLMs, such as the text-to-image tool Stable Diffusion, could ensure that the market remains competitive without further intervention.

Even if for-profit models become dominant, however, open-source competitors could chip away at their market share, just as Mozilla’s Firefox did to Microsoft’s Internet Explorer browser and Google’s Android did to Apple’s mobile operating system, iOS. Then again, cloud computing giants like AWS and Microsoft Azure could also leverage generative AI products to increase their market power.

As was debated at the recent World Economic Forum meeting in Davos, generative AI is too powerful and potentially transformative to leave its fate in the hands of a few dominant companies. But while there is a clear demand for regulatory intervention, the accelerated pace of technological advance leaves governments at a huge disadvantage.

To ensure that the public interest is represented at the technological frontier, the world needs a public alternative to for-profit LLMs. Democratic governments could form a multilateral body that would develop means to prevent fakery, trolling and other online harms, like a CERN [European Organization for Nuclear Research] for generative AI. Alternatively, they could establish a publicly funded competitor with a different business model and incentives to foster competition between the two models.

Whichever path global policymakers choose, standing still is not an option. It is abundantly clear that leaving it to the market to decide how these powerful technologies are used, and by whom, is a very risky proposition.

Diane Coyle is a professor of public policy at the University of Cambridge.

Copyright: Project Syndicate
