[OPINION] Whose AI Revolution?

Anu Bradford

While working closely with tech companies to foster AI innovation and maximize benefits, democratic governments also will need to protect their citizens, values, and institutions
As published by Project Syndicate

In November, the United Kingdom will host a high-profile international summit on the governance of artificial intelligence. With the agenda and list of invitees still being finalized, the biggest decision facing UK officials is whether to invite China or host a more exclusive gathering for the G7 and other countries that want to safeguard liberal democracy as the foundation for a digital society.

The tradeoff is obvious. Any global approach to AI governance that excludes China is likely to have only a limited impact; but China’s presence would inevitably change the agenda. No longer would the summit be able to address the problem of AI being used by governments for domestic surveillance – or any other controversial issue that is of concern to democratic governments.

Whatever the agenda, the summit is a prudent response to rapid and dramatic advances in AI that present both unprecedented opportunities and challenges for governments. World leaders are eager not to miss out on a technological revolution that could – ideally – help them expand their economies and address global challenges.

AI undoubtedly has the potential to improve individuals’ productivity and drive social progress. It could lead to important advances in education, medicine, agriculture, and many other fields that are critical for human development. It also will be a source of geopolitical and military power, conferring a significant strategic advantage on countries that gain a lead in its development.

But AI also poses societal challenges and risks – hence the growing chorus demanding that governments step in and regulate it. Among other things, AI is expected to transform labor markets in ways that will make many workers redundant and some far more productive, widening existing inequalities and eroding social cohesion. It also will be weaponized by bad actors to commit fraud, deceive people, and spread disinformation.

When used in the context of elections, AI could compromise citizens’ political autonomy and undermine democracy. And as a powerful tool for surveillance purposes, it threatens to undermine individuals’ fundamental rights and civil liberties.

While the above risks are all but certain to materialize, others are more speculative yet potentially catastrophic. Most notably, some commentators warn that AI could spin out of control and pose an existential threat to humanity.

No model to rule them all

Divergent approaches to regulating the sector are emerging as governments seek to seize AI’s unprecedented opportunities while managing its potentially serious risks. Hesitant to interfere in the development of a disruptive technology that is critical to its economic, geopolitical, and military competition with China, the United States is relying on voluntary guidance and self-regulation by tech companies.

By contrast, the European Union is adamant that AI governance not be left to tech companies; instead, digital regulation must be grounded in the rule of law and subject to democratic oversight. Adding to its existing cache of digital regulations, the EU is in the final stages of adopting a comprehensive, binding AI regulation that focuses on protecting individuals’ fundamental rights, including their right to privacy and non-discrimination.

China also is pursuing ambitious AI regulation, but with authoritarian characteristics. The authorities seek to support AI development without undermining censorship or jeopardizing the Communist Party of China’s monopoly on political power. But this implies a tradeoff: the content restrictions needed to maintain social stability also limit the data available to train the large language models behind generative AI.

The US, the EU, and China thus offer competing models of AI regulation. As the world’s leading technological, economic, and regulatory powers, they are “digital empires”: each not only regulating its domestic markets but also exporting its regulatory model and aiming to shape the global digital order in its own interests. Some governments may align their regulatory stance with the American market-driven approach, opting for light-touch regulation; others may side with the EU’s rights-driven approach, pursuing binding legislation that sets constraints on AI development; and some authoritarian countries will look to China, emulating its state-focused regulatory model.

Most countries, however, are likely to straddle the three approaches, selectively adopting elements of each. That means no single blueprint for AI governance worldwide will emerge.

The case for cooperation

Although regulatory divergence seems inevitable, there is a glaring need for international coordination, because AI presents challenges that no government alone can manage. A closer alignment of regulatory approaches would help all governments maximize the technology’s potential benefits and minimize the downside risks.

If every government develops its own regulatory framework, the resulting fragmentation will hamper AI development. After all, navigating conflicting regulatory regimes adds to companies’ costs, breeds uncertainty, and undermines projected gains. Consistent and predictable standards across markets will foster innovation, reward AI developers, and benefit consumers.

Moreover, an international agreement could help distribute these projected gains more equally across countries. AI development is currently concentrated in a handful of (mostly) developed economies that are poised to emerge as the clear winners in the global AI race. At the same time, most other countries’ ability to take advantage of AI is limited. International cooperation is needed to democratize access and mitigate fears that AI will benefit only a subset of wealthy countries and leave the Global South further behind.

International coordination could also help governments manage cross-border risks and prevent a race to the bottom. Absent such coordination, some actors will exploit regulatory gaps in certain markets, offsetting the benefits of well-designed guardrails elsewhere. To prevent regulatory arbitrage, countries with better regulatory capacities would need to offer technical assistance to countries lacking it. In practice, this would entail pooling resources to identify and evaluate AI-related risks, disseminating technical knowledge about those risks, and helping countries develop regulatory responses to them.

Perhaps most importantly, international cooperation could contain the costly and dangerous AI arms race before it destabilizes the global order or precipitates a military conflict. Absent a joint agreement establishing rules governing dual-purpose (civil and military) AI, no country can risk curtailing its own military-driven development, lest it cede a strategic advantage to its adversaries.

Given the obvious benefits of international coordination, several attempts to develop global standards or methods of cooperation are already underway within institutions such as the OECD, the G20, the G7, the Council of Europe, and the United Nations. Yet it is reasonable to worry that these efforts will have only a limited impact. Given the differences in values, interests, and capabilities among states, it will be difficult to reach any meaningful consensus. For the same reason, the upcoming UK summit most likely will produce only lofty statements, endorse vague high-level principles, and commit to continue the dialogue.

The regulation debate

Not everyone is cheering for governments to succeed in their regulatory efforts. Some observers object to governments even attempting to regulate such a rapidly evolving technology.

These critics typically advance two arguments. The first is that AI is too complex and fast moving for legislators to understand and keep up with. The second argument holds that even if legislators were competent to regulate AI, they would likely err on the side of excessive precaution – doing too much – thereby curtailing innovation and undermining the gains from AI. If correct, either concern would provide grounds for governments to follow a “do no harm” principle, exercise restraint, and let the AI revolution follow its own course.

The argument that lawmakers are incapable of understanding such a complex, multifaceted, and fast-moving technology is easy to make, but remains unconvincing. Policymakers regulate many domains of economic activity without being experts themselves. Few regulators know how to build airplanes, yet they exercise uncontroversial authority over aviation safety. Governments also regulate medicines and vaccines, even though very few (if any) lawmakers are biotechnology experts. If only experts had the power to regulate, every industry would regulate itself.

Likewise, while the AI governance challenge is partly about the technology, it is also about understanding how that technology affects fundamental rights and democracy. This is hardly a domain where tech companies can claim expertise. Consider a company like Meta (Facebook). Its track record in content moderation and data privacy suggests that it is one of the least-qualified entities in the world to protect democracy or fundamental rights – as are most other leading tech companies. Given the stakes, governments, not developers, must take the lead in governing AI.

This is not to suggest that governments will always get regulation right, or that regulation will not force companies to divert resources from research and development toward compliance. However, if implemented correctly, regulation can encourage firms to invest in more ethical and less error-prone applications, steering the industry toward more robust AI systems. This would enhance consumer confidence in the technology, thus expanding – rather than diminishing – market opportunities for AI companies.

Governments have every incentive not to forgo the benefits associated with AI. They desperately need new sources of economic growth and innovations that will help them achieve better outcomes, such as improved education and health care, at lower cost. If anything, they are more likely to do too little, for fear of losing a strategic advantage and missing out on potential benefits.

The key to regulating any fast-evolving, multifaceted technology is to work closely with AI developers to ensure that the potential benefits are preserved, and that regulators remain agile. But close consultation with tech companies is one thing; simply handing over governance to the private sector is quite another.

Who’s in charge here?

Some commentators are less worried that governments do not understand AI, or that they will get AI regulation wrong, because they doubt that government action matters much at all. The techno-determinist camp suggests that governments ultimately have only a limited ability to regulate tech companies in the first place. Since the real power resides in Silicon Valley and other technology hubs where AI is being developed, there is no point in governments picking a fight that they will lose. High-level meetings and summits are destined to be sideshows that merely allow governments to pretend they are still in charge.

Some commentators even argue – not unconvincingly – that tech firms are “new governors” who are “exercising a form of sovereignty,” and ushering in a world that will not be unipolar, bipolar, or multipolar, but rather “technopolar.” The largest tech companies are indeed exercising greater economic and political influence than most states. The tech industry also has near-unlimited resources with which to lobby against regulations and defend itself in legal battles against governments.

Yet it does not follow that governments are powerless in this domain. The state remains the fundamental unit around which societies are built. As political scientist Stephen M. Walt recently put it, “Which do you expect to be around in 100 years? Facebook or France?” Despite all the influence tech companies have amassed, governments still have the ultimate authority to exercise coercive force.

This authority can be, and frequently has been, deployed to change the way firms operate. The user terms, community guidelines, and any other rules written by large tech companies remain subject to laws written by governments that have the authority to enforce compliance with those laws. Tech companies cannot decouple themselves from governments. Though they can try to resist and shape government regulations, they ultimately must obey them. They cannot force their way into mergers against antitrust authorities’ objections, nor can they refuse to pay digital taxes that governments enact, or offer digital services that violate a jurisdiction’s laws. If governments ban certain AI systems or applications, tech companies will have no choice but to comply or stay out of that market.

This is not merely hypothetical. Earlier this year, Sam Altman of OpenAI (the developer of ChatGPT) warned that his company might not offer its products in the EU, owing to regulatory constraints. Yet within days, he was backpedaling. OpenAI’s sovereignty is limited to the freedom not to do business in the EU or any other jurisdiction whose regulations it opposes. It is free to exercise that choice; but it is a costly choice to make.

A problem of will

The question, then, is not whether governments can govern the digital economy; it is whether they have the political will to do so. Since the commercialization of the internet in the 1990s, the US government has elected to delegate important governance functions to the private sector. This techno-libertarian approach is famously manifested in Section 230 of the 1996 Communications Decency Act, which shields online platforms from liability for any third-party content that they host. But even under this framework, the US government is not powerless. Though it gave platform companies free rein with Section 230, it retains the authority to repeal or amend that law.

The political will to do so may have been lacking in the past, but momentum for regulation is building as trust in the tech industry has declined. Over the past few years, US lawmakers have proposed bills not only to rewrite Section 230, but also to revive antitrust laws and establish a federal privacy law. And some lawmakers now are determined to regulate AI. They are holding hearings and already proposing legislation to address the recent advances in generative AI algorithms and large language models.

Yet while congressional Democrats and Republicans increasingly agree that tech companies have grown too powerful and need to be regulated, they are deeply divided when it comes to how to go about it. For some, the concern that AI regulation would undermine American technological progress and innovation is salient in an era of intensifying US-China competition. And, of course, tech companies continue to lobby aggressively and effectively, suggesting that even a bipartisan anti-tech crusade may change little in the end. As strong as the discontent about tech companies is, the political dysfunction within Congress could prove stronger.

Again, this does not mean that governments are not in charge. The EU, for its part, is not hampered by the same political dysfunction, and its recent legislative record has been impressive. Following its adoption of the General Data Protection Regulation (GDPR) in 2016, it has moved to regulate online platforms with its landmark 2022 laws: the Digital Services Act and the Digital Markets Act, which establish clear rules on content moderation and market competition, respectively. And the EU’s ambitious AI Act is expected to be finalized this year.

But for all the EU’s success in legislating, enforcement of its digital regulations has often failed to realize the measures’ stated goals. GDPR enforcement, especially, has drawn much criticism, and all the large antitrust fines that the EU has imposed on Google have done little to dent its dominance. These failures have led some to argue that the tech companies are already too big to regulate, and that AI will further entrench their market power, leaving the EU even more powerless to enforce its laws.

The Chinese government, of course, does not face this problem. Without the need to adhere to a democratic process, it was able to crack down dramatically and suddenly on the country’s tech industry starting in 2020, and tech companies duly capitulated. This relative “success” in holding tech companies accountable stands in stark contrast to European and American regulators’ experience. In both jurisdictions, regulators must fight lengthy legal battles against companies that will reliably contest, rather than acquiesce to, whatever regulatory actions they pursue.

The same pattern may well repeat with AI regulation. The US Congress will likely remain deadlocked, generating heated debates but no real action; and the EU will legislate, though continued uncertainty about the effectiveness of its regulation could lead to an outcome resembling that of the US. In that case, tech companies, not democratically elected governments, will be free to shape the AI revolution however they see fit.

Democracy’s big test

These scenarios raise a troubling possibility: only authoritarian regimes are capable of effectively governing AI. To disprove this proposition, the US, the EU, and other likeminded governments will have to demonstrate that democratic governance for AI is both feasible and effective. They will have to insist on their role as the primary rule-makers.

The upcoming summit likely will not convince the world that truly global AI rules are within reach anytime soon. The disagreements remain too deep for countries – especially so-called techno-democracies and techno-autocracies – to act in unison. Nonetheless, the summit can and should send a clear signal that tech companies remain beholden to governments, not the other way around.

While working closely with tech companies to foster AI innovation and maximize benefits, democratic governments also will need to protect their citizens, values, and institutions. Without this kind of dual commitment, the AI revolution will be much more likely to live up to its peril, not its promise. – Rappler.com

Anu Bradford, Professor of Law and International Organization at Columbia Law School, is the author of the forthcoming Digital Empires: The Global Battle to Regulate Technology (Oxford University Press, 2023).

This article was republished with permission from Project Syndicate.

The views expressed by the writer are his/her own and do not reflect the views or positions of Rappler.
