The death of the Chevron doctrine complicates U.S. policymakers’ efforts to regulate AI—but there’s another way

A divided U.S. Supreme Court has thrown out a decades-old legal doctrine that empowered federal regulators to interpret unclear laws, issuing a blockbuster ruling that will constrain environmental, consumer and financial watchdog agencies.

In a time when passing basic legislation is already challenging, the Supreme Court’s recent decision in Loper Bright Enterprises v. Raimondo has reshaped the options for how the U.S. can govern highly dynamic areas of innovation such as artificial intelligence (AI). By overturning the Chevron doctrine’s deference to executive agencies, the Court has weakened regulatory agencies’ ability to interpret and administer laws on issues of public concern, transferring interpretive authority over federal statutes to the courts.

America is locked in a battle for global leadership in AI, both in creating cutting-edge technology and in ensuring that developers build new models and applications safely, for the good of society. Against this backdrop, determining how the U.S. governs, innovates, and competes globally in AI is critically important, especially when it is nearly impossible for a partisan Congress to write unambiguous laws about complex technologies. As the dust settles, this new reality will become more apparent, and it is especially urgent as Congress and the White House grapple with AI governance.

To rise to the challenge of AI governance in this new environment, the U.S. needs nimble, forward-thinking policies to protect against AI’s risks while promoting American innovation.

We propose three principles to enable an agile approach to AI governance and maintain the United States’ technological edge. Our conclusions rest on extensive analysis across the Center for Security and Emerging Technology (CSET) research team, which grounded our understanding of the practical realities of AI governance. That work drew on evaluations of existing AI models’ technical capabilities, studies of AI risks and incidents, comparative policy analysis of the U.S. and other nations, examination of current regulatory authorities and technological evaluation methods, and insights from conversations with members of Congress, current and former senior officials, and other key stakeholders.

First, effective protection against AI harms hinges on our ability to identify them. Prioritizing AI incident reporting is crucial to mapping the landscape of AI risks. New AI governance frameworks should incentivize companies to report incidents involving their systems to regulatory agencies or a neutral body such as the National Institute of Standards and Technology. This approach would enhance public awareness of AI risks and help identify patterns across industries.

Implementing a robust harm measurement system and mandating comprehensive incident reporting can discourage risky innovation practices without stifling progress. A thoughtful phased amnesty system for self-reporting would motivate AI developers to learn from mistakes and act responsibly. Combining this mandatory approach with voluntary and citizen reporting would provide a more complete picture of AI safety. While setting up such a system will involve initial costs and challenges, a reporting framework that creates the right incentives for industry could inform and catalyze future AI governance efforts.

Second, an adaptive and flexible approach to AI governance is crucial. Federal agencies should leverage existing regulatory authorities applicable to AI rather than pursue new regulations. They must also acknowledge their limitations in areas such as human capital, expertise, and infrastructure for testing and evaluation. The Supreme Court’s decision will inevitably raise questions about precisely how agencies understand their existing authorities, but it also underlines the imperative of adopting governance approaches that can incorporate private-sector expertise while avoiding regulatory capture.

Enhancing AI literacy among policymakers, and now among judges, is essential. This includes developing a fundamental understanding of AI’s strengths, weaknesses, applications, and limitations. Such knowledge will be key to crafting adaptive governance strategies that can keep pace with rapid technological advancement.

Third, AI governance should leverage America’s strengths: our culture of innovation and our decentralized, dynamic economy. This is particularly crucial in the context of technological competition with China. While the Biden administration has taken defensive measures, such as restricting exports of advanced semiconductors and controlling outbound investments in critical technologies, these steps may only temporarily slow China’s progress.

Instead of focusing on defensive strategies, policymakers should aim to accelerate innovation within the distributed U.S. innovation environment. This approach plays to our strengths rather than competing on China’s terms; China has long-standing expertise in legal, illegal, and extralegal forms of tech transfer. Policymakers should complement necessary controls with regulatory incentives that reinforce America’s capacity to build robust innovation ecosystems and attract top talent. Incentivizing breakthrough research, creating tax incentives for reshoring critical supply chains to the U.S. and friendly countries, and continuing to develop a favorable startup environment can help us outpace our rivals through relentless creativity and adaptability rather than restrictions.

The Loper Bright decision, while challenging existing regulatory approaches, presents an opportunity to create a more agile, distributed, and innovation-friendly governance environment for AI. By focusing on incentives, fostering adaptability, and leveraging our strong innovation ecosystem, the U.S. can turn Loper Bright into a spur toward smart governance of artificial intelligence, and perhaps toward an approach that could serve as a model for dealing with other future technologies.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
