25 September 2024

Regulatory Capture

'How Do AI Companies “Fine-Tune” Policy? Examining Regulatory Capture in AI Governance' by Kevin Wei, Carson Ezell, Nick Gabrieli and Chinmay Deshpande

Industry actors in the United States have gained extensive influence in conversations about the regulation of general-purpose artificial intelligence (AI) systems. This article examines the ways in which industry influence in AI policy can result in policy outcomes that are detrimental to the public interest, i.e., scenarios of “regulatory capture.” First, we provide a framework for understanding regulatory capture. Then, we report the results from 17 expert interviews identifying what policy outcomes could constitute capture in AI policy and how industry actors (e.g., AI companies, trade associations) currently influence AI policy. We conclude with suggestions for how capture might be mitigated or prevented. 

In accordance with prior work, we define “regulatory capture” as situations in which:

1. A policy outcome contravenes the public interest. These policy outcomes are characterized by regulatory regimes that prioritize private over public welfare and that could hinder such regulatory goals as ensuring the safety, fairness, beneficence, transparency, or innovation of general-purpose AI systems. Potential outcomes can include changes to policy, enforcement of policy, or governance structures that develop or enforce policy. 

2. Industry actors exert influence on policymakers through particular mechanisms to achieve that policy outcome. We identify 15 mechanisms through which industry actors can influence policy. These mechanisms include advocacy, revolving door (employees shuttling between industry and government), agenda-setting, cultural capture, and other mechanisms as defined in Table 0. Policy outcomes that arise absent industry influence—even those which may benefit industry—do not reflect capture. 

To contextualize these outcomes and mechanisms to AI policy, we interview 17 AI policy experts across academia, government, and civil society. We seek to identify possible outcomes of capture in AI policy as well as the ways that AI industry actors are currently exerting influence to achieve those outcomes. 

With respect to potential captured outcomes in AI policy, experts were primarily concerned with capture leading to a lack of AI regulation, weak regulation, or regulation that overemphasizes certain policy goals at the expense of others. 

Experts most commonly identified that AI industry actors use the following mechanisms to exert policy influence:

• Agenda-setting (15 of 17 interviews): Interviewees reported that industry actors advance anti-regulation narratives and are able to steer policy conversations toward or away from particular problems posed by AI. These actors, including AI companies, are also able to set default standards, measurement metrics, and regulatory approaches that fail to reflect public interest goals. 

• Advocacy (13): Interviewees were concerned with AI companies’ and trade associations’ advocacy activities targeted at legislators. 

• Academic capture (10): Interviewees identified ways that industry actors can direct research agendas or promote particular researchers, which could in turn influence policymakers. 

• Information management (9): Interviewees indicated that industry actors hold large information advantages over government actors and are able to shape policy narratives by strategically controlling or releasing specific types of information. 

To conclude, we explore potential measures to mitigate capture. Systemic changes are needed to protect the AI governance ecosystem from undue industry influence—building technical capacity within governments and civil society (e.g., promoting access requirements, providing funding independent of industry, and creating public AI infrastructure) could be a first step towards building resilience to capture. Procedural and institutional safeguards may also be effective against many different types of capture; examples include building regulatory capacity in government, empowering watchdogs, conducting independent review of regulatory rules, and forming advisory boards or public advocates. Other mitigation measures that are specific to different types of industry influence are outlined in Table 0. 

Although additional research is needed to identify more concrete solutions to regulatory capture, we hope that this article provides a starting point and common framework for productive discussions about industry influence in AI policy.

The 15 mechanisms are summarised as follows:

1. Advocacy 

2. Procedural obstruction 

3. Donations, gifts, and bribes 

4. Private threats 

5. Revolving door 

6. Agenda-setting 

7. Information management 

8. Information overload 

9. Group identity 

10. Relationship networks 

11. Status 

12. Academic capture 

13. Private regulator capture 

14. Public relations 

15. Media capture