The Rabbit Hole
TECH & SOCIAL CONTROL
2022 - Present · 11 min read

AI Regulation Capture

The AI industry is executing the same regulatory capture playbook used by social media and finance: flood Washington with lobbyists and former employees, write the regulatory framework yourself, and use 'safety' language to erect barriers that protect incumbents from competition.

Tags: AI · regulation · OpenAI · lobbying · regulatory capture · safety · Google

The companies building the most powerful AI systems are simultaneously the ones writing the rules. OpenAI, Google, Anthropic, and Microsoft have spent hundreds of millions lobbying Washington while their employees fill the government panels that are supposed to regulate them.

Overview

The AI regulatory moment that began in 2022 has followed a pattern that students of regulatory capture will recognize: the most powerful players in the industry have positioned themselves as the stewards of safety while simultaneously working to shape regulation in ways that entrench their market position.

OpenAI spent $700,000 on lobbying in 2023 — a 4x increase from the previous year and the beginning of a rapid escalation. Google and Microsoft run lobbying operations that dwarf this figure, and each company has worked to ensure that AI regulation requires compliance infrastructure so expensive that only established players can afford it.

The "safety" framing is central to this strategy. By defining AI risk primarily in terms of existential or catastrophic scenarios — AI taking over the world, AI creating bioweapons — the dominant companies redirect regulatory attention away from near-term, concrete harms (algorithmic discrimination, labor displacement, surveillance) toward hypothetical future harms they claim to be uniquely positioned to address.

The revolving door is already spinning. Former FTC chair Lina Khan's aggressive tech enforcement era ended when Trump returned to office. Former OpenAI board members have become AI advisers to governments. The EU AI Act — the most comprehensive AI regulation to date — was shaped significantly by industry input and creates compliance burdens that favor large established players.

"OpenAI spent $700,000 on lobbying in 2023 — a 4x increase from the previous year. The companies building the most powerful AI systems are simultaneously the ones writing the rules."

Timeline

November 2022 · VERIFIED

ChatGPT Launch

OpenAI launches ChatGPT, triggering the mass-market AI moment and immediate calls for regulation.

May 2023 · VERIFIED

Sam Altman Senate Testimony

OpenAI CEO Sam Altman testifies to Congress calling for AI regulation — while framing the regulatory agenda in industry-friendly terms.

October 2023 · VERIFIED

Biden AI Executive Order

Biden's executive order establishes AI safety guidelines. Industry praises it; critics note it contains no enforcement mechanism.

March 2024 · VERIFIED

EU AI Act Passed

EU passes comprehensive AI legislation. Compliance costs estimated at tens of millions for large models — a barrier that favors incumbents.

January 2025 · VERIFIED

Trump Revokes Biden AI EO

Trump administration revokes Biden's AI executive order on day one, declaring it an obstacle to American AI dominance.

Key Players

Sam Altman

OpenAI CEO

Has testified to Congress calling for AI regulation while his company lobbies against specific provisions and seeks regulatory frameworks that favor incumbents.

Dario Amodei

Anthropic CEO

Former OpenAI researcher who has consistently advocated for safety-first regulation — while Anthropic competes directly with OpenAI.

Lina Khan

Former FTC Chair

Pursued aggressive tech antitrust enforcement, including scrutinizing Microsoft's investment in OpenAI. Replaced when Trump took office.

How 'Safety' Becomes a Moat

DOCUMENTED

The pattern is not unique to AI: incumbents in a new industry call for regulation, help write the regulations, and the regulations create compliance costs that only incumbents can afford. This has played out in finance (Dodd-Frank), pharmaceuticals (FDA requirements), and telecommunications.

In AI, the mechanism works as follows: large AI labs advocate for "responsible scaling policies" that require external audits, safety evaluations, and compliance documentation before releasing powerful models. These requirements cost millions of dollars and weeks of delay — manageable for OpenAI and Google, prohibitive for open-source developers and startups.

The result is that "AI safety" regulation, as currently proposed by industry-friendly advocates, would concentrate AI development in fewer hands while doing little about near-term harms like algorithmic discrimination, misinformation generation, or labor displacement.

Open-source AI developers and civil society groups have noted that almost no AI regulation proposal currently on the table addresses the harms that are actually happening today, while nearly all address speculative future scenarios that benefit the incumbents who define them.

The Bottom Line

The AI regulation debate is structured so that the companies doing the most to concentrate AI power are also the ones most vocally calling for safety regulation — and writing what that regulation looks like. The same dynamic has played out before, in finance, pharmaceuticals, and telecommunications, each time an industry passed through a similar moment.

Primary Sources (4 cited)

1. OpenSecrets AI Lobbying Data (Database) · Lobbying expenditure tracking for AI companies.
2. Stanford HAI AI Index Report (Academic Research) · Annual analysis of AI policy, industry concentration, and regulatory developments.
3. EU AI Act Text (Legislation) · Full text of the EU's AI regulation and its compliance requirements.
4. FTC AI Market Reports (Government Report) · Federal Trade Commission analysis of AI market concentration.
