The Rabbit Hole
14 min read
TECH & SOCIAL CONTROL
ACTIVE
2017 – Present

AI & Deepfake Threats

AI-generated deepfakes can now produce photorealistic video and clone voices from seconds of audio. The technology has been used for election interference, fraud, and the creation of non-consensual intimate imagery at massive scale.

85/100 · 4 sources · 2 connections · 2 key players

deepfakes · AI · NCII · election interference · voice cloning · EU AI Act

You can now clone someone's voice from three seconds of audio. You can generate photorealistic video of anyone saying anything. AI-generated robocalls impersonating President Biden were used to discourage voting in the 2024 New Hampshire primary. An estimated 96% of deepfake videos online are non-consensual pornography targeting women. The tools are free, require no expertise, and the legal frameworks to address them barely exist.

Overview

Deepfake technology — AI systems capable of generating photorealistic fake video, audio, and images — has advanced from a research curiosity to a significant threat to information integrity, personal safety, and democratic processes. Modern deepfake tools can produce convincing video from a single photograph and clone a voice from just a few seconds of audio.

The non-consensual intimate imagery (NCII) crisis represents the most widespread harm. A widely cited 2019 analysis estimated that 96% of deepfake videos online were non-consensual pornography, overwhelmingly targeting women. Schools have reported incidents of students creating deepfake intimate images of classmates. The scale of the problem has overwhelmed existing legal frameworks.

Election interference through deepfakes has moved from theoretical to actual. In 2024, AI-generated robocalls mimicking President Biden's voice urged New Hampshire voters not to vote in the primary. Similar incidents have occurred in elections worldwide, including in India, Turkey, and the UK. The ability to create convincing fake statements from political leaders poses a fundamental challenge to democratic discourse.

The regulatory landscape is fragmented. The EU AI Act, adopted in 2024, establishes risk-based regulation of AI systems including deepfake disclosure requirements. The US approach has been largely state-by-state, with no comprehensive federal legislation. The FTC has taken enforcement actions against AI-enabled deception, but enforcement capacity lags far behind the technology's proliferation.
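The AI Act's transparency obligations require that AI-generated content be disclosed in a machine-readable way. As a rough illustration of what provenance labeling involves, here is a minimal sketch of a signed disclosure manifest bound to a specific file. The field names, the shared-secret HMAC signing, and the `SIGNING_KEY` are illustrative assumptions for the sketch; real provenance systems such as C2PA use certificate-based signatures and a far richer manifest format.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # illustrative only; real systems use asymmetric certificates


def make_disclosure_manifest(media_bytes: bytes, generator: str) -> dict:
    """Attach a signed 'AI-generated' disclosure to a piece of media."""
    payload = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "ai_generated": True,
        "generator": generator,
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, body, "sha256").hexdigest()
    return payload


def verify_disclosure(media_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, and that the manifest matches this exact file."""
    claimed = dict(manifest)
    sig = claimed.pop("signature", "")
    body = json.dumps(claimed, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        sig, hmac.new(SIGNING_KEY, body, "sha256").hexdigest()
    )
    hash_ok = claimed.get("content_sha256") == hashlib.sha256(media_bytes).hexdigest()
    return sig_ok and hash_ok


media = b"fake video bytes"
manifest = make_disclosure_manifest(media, "example-model-v1")
print(verify_disclosure(media, manifest))         # True
print(verify_disclosure(media + b"x", manifest))  # False: edited file no longer matches
```

The binding to a content hash is the important part: a label that travels in a caption or watermark can be stripped, but a signed manifest tied to the file's digest fails verification the moment the file is altered. This is also the scheme's weakness, since re-encoding the media detaches it from its manifest entirely.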

"An estimated 96% of deepfake videos online are non-consensual pornography targeting women. The tools are free, require no expertise, and can generate images from a single photograph."

Timeline

2017 · VERIFIED

Deepfakes Emerge

The term 'deepfake' is coined on Reddit as face-swapping AI tools become publicly available.

2023 · VERIFIED

Voice Cloning Fraud Surges

FBI reports surge in voice cloning fraud, with criminals using AI-cloned voices to impersonate family members.

FBI IC3 reports

January 2024 · VERIFIED

Biden Deepfake Robocall

AI-generated robocall mimicking Biden's voice tells New Hampshire voters not to vote in primary.

FCC investigation

March 2024 · VERIFIED

EU AI Act Adopted

European Parliament adopts the AI Act, the world's most comprehensive AI regulation.

Key Players

Sam Altman

OpenAI CEO

Leads the company behind ChatGPT and DALL-E. Has testified before Congress about AI risks.

Lina Khan

FTC Chair

Has pursued enforcement actions against AI-enabled fraud and deceptive practices.

The NCII Crisis

DOCUMENTED

The creation of non-consensual intimate deepfake imagery has become a mass phenomenon. Tools to create such content are freely available, require no technical expertise, and can produce results from a single clothed photograph. Reports of deepfake NCII have increased exponentially, with schools, universities, and law enforcement overwhelmed.

Legal protections remain inadequate. While some states have passed laws specifically addressing deepfake NCII, enforcement is difficult when content spreads rapidly across platforms and jurisdictions. Many platforms have policies against such content but struggle with detection and enforcement at scale.
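One mitigation platforms do deploy at scale is hash matching: fingerprints of known abusive images are registered (the approach behind PhotoDNA-style systems and StopNCII), and uploads are compared against the registry. The sketch below uses a simple average hash with Hamming-distance matching so that minor edits still match; the 8×8 toy "images" and the distance threshold are illustrative assumptions, and production systems rely on much more robust proprietary perceptual hashes.

```python
def average_hash(pixels):
    """64-bit perceptual hash of an 8x8 grayscale image (64 ints, 0-255).
    Each bit records whether a pixel is brighter than the image's mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


def matches_known(upload_hash, known_hashes, threshold=10):
    """Flag an upload within `threshold` bits of any registered hash,
    so small edits (re-encodes, brightness shifts) still match."""
    return any(hamming(upload_hash, h) <= threshold for h in known_hashes)


# Toy demo: an 'image', a lightly edited copy, and an unrelated one.
original = [i * 4 for i in range(64)]             # smooth gradient
edited = [min(255, p + 3) for p in original]      # slight brightness shift
unrelated = [255 if i % 2 else 0 for i in range(64)]

registry = {average_hash(original)}
print(matches_known(average_hash(edited), registry))     # True
print(matches_known(average_hash(unrelated), registry))  # False
```

The design choice matters for victims: matching on a perceptual fingerprint means the image itself never has to be shared with the platform, only its hash. The trade-off is that adversarial edits (crops, mirrors, heavy filters) can push a copy outside the threshold, which is why detection at scale remains an arms race.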

The psychological impact on victims is severe and well-documented, comparable to other forms of sexual abuse. Victims report anxiety, depression, social withdrawal, and in some cases suicidal ideation. The permanence of digital content — once shared, impossible to fully remove — compounds the harm.

The Bottom Line

Deepfake technology has undermined the presumption that video and audio evidence are authentic. Any image, video, or voice recording can now be fabricated at negligible cost. The implications for criminal justice, democratic discourse, and personal safety are catastrophic — and the technology is advancing faster than any regulatory response.

Primary Sources (4 cited)

1. EU AI Act (Legislation): European Union regulation establishing a risk-based AI governance framework.

2. FBI IC3 Reports (Government Report): FBI Internet Crime Complaint Center reports on AI-enabled fraud.

3. Senate AI Hearing Transcripts (Congressional Record): Congressional hearings on AI risks and regulation.

4. FTC AI Enforcement Actions (Government Record): Federal Trade Commission actions against AI-enabled deceptive practices.

Connected Topics

Social Media Manipulation
TECH · Heat: 90
Pegasus Spyware
INTEL · Heat: 91

