Why Make Foreign Influence Easy?
Segment #662
An Analysis
Foreign Psychological Operations (Psyops) and Influence Campaigns Targeting U.S. Political Tensions
(This is based on publicly available information, so it likely captures only about 20% of what is really going on. That said, it is informative. It also raises the question of why we permit any foreign influence at all, including lobbyists, unvetted students, and unvetted military-age males crossing our borders.)
Foreign adversaries, primarily Russia, Iran, and China, have conducted sophisticated psychological operations (psyops) and influence campaigns to exploit and amplify U.S. political divisions. These efforts rarely create new tensions from scratch but instead magnify existing societal fractures (e.g., on immigration, race, gender issues, election integrity, and foreign policy) to undermine trust in democratic institutions, sow discord, and advance geopolitical goals. During the 2024 U.S. election cycle, these operations reached a high intensity, though U.S. officials assessed they did not materially alter vote outcomes or manipulate vote tallies at scale.

Key Actors and Their Targeted Approaches
Russia (most active and sophisticated):
Russia focused on boosting right-wing narratives and candidates perceived as isolationist or skeptical of U.S. aid to Ukraine (e.g., favoring Donald Trump). Tactics included:
Funding U.S.-based right-wing influencers who were unaware of the money's origin (e.g., via Tenet Media, where RT employees allegedly funneled ~$10 million to creators like Tim Pool and Benny Johnson).
The "Doppelganger" campaign: Creating fake websites mimicking legitimate U.S. media (e.g., washingtonpost.pm) and using AI-generated content/ads to spread pro-Kremlin propaganda.
Manufacturing deepfake videos and false claims of election fraud to erode confidence, often amplified on social media.
Goal: Weaken U.S. support for Ukraine/NATO and polarize conservatives against "establishment" figures.
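The Doppelganger tactic above (e.g., washingtonpost.pm impersonating washingtonpost.com) is mechanically detectable: compare newly registered domains against a watchlist of legitimate outlets, ignoring the TLD. A minimal sketch under that assumption — the watchlist and threshold here are illustrative, and real pipelines add registrar feeds, homoglyph tables, and certificate-transparency monitoring:

```python
from difflib import SequenceMatcher

# Hypothetical watchlist of legitimate news domains (illustrative only).
LEGIT_DOMAINS = ["washingtonpost.com", "foxnews.com", "reuters.com"]

def base_name(domain: str) -> str:
    """Strip the TLD so washingtonpost.pm and washingtonpost.com compare equal."""
    return domain.rsplit(".", 1)[0]

def lookalike_score(candidate: str) -> tuple[str, float]:
    """Return the closest watchlisted domain and a 0-1 similarity ratio."""
    best = max(
        LEGIT_DOMAINS,
        key=lambda d: SequenceMatcher(None, base_name(candidate), base_name(d)).ratio(),
    )
    ratio = SequenceMatcher(None, base_name(candidate), base_name(best)).ratio()
    return best, ratio

def is_suspicious(candidate: str, threshold: float = 0.85) -> bool:
    """Flag a domain that closely mimics a known outlet but is not on the watchlist."""
    if candidate in LEGIT_DOMAINS:
        return False
    _, score = lookalike_score(candidate)
    return score >= threshold
```

Here `is_suspicious("washingtonpost.pm")` returns True because the base names match exactly, while an unrelated domain falls well below the threshold.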
Iran:
Iran acted as a "chaos agent," targeting both sides but with notable efforts to exploit left-leaning causes:
Covertly posing as activists online, encouraging/organizing anti-Israel protests (e.g., campus Gaza war demonstrations), and providing financial support to some protesters.
Hacking campaigns (e.g., Trump campaign leaks) and disinformation to stoke anti-U.S. sentiment.
Goal: Undermine confidence in democracy broadly, avenge past U.S. actions (e.g., Soleimani assassination), and exploit divisions on Israel/Palestine to alienate progressive voters.
China:
China was more cautious and long-term oriented, avoiding overt candidate endorsements:
"Spamouflage"/"Dragonbridge": Networks of fake accounts posing as American voters (including MAGA supporters or veterans) to post divisive content on issues like abortion, immigration, and Taiwan policy.
Targeting down-ballot races (e.g., attacking anti-China congressional candidates) and amplifying narratives to portray U.S. democracy as dysfunctional.
Goal: Sideline U.S. critics of China, erode global faith in American institutions, and distract from Beijing's own issues.
All three used generative AI for deepfakes, fake articles, and personalized content, making operations cheaper and more scalable. The common theme: amplifying wedge issues to make the U.S. appear weak and divided.

Ease of Foreign Infiltration into U.S. Political Organizations

Foreign powers find it relatively easy to conduct influence operations but harder to achieve deep, direct infiltration of formal organizations (e.g., parties, campaigns) without detection:
Easy aspects (low-barrier influence):
Social media enables anonymous accounts, bots, and paid amplification to reach millions without physical presence.
Unwitting intermediaries (e.g., influencers paid through cutouts) launder propaganda effectively — as seen in Russia's Tenet Media case.
Open U.S. society: Free speech protections, decentralized politics, and volunteer-heavy grassroots groups allow indirect exploitation (e.g., funding protests or co-opting narratives).
AI lowers costs: Generating convincing content is now cheap and fast.
Harder aspects (direct infiltration):
U.S. counterintelligence (FBI, etc.) actively monitors and disrupts (e.g., domain seizures, indictments).
Formal vetting in campaigns/parties and legal bans on foreign donations limit deep penetration.
Past exposures (e.g., 2016 Russian hacks) have heightened awareness, making overt recruitment riskier.
Still feasible via long-term cultivation (e.g., business ties or diaspora communities), but high-risk and less common than broad disinformation.
Overall, adversaries prefer "influence without infiltration" — amplifying authentic American voices — because it is deniable, scalable, and exploits polarization organically.

U.S. Intelligence Community (IC) Monitoring and Response

The U.S. IC — led by ODNI (Foreign Malign Influence Center), FBI (Foreign Influence Task Force), and CISA — has significantly ramped up monitoring since 2016, treating foreign influence as a top threat:
Meta (Facebook/Instagram/WhatsApp)
Methods: mass removal of fake accounts, Pages, and Groups; ad bans and payment blocking; AI detection of coordinated inauthentic behavior (CIB); public quarterly Adversarial Threat Reports with IOCs shared on GitHub.
Recent actions: in Q1 2025, disrupted three major networks (China: 157 Facebook and 17 Instagram accounts using AI personas targeting Taiwan, Myanmar, and Japan; Iranian and Romanian ops); ongoing takedowns of Russian "Doppelganger" clones and Iranian chaos-agent networks; removed thousands of accounts tied to Spamouflage/Dragonbridge.
Assessment: global reach; prevented authentic audience buildup in most cases; highest volume of removals industry-wide.

Microsoft Threat Intelligence
Methods: account/content disruption across LinkedIn, GitHub, and Bing; domain seizures via partnerships; AI-enhanced deepfake detection and content-provenance tools; detailed public reports (Digital Defense Report 2025).
Recent actions: tracked and disrupted evolving Russian Doppelganger expansions and Iranian cyber-enabled influence operations; exposed AI use in Chinese ops and shared indicators leading to partner takedowns; the November 2025 Digital Defense Report highlighted scaled AI tactics by nation-states.
Assessment: high attribution accuracy; focused on hybrid cyber-influence operations.

Google Threat Analysis Group (TAG) + Mandiant
Methods: YouTube channel/account terminations; Blogger/AdSense shutdowns; collaboration with Graphika on reports leading to cross-platform enforcement.
Recent actions: continued Dragonbridge/Spamouflage takedowns (100k+ accounts since 2019); exposed AI-generated presenters and fake sites; disrupted pro-Beijing ops targeting elections.
Assessment: persistent against Chinese ops; disrupted networks showed low organic engagement.

Graphika (independent analytics firm)
Methods: in-depth network mapping and public reports; tips to platforms and governments that trigger takedowns; visual/AI analysis of CIB.
Recent actions: exposed Chinese "Falsos Amigos" fake news sites and Spamouflage expansions into Europe and the Global South; collaborated on Doppelganger attributions.
Assessment: investigative; often the catalyst for platform actions.

OpenAI
Methods: model-access revocation for abuse; disruption of operations using ChatGPT/GPT tools.
Recent actions: removed networks tied to Russia, China, and Israel that used AI for content generation and translation.
Assessment: targets AI-enabled scaling of operations.

Other (Recorded Future, etc.)
Methods: threat-intelligence sharing leading to sanctions and indictments; private-sector alerts.
Recent actions: detailed Russian AI-audio fakes and fake media sites.
Assessment: supports the broader ecosystem.
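The "coordinated inauthentic behavior" detection credited to the platforms above reduces, in its simplest form, to spotting many distinct accounts posting near-identical text within a tight time window. A minimal sketch under that assumption — the data shape is hypothetical, and production systems add embedding similarity, account-age signals, and graph features:

```python
import hashlib
from collections import defaultdict

def fingerprint(text: str) -> str:
    """Normalize and hash post text so trivially edited copies collide."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def flag_coordinated(posts, min_accounts=3, window_secs=3600):
    """posts: iterable of (account_id, timestamp_secs, text) tuples.
    Returns the fingerprints posted by >= min_accounts distinct accounts
    within window_secs of one another."""
    by_fp = defaultdict(list)
    for account, ts, text in posts:
        by_fp[fingerprint(text)].append((ts, account))
    flagged = set()
    for fp, events in by_fp.items():
        events.sort()
        # Slide a window over the sorted timestamps, counting distinct accounts.
        for i in range(len(events)):
            accounts = {a for t, a in events if 0 <= t - events[i][0] <= window_secs}
            if len(accounts) >= min_accounts:
                flagged.add(fp)
                break
    return flagged
```

Three accounts pushing the same message inside an hour get flagged; a one-off post does not. Real CIB enforcement layers many such weak signals rather than relying on any single rule.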
Why Private Firms Are Now the "Attackers"
Vacuum from USG cutbacks: With the FBI's Foreign Influence Task Force (FITF), ODNI's Foreign Malign Influence Center (FMIC), and CISA's mis-, dis-, and malinformation (MDM) team gutted, platforms lost formalized government coordination but gained freedom to act unilaterally.
Economic/self-interest motive: Influence ops spam platforms, erode user trust, and cost billions in moderation — direct disruption protects revenue.
Technological edge: Private firms control the infrastructure (accounts, domains, AI models) adversaries rely on, enabling rapid, scalable enforcement.
AI arms race: Adversaries use genAI for cheaper/faster ops → firms counter with superior detection AI and content authenticity tools (e.g., Microsoft's provenance tech).
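The content-authenticity approach mentioned above can be illustrated with a bare-bones signed manifest: the publisher binds a hash of the media to an origin claim, and a verifier checks both integrity and signature. A toy sketch using an HMAC in place of the public-key signatures and certificate chains that real provenance standards (e.g., C2PA) use — the key, field names, and manifest shape are all illustrative:

```python
import hashlib
import hmac
import json

# Illustrative shared secret; real provenance systems use public-key
# signatures with certificate chains, not a shared HMAC key.
PUBLISHER_KEY = b"demo-key-not-for-production"

def make_manifest(media: bytes, publisher: str) -> dict:
    """Publisher side: bind a content hash and an origin claim together."""
    digest = hashlib.sha256(media).hexdigest()
    payload = json.dumps({"sha256": digest, "publisher": publisher}, sort_keys=True)
    sig = hmac.new(PUBLISHER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify(media: bytes, manifest: dict) -> bool:
    """Verifier side: reject if the media was altered or the manifest forged."""
    expected = hmac.new(PUBLISHER_KEY, manifest["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["sig"]):
        return False  # manifest tampered with or not from the key holder
    claims = json.loads(manifest["payload"])
    return claims["sha256"] == hashlib.sha256(media).hexdigest()
```

Altering either the media bytes or the origin claim breaks verification, which is the core property provenance tooling relies on to separate authentic footage from deepfake substitutes.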
Limitations & Adversary Adaptation
Disruptions are highly effective at prevention (most networks die before gaining traction) but reactive — new fake accounts spin up daily.
Russia/China/Iran persist via "smash-and-grab" volume tactics and off-platform sites.
No private "hack-back" into state C2 servers for pure influence ops (unlike some ransomware cases).
In 2025, private contractors aren't just monitoring — they're the frontline offensive force imposing real operational costs on foreign psyops, filling the gap left by reduced government capacity. This privatized model has kept most campaigns contained but relies heavily on a handful of tech giants.