At the Pentagon, OpenAI Is In and Anthropic Is Out
Episode: 33 min · Read time: 2 min
Topics: Artificial Intelligence
AI-Generated Summary
Key Takeaways
- ✓ The "All Lawful Use" Loophole: The Pentagon demanded that AI labs accept an "all lawful use" standard rather than specific prohibitions. Anthropic refused because the U.S. has no meaningful AI regulation and no national privacy law, so activities like purchasing bulk data on Americans from data brokers and feeding it into AI models remain entirely legal, making "lawful" an insufficient safeguard.
- ✓ Supply Chain Risk Designation as a Weapon: The Pentagon threatened to designate Anthropic a supply chain risk, a label historically reserved for foreign entities such as Chinese chip suppliers and Kaspersky Lab. It represents the most punitive action the U.S. government has taken against a major domestic company in at least a century, with no formal legal proceeding initiated beyond a social media post.
- ✓ Data Broker Surveillance Gap: Federal agencies can legally purchase bulk data on millions of Americans from commercial data brokers, then run it through AI models for analysis. This is functionally equivalent to mass domestic surveillance yet falls outside legal definitions of it, making contractual prohibitions on "illegal" surveillance largely meaningless without explicit categorical bans.
- ✓ OpenAI's "Safety Stack" Claim Requires Scrutiny: Sam Altman announced that OpenAI would build model-level technical guardrails, a "safety stack," to prevent misuse inside Pentagon classified networks. Security analysts contacted by the hosts argue these guardrails cannot determine whether the data fed into a model was legally obtained, making the technical protection largely performative rather than substantive.
- ✓ Political Alignment as Business Survival Strategy: The Anthropic-Pentagon conflict illustrates a concrete consequence of not cultivating administration relationships. OpenAI's Greg Brockman donated $25 million to Trump's PAC; Tim Cook presented Trump with a trophy. Companies that maintained political alignment secured contracts, while Anthropic, perceived as ideologically opposed, faced attempted operational destruction through regulatory designation rather than contractual dispute resolution.
What It Covers
The Pentagon declared Anthropic a supply chain risk after contract negotiations collapsed over two red lines: mass domestic surveillance and fully autonomous weapons. OpenAI simultaneously secured a Pentagon deal while claiming identical restrictions, raising unresolved questions about whether the two agreements are substantively different or the divergent treatment is politically motivated.
Notable Moment
Former Trump administration AI policy official Dean Ball publicly described the Pentagon's actions against Anthropic as an attempted corporate murder based on ideology. Coming from within the administration's own orbit, the characterization drew comparisons to how the Chinese government eliminates tech companies that refuse to align with state priorities.