Hard Fork

At the Pentagon, OpenAI Is In and Anthropic Is Out

33 min episode · 2 min read


Topics

Artificial Intelligence

AI-Generated Summary

Key Takeaways

  • The "All Lawful Use" Loophole: The Pentagon demanded that AI labs accept an "all lawful use" standard rather than specific prohibitions. Anthropic refused because the U.S. lacks meaningful AI regulation and has no national privacy law — meaning activities like purchasing bulk American data from data brokers and feeding it into AI models remain entirely legal, making "lawful" an insufficient safeguard.
  • Supply Chain Risk Designation as a Weapon: The Pentagon's threat to designate Anthropic a supply chain risk — a label historically reserved for Chinese chip suppliers and foreign entities like Kaspersky Lab — represents, by the hosts' account, the most punitive action the U.S. government has taken against a major domestic company in at least a century. So far the threat has surfaced only in a social media post, with no formal legal proceeding initiated.
  • Data Broker Surveillance Gap: Federal agencies can legally purchase bulk data on millions of Americans from commercial data brokers, then run that data through AI models for analysis. This is functionally equivalent to mass domestic surveillance but falls outside legal definitions of it — making contractual prohibitions on "illegal" surveillance largely meaningless without explicit categorical bans.
  • OpenAI's "Safety Stack" Claim Requires Scrutiny: Sam Altman announced OpenAI would build model-level technical guardrails — a "safety stack" — to prevent misuse inside Pentagon classified networks. Security analysts contacted by the hosts argue these guardrails cannot determine whether data fed into a model was legally obtained, making the technical protection largely performative rather than substantive.
  • Political Alignment as Business Survival Strategy: The Anthropic-Pentagon conflict illustrates a concrete consequence of not cultivating administration relationships. OpenAI's Greg Brockman donated $25 million to Trump's PAC; Tim Cook presented Trump with a trophy. Companies that maintained political alignment secured contracts; Anthropic, perceived as ideologically opposed, faced attempted operational destruction through regulatory designation rather than contractual dispute resolution.

What It Covers

The Pentagon declared Anthropic a supply chain risk after contract negotiations collapsed over two red lines — mass domestic surveillance and fully autonomous weapons — while OpenAI simultaneously secured a Pentagon deal claiming identical restrictions, raising unresolved questions about whether the agreements are substantively different or politically motivated.

Notable Moment

Former Trump administration AI policy official Dean Ball publicly described the Pentagon's actions against Anthropic as an attempted corporate murder based on ideology — a characterization that is especially striking coming from within the administration's own orbit, and one that drew comparisons to how the Chinese government eliminates tech companies that refuse to align with state priorities.
