Hard Fork

OpenAI's Fog of War + Betting on Iran + Hard Fork Review of Slop

65 min episode · 3 min read


Topics

Artificial Intelligence, History

AI-Generated Summary

Key Takeaways

  • OpenAI Pentagon Damage Control: Sam Altman publicly admitted the Pentagon deal announcement was rushed, sloppy, and opportunistic, then amended contract language to explicitly prohibit using OpenAI tools for domestic surveillance of US persons. However, government procurement experts note that releasing only "relevant portions" of the contract makes independent verification impossible, leaving employee and public trust largely unrestored despite the AMA and blog post efforts.
  • AI Talent Leverage: OpenAI's original technical staff — those with three-plus years building frontier models — retain significant leverage because they hold irreplaceable knowledge for training GPT-6 and beyond. Leaders cannot afford mass defections from this group. The resignation of post-training VP Max Schwartzer, who publicly cited respect for Anthropic's values upon departure, signals that contract amendments have not satisfied this critical employee segment.
  • Anthropic's Dual Reality: Anthropic simultaneously faces an existential Pentagon legal battle and extraordinary revenue growth, reaching a projected $20 billion annualized run rate in mid-2025, up from $1 billion at the start of the year — a 20x increase. The growth is driven primarily by Claude Code adoption in enterprise. The US State Department has already switched from Claude to GPT-4.1, a model now considered several generations behind current capabilities.
  • Prediction Market Insider Trading Risk: More than 150 accounts placed bets of at least $1,000 correctly predicting a US airstrike on Iran within one day of the strike occurring. Israel has already arrested individuals using classified military information to trade on Polymarket. The CFTC, the primary regulator for platforms like Kalshi, lacks sufficient enforcement staff to investigate the volume of potentially insider-informed trades occurring daily across these platforms.
  • Soft Nationalization Pattern: The current Pentagon pressure on AI companies — demanding model behavior changes, threatening Defense Production Act invocation, and pulling agencies away from non-compliant vendors — represents an early template for gradual government control of AI. This differs from World War II-style asset seizure; instead it operates through contract clauses, supply chain designations, and regulatory pressure, with full nationalization remaining a plausible longer-term outcome as model capabilities increase.

What It Covers

OpenAI navigates Pentagon contract fallout as post-training VP Max Schwartzer resigns and employees publicly condemn the deal. Prediction markets face scrutiny after more than 150 accounts correctly bet on US strikes against Iran. Guest Arijeta Lajka reports that over 40% of videos recommended on YouTube Kids in a single 15-minute session were AI-generated slop.

Key Questions Answered

  • AI Slop Saturation in Children's Content: Arijeta Lajka's New York Times investigation found that in a single 15-minute YouTube scrolling session starting from approved channels like Cocomelon, over 40% of recommended videos were AI-generated. YouTube's current policy only requires creators to label "realistic-looking" synthetic content, leaving animated AI slop unlabeled. No filtering tool exists for parents on YouTube Kids, and YouTube Shorts time-limit controls announced in January 2025 represent the only near-term mitigation option.

Notable Moment

Arijeta Lajka revealed that after she published her story linking to a YouTube video showing a beloved children's character with its stomach cut open, YouTube removed the video only after she flagged it directly. The content had been live and actively recommended to children before that, illustrating how reactive, rather than proactive, YouTube's moderation remains.
