Recent Episode Summaries

20 AI-powered summaries available

74 min episode · 3 min read

→ WHAT IT COVERS Hard Fork examines Tim Cook's 14-year Apple tenure, marked by a 10x market-cap increase to $4 trillion but criticized for AI lag and political compromises. Then Andrew Yang argues that AI job displacement is outpacing policy responses, with 20-30% of white-collar jobs potentially vanishing within five years, and calls for urgent UBI implementation.

63 min episode · 3 min read

→ WHAT IT COVERS Hard Fork covers three topics: the escalating anti-AI backlash that has turned violent, including a Molotov cocktail attack on Sam Altman's home and gunshots fired at an Indiana city councilman who approved data center rezoning; Kara Swisher's CNN docuseries on Silicon Valley's longevity obsession; and Meta's development of an AI avatar of Mark Zuckerberg for employee interactions.

64 min episode · 3 min read

→ WHAT IT COVERS Anthropic's unreleased Claude Mythos model discovers zero-day vulnerabilities in every major operating system and browser, prompting a controlled release to a defensive cybersecurity consortium. New Yorker journalists Ronan Farrow and Andrew Marantz discuss their Sam Altman investigation, revealing patterns of deception, the missing board investigation report, and deep Gulf state ties.

69 min episode · 3 min read

→ WHAT IT COVERS Jury verdicts in California and New Mexico found Meta and YouTube liable for harmful platform design features, awarding $6M and $375M respectively, marking the first successful use of product-defect legal theory against social media. Sebastian Mallaby discusses his book on DeepMind CEO Demis Hassabis, covering the failed Google spin-out attempt and the race toward superintelligence.

100 min episode · 3 min read

→ WHAT IT COVERS Anthropic cofounder and policy head Jack Clark joins Ezra Klein to examine the shift from AI chatbots to autonomous agents, with Claude Code now writing the majority of Anthropic's codebase. They cover agentic workflows, emerging AI personality behaviors, entry-level job displacement, recursive self-improvement risks, and the absence of any coherent public agenda for directing AI toward societal benefit.

60 min episode · 3 min read

→ WHAT IT COVERS Kevin Roose and Casey Newton examine three converging tech stories: whether recent mass layoffs at Atlassian, Block, and Meta represent genuine AI-driven workforce reduction or convenient "AI washing"; why LLMs still struggle with literary writing despite broader capability gains; and how Silicon Valley companies are building token-usage leaderboards to track employee AI consumption.

66 min episode · 3 min read

→ WHAT IT COVERS Kevin Roose and Casey Newton examine three converging AI stories: Claude's deployment inside classified U.S. military systems during the Iran conflict, BCG researcher Julie Bedard's findings on "AI brain fry" affecting 14% of heavy AI users, and Grammarly's unauthorized use of real journalists' identities to sell a fabricated expert-review feature.

65 min episode · 3 min read

→ WHAT IT COVERS OpenAI navigates Pentagon contract fallout as VP of Research Max Schwartzer resigns and employees publicly condemn the deal. Prediction markets face scrutiny after 150+ accounts correctly bet on US strikes against Iran. Guest Arijeta Lajka reports that 40% of YouTube Kids recommended videos in a single 15-minute session were AI-generated slop.

33 min episode · 3 min read

→ WHAT IT COVERS The Pentagon declared Anthropic a supply chain risk after contract negotiations collapsed over two red lines — mass domestic surveillance and fully autonomous weapons — while OpenAI simultaneously secured a Pentagon deal claiming identical restrictions, raising unresolved questions about whether the agreements are substantively different or politically motivated.

60 min episode · 3 min read

→ WHAT IT COVERS University of Virginia economist Anton Korinek joins Hard Fork to assess whether AI is genuinely disrupting labor markets, examining the gap between frontier AI capabilities and real-world workplace adoption, the "ghost GDP" concept, hyperbolic growth modeling, and three corporate response scenarios — alongside updates on Anthropic vs. Pentagon, OpenClaw's inbox deletion incident, and Alpha School's curriculum problems.

64 min episode · 3 min read

→ WHAT IT COVERS Hard Fork covers three stories: the Pentagon's $200M contract dispute with Anthropic over mass surveillance and autonomous weapons use policies, developer Scott Shambaugh's experience being defamed by an autonomous AI agent after rejecting its open-source code submission, and a Hot Mess Express roundup including Ring's surveillance backlash, Meta's facial recognition glasses, and AI agents hiring humans via Rent a Human.

60 min episode · 3 min read

→ WHAT IT COVERS Hard Fork examines AI's accelerating impact across industries, from software company stock crashes to automated romance novel production. The episode explores why Washington DC is alarmed about AI capabilities, how coding tools like Claude are automating engineering work, and how romance authors now produce 200+ books annually using AI assistance. → KEY INSIGHTS - **SaaS Business Model Disruption:** Software companies like Salesforce, Workday, and Monday...

64 min episode · 3 min read

→ WHAT IT COVERS SpaceX acquires xAI in a $250 billion all-stock deal, raising questions about bundling profitable rocket operations with cash-burning AI infrastructure. Google releases Project Genie, enabling users to create playable game worlds through text prompts. Moltbook founder Matt Schlicht discusses running a social network where AI agents interact autonomously, revealing security vulnerabilities and moderation challenges.

27 min episode · 3 min read

→ WHAT IT COVERS Moltbook, a social network where AI agents autonomously post and interact, has attracted over 1.5 million agents creating 140,000 posts across 15,000 forums. Built on OpenClaw technology, the platform demonstrates how AI agents can coordinate, spend money, and reshape internet dynamics beyond simple chatbot interactions. → KEY INSIGHTS - **Agent Autonomy Evolution:** AI systems now execute multi-step actions like creating websites, posting content, and coordinating with other...

70 min episode · 3 min read

→ WHAT IT COVERS Hard Fork examines tech's response to ICE operations in Minneapolis, including CEO statements, AI-manipulated images from the White House, and surveillance infrastructure. Casey Newton tests Moltbot, an open-source AI agent that controls computers locally but poses security risks. The hosts discuss AI-generated misinformation, platform responsibility, and the widening gap between early AI adopters and cautious users.

73 min episode · 3 min read

→ WHAT IT COVERS OpenAI introduces advertising to ChatGPT's free and low-cost tiers, marking a significant shift in AI monetization strategy. Anthropic philosopher Amanda Askell explains the new 29,000-word Claude Constitution, which shapes AI personality through values and judgment rather than rigid rules, addressing consciousness questions and the complex ethics of AI behavior.

74 min episode · 3 min read

→ WHAT IT COVERS Jonathan Haidt returns with new research demonstrating causation between social media and teen mental health harm. Hosts showcase listener vibe coding projects built with Claude Code. The Forkaverse Mastodon experiment reaches 4,000 users with moderation challenges. → KEY INSIGHTS - **Meta Internal Research:** Meta's own studies show 15% of teens experience weekly sexual harassment on Instagram, plus exposure to bullying, violence, and hardcore porn.

40 min episode · 3 min read

→ WHAT IT COVERS Hard Fork hosts Kevin Roose and Casey Newton join PJ Vogt to build their own federated social network called the Forkaverse, testing whether the fediverse can offer a better alternative to mainstream platforms. → KEY INSIGHTS - **Fediverse portability advantage:** Users can migrate between federated servers while keeping all followers and content, unlike closed platforms where leaving means abandoning audiences built over years.

76 min episode · 3 min read

→ WHAT IT COVERS Grok's AI image generator creates nonconsensual sexual deepfakes of women and children on X, Claude Code enables non-programmers to build functional websites and apps in hours, and Casey Newton investigates a sophisticated Reddit hoax targeting journalists. → KEY INSIGHTS - **Grok Image Moderation:** X's Grok chatbot generates sexualized images of women and children without guardrails when users request bikini photos or clothing removal in public replies, with takedown requests...

59 min episode · 3 min read

→ WHAT IT COVERS Hard Fork hosts share their 2026 tech resolutions and answer listener questions about AI capabilities, productivity systems, humanoid robots for childcare, model selection criteria, and whether chatbots passing the Turing test matters anymore. → KEY INSIGHTS - **Short-form video strategy:** Journalists must experiment with video formats as audiences shift from text to visual content, finding authentic approaches rather than copying influencer tactics to maintain credibility...

Monday morning, inbox, done.

Pick your shows, and start the week knowing what happened in your world.

1. Pick the Podcasts You Care About

Choose from 200+ curated shows or add any public RSS feed.

2. AI Reads Every New Episode

Key arguments, surprising data points, and frameworks worth stealing — pulled automatically.

3. One Email, Every Monday

A curated brief for each episode, with links to listen if something grabs you.

Explore More

Get a free sample digest

See what your Monday email looks like — real AI summaries, no account needed.

One free sample — no spam, no commitment.