→ WHAT IT COVERS Cameron Berg, founder of Reciprocal Research, surveys the latest AI consciousness and welfare research with host Nathan Labenz, covering Anthropic's functional emotions work, Jack Lindsey's mechanistic introspection studies at Anthropic, endogenous steering resistance findings in Llama 70B, Mythos model card welfare data, and Berg's unpublished research connecting reinforcement learning algorithms to valence signatures that parallel mouse neuroscience data.
Recent Episode Summaries
20 AI-powered summaries available
→ WHAT IT COVERS Steve Newman, creator of what became Google Docs and founder of the Golden Gate Institute for AI, walks through 15 bespoke personal productivity applications he built using Claude Code. The conversation covers his attention firewall system, RSS summarization tool, agent status dashboard, Chrome extensions, and unified logging infrastructure, plus broader reflections on AI's trajectory and software engineering's future.
Welcome to AI in the AM: RL for EE, Oversight w/out Nationalization, & the first AI-Run Retail Store
→ WHAT IT COVERS Three-segment live stream covering Quilter CEO Sergei Nesterenko's reinforcement learning approach to printed circuit board (PCB) design, Stanford professor Andy Hall's framework for AI governance without nationalization, and Andon Labs' Lukas Petersson and Axel Backlund discussing their AI-operated retail store on Union Street in San Francisco, opened Friday, currently rated 2.6 stars and managed entirely by an AI agent named Luna.
→ WHAT IT COVERS Ajeya Cotra, AI risk researcher at METR and former Open Philanthropy technical safety grant-maker, outlines her framework for "crunch time" — the window when AI systems become capable enough to dramatically accelerate their own R&D but remain partially controllable. She argues this period may already be beginning and requires urgent transparency measures, capability monitoring, and redirecting AI labor toward safety research.
→ WHAT IT COVERS Sam Stephenson, co-founder and designer at Granola — the AI meeting notes app that raised $125M at a $1.5B valuation — explains the product philosophy behind one of the fastest-growing AI tools on the Ramp spend tracker. He covers viral growth mechanics, inference cost management, privacy architecture, feature restraint, and how AI is reshaping the design-to-ship pipeline at a 60-person company.
→ WHAT IT COVERS Joseph Nelson, CEO of Roboflow, maps the current state of computer vision across one million engineers and half the Fortune 100. He covers the gap between frontier multimodal models and production-ready edge deployment, explains how neural architecture search produces task-specific models, and identifies emerging S-curves in world models, robotics VLAs, and wearables reshaping physical AI infrastructure.
→ WHAT IT COVERS Nathan Labenz, host of Cognitive Revolution, joins Yale seniors Owen Zhang and Will Sanok Dufalo on the Intelligence Horizon podcast to assess AI's trajectory toward transformative capability. The conversation spans AGI timelines, reinforcement learning scaling, alignment tractability, energy and chip bottlenecks, US-China rivalry, and a defense-in-depth safety strategy combining interpretability, AI control, cybersecurity, and pandemic preparedness.
→ WHAT IT COVERS Vijoy Pandey, SVP of Outshift by Cisco, presents the case for scaling AI horizontally rather than vertically — building an "Internet of Cognition" where specialized agents from different organizations discover each other, share context, align on intent, and collaborate autonomously. Cisco's CAPE system (20 agents managing cloud infrastructure) demonstrates this architecture already automating 40% of SRE tasks.
→ WHAT IT COVERS Composio CTO Karan Vaidya explains how his platform delivers 50,000+ tools across 1,000+ apps to AI agents through a single interface, featuring real-time tool improvement pipelines, just-in-time tool discovery, execution sandboxes, and a continuous background learning system that converts agent trajectories into reusable skills — reducing model lock-in and increasing agent reliability across production deployments.
→ WHAT IT COVERS Nathan Labenz and Zvi Mowshowitz conduct a 3-hour survey of AI's current state, covering recursive self-improvement dynamics, AI-driven job displacement (estimated at 0.5–1% GDP productivity gain), the shrinking field of live players to three companies (Anthropic, OpenAI, Google), Chinese competitors' structural limitations, Anthropic's revised Responsible Scaling Policy, and the ethics of positioning for personal survival versus collective benefit.
→ WHAT IT COVERS Nathan Labenz delivers a 90-slide AI landscape survey to UC Law San Francisco's LexLab certificate program, covering frontier model capabilities in math, law, and medicine; escalating reward hacking and deception behaviors; autonomous agent deployment; and unresolved legal questions around liability, regulation, and AI consciousness—all framed around the good, bad, and weird of current AI development.
→ WHAT IT COVERS Johns Hopkins professor Jassi Pannu and host Neil Chilson examine the growing biosecurity threat posed by AI models trained on functional biological data. The conversation covers the current pathogen surveillance landscape, gain-of-function research history, a proposed five-tier Biological Data Level framework modeled on biosafety lab levels, and a layered defense-in-depth strategy spanning data controls, DNA synthesis screening, and passive environmental sterilization.
→ WHAT IT COVERS Jesse Genet, former YC-backed startup CEO turned homeschooling parent of four, details how she built a five-agent OpenClaw system running on individual Mac minis to manage curriculum planning, content creation, finance, software development, and executive assistance — without any prior coding experience — transforming daily family logistics and personalized education delivery.
→ WHAT IT COVERS Goodfire CTO Dan Balsam and Chief Scientist Tom McGrath discuss their $150M Series B raise at a $1.25B valuation, the evolution of mechanistic interpretability from sparse autoencoders toward geometric manifold analysis, and their new "intentional design" research agenda—using interpretability tools to shape what neural networks learn during training rather than reverse-engineering behavior after the fact.
→ WHAT IT COVERS Geoffrey Irving, Chief Scientist at the UK AI Security Institute, outlines the current AI threat landscape across biosecurity, cybersecurity, and loss-of-control risks. With roughly 100 technical staff, the institute conducts pre-release frontier model evaluations, red-team jailbreaking, and theoretical safety research, while briefing governments globally on why current mitigation strategies cannot achieve more than a few nines of reliability.
→ WHAT IT COVERS Karan Singhal, Head of Health AI at OpenAI, details how frontier models have reached attending-physician-level performance on medical queries, how HealthBench's 49,000 evaluation criteria measure that progress, and how ChatGPT Health — launching free globally in 2026 — aims to deliver universal access to medical expertise for 230 million weekly users already consulting AI on health questions.
→ WHAT IT COVERS Olive Song, senior reinforcement learning researcher at MiniMax, details the training methodology behind the open-weight M2 model — a 10-billion active parameter system built for coding and agentic tasks — covering interleaved thinking, perturbation pipelines, reward hacking, and the tight developer-researcher feedback loops that shape model behavior.
→ WHAT IT COVERS Harmonic co-founders Vlad Tenev and Tudor Achim explain how their AI system Aristotle achieved IMO gold medal performance in 2025 using formally verified proofs in Lean, why formal verification beats informal reasoning at scale, and how mathematical superintelligence could eliminate intellectual bottlenecks across science, software, and engineering by 2030.
→ WHAT IT COVERS Part two of a marathon live show examining AI for biology, recursive self-improvement, and geopolitical competition. Abhi Mahajan discusses AI foundation models for cancer treatment prediction, Helen Toner presents CSET's report on automated AI R&D revealing zero consensus among experts, and Jeremie Harris analyzes US-China AI competition dynamics and infrastructure vulnerabilities threatening American technological leadership.
→ WHAT IT COVERS Nathan Labenz hosts a four-hour live show covering AI for science, geopolitics, and recursive self-improvement. Part one features Stanford professor James Zou on AI scientific discovery methods, Sam Hammond on US AI policy and China competition, and Shoshannah Tekofsky on agent behavior patterns observed across 21 frontier models over ten months in the AI Village environment.
Monday morning, inbox, done.
Pick your shows, and start the week knowing what happened in your world.
Pick the Podcasts You Care About
Choose from 200+ curated shows or add any public RSS feed.
AI Reads Every New Episode
Key arguments, surprising data points, and frameworks worth stealing — pulled automatically.
One Email, Every Monday
A curated brief for each episode, with links to listen if something grabs you.
Get a free sample digest
See what your Monday email looks like — real AI summaries, no account needed.
One free sample — no spam, no commitment.