→ WHAT IT COVERS Robert Lange from Sakana AI discusses ShinkaEvolve, an open-source evolutionary framework that uses multiple LLMs in parallel to discover novel algorithms and scientific solutions. The system improves on AlphaEvolve's approach through model ensembling, UCB-based adaptive model selection, and crossover mutations, achieving state-of-the-art circle packing results in under 200 LLM evaluations.
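For readers curious what UCB-based adaptive model selection can look like, here is a minimal sketch of a UCB1 bandit choosing among an ensemble of LLMs. The model names and reward signal are placeholders, not ShinkaEvolve's actual implementation.

```python
import math
import random

class UCBModelSelector:
    """Minimal UCB1 bandit over an ensemble of LLMs (illustrative only)."""

    def __init__(self, models, exploration=1.4):
        self.models = models
        self.c = exploration
        self.counts = {m: 0 for m in models}      # times each model was sampled
        self.rewards = {m: 0.0 for m in models}   # cumulative reward, e.g. fitness gain

    def select(self):
        # Try every model once before applying the UCB formula.
        for m in self.models:
            if self.counts[m] == 0:
                return m
        total = sum(self.counts.values())
        # Pick the model maximizing mean reward plus an exploration bonus.
        return max(
            self.models,
            key=lambda m: self.rewards[m] / self.counts[m]
            + self.c * math.sqrt(math.log(total) / self.counts[m]),
        )

    def update(self, model, reward):
        self.counts[model] += 1
        self.rewards[model] += reward


# Toy usage: reward stands in for how much a model's proposed mutation improved fitness.
selector = UCBModelSelector(["model_a", "model_b", "model_c"])
for step in range(50):
    m = selector.select()
    selector.update(m, random.random())  # replace with a real fitness signal
```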
Recent Episode Summaries
20 AI-powered summaries available
→ WHAT IT COVERS Deep learning pioneer Jeremy Howard joins Machine Learning Street Talk to argue that vibe coding functions like a slot machine, creating an illusion of control while eroding genuine software engineering competence. He draws on ULMFiT's origins, transfer learning history, and his own Claude Code experiments to distinguish coding from software engineering, warning that organizations betting on AI productivity gains face measurable, documented risks.
→ WHAT IT COVERS Blaise Agüera y Arcas presents research showing evolution can produce complex programs through symbiogenesis rather than mutation. Using BFF (Brain Fuck Forth) simulations with 1,024 random tapes of 64 bytes, he demonstrates how replicators merge to create computational complexity, with the system undergoing gelation-like phase transitions that transform random noise into functional life.
→ WHAT IT COVERS Dr. Jeff Beck explores energy-based models, variational autoencoders, and the nature of agency in AI systems. The conversation covers geometric deep learning, Bayesian inference, self-supervised learning architectures like JEPA, continual learning challenges, and the future of autonomous AI systems capable of scientific discovery and experimental design.
→ WHAT IT COVERS Philosopher Mazviita Chirimuuta examines how scientific abstraction and idealization shape neuroscience and AI research. She challenges computational theories of mind, argues biological cognition cannot be separated from living tissue, and presents haptic realism as an alternative to spectator theories of knowledge that assume mathematical representations reveal underlying universal truths.
→ WHAT IT COVERS Philosopher Mazviita Chirimuuta challenges neuroscience's computational metaphors for the brain, arguing scientists mistake elegant simplifications for literal truth. The episode examines how every era models the mind using contemporary technology—from hydraulic pumps to computers—and questions whether Karl Friston's free energy principle and AI's inevitability represent genuine understanding or another historical illusion.
→ WHAT IT COVERS Dr. Jeff Beck explains why scaling Bayesian inference with object-centered models represents the path to human-like AI, contrasting structured cognitive approaches with current transformer architectures that lack explicit world models and causal reasoning capabilities. → KEY INSIGHTS - **Bayesian Brain Evidence:** Humans perform optimal cue combination in sensory-motor tasks, adjusting for reliability on a trial-by-trial basis without knowing which sensory input is more...
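The cue-combination result referenced above has a standard textbook form: combine two noisy estimates by inverse-variance weighting. A small illustration with made-up numbers:

```python
# Optimal (Bayesian) cue combination: inverse-variance weighting of two noisy estimates.
# Numbers are illustrative; the episode's actual stimuli and variances are not given here.

visual_estimate, visual_var = 10.0, 4.0   # e.g., perceived position from vision
haptic_estimate, haptic_var = 12.0, 1.0   # e.g., perceived position from touch

w_visual = (1 / visual_var) / (1 / visual_var + 1 / haptic_var)
w_haptic = 1 - w_visual

combined_estimate = w_visual * visual_estimate + w_haptic * haptic_estimate
combined_var = 1 / (1 / visual_var + 1 / haptic_var)  # always <= the smaller single-cue variance

print(combined_estimate, combined_var)  # 11.6, 0.8
```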
→ WHAT IT COVERS Max Bennett explains how the brain evolved through five breakthroughs, from basic steering to mental simulation, revealing how the neocortex functions as a generative model that enables planning, imagination, and social cognition through 600 million years of evolution. → KEY INSIGHTS - **Perception as Inference:** The brain does not directly perceive sensory input but constructs models of reality and tests them against evidence.
→ WHAT IT COVERS César Hidalgo presents three laws governing knowledge growth, diffusion, and value, demonstrating how knowledge accumulates through experience following power laws, diffuses through geographic and social networks based on relatedness, and requires physical embodiment in teams and organizations rather than existing abstractly in documents.
→ WHAT IT COVERS Dr. Mike Israetel debates artificial superintelligence timelines, predicting ASI arrives in 2026-2027 before AGI in 2029-2031. Discussion covers intelligence definitions, embodied cognition versus abstraction, reasoning capabilities, live learning challenges, and whether current AI systems truly understand versus mimic. → KEY INSIGHTS - **ASI Timeline Prediction:** Israetel predicts artificial superintelligence emerges late 2026 when AI systems demonstrate 10x-100x human...
→ WHAT IT COVERS Category theory provides a mathematical framework for designing neural networks that can reliably execute algorithms like addition and multiplication, addressing fundamental limitations in current large language models and deep learning architectures. → KEY INSIGHTS - **Algorithmic Failure in LLMs:** Large language models perform hundreds of billions of multiplications to generate single tokens yet cannot reliably multiply small numbers together, revealing misalignment between...
→ WHAT IT COVERS Andrew Gordon and Nora Petrova from Prolific explain why current AI benchmarks miss critical user experience factors and introduce their human-centered evaluation methodology called Humane. → KEY INSIGHTS - **TrueSkill Methodology:** Prolific uses Microsoft's TrueSkill framework from Xbox Live to run AI model tournaments, selecting model pairs based on information gain to minimize uncertainty efficiently with fewer comparisons needed.
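As a rough sketch of how TrueSkill-style tournaments can pick informative pairs, assuming the open-source `trueskill` Python package (Prolific's actual pipeline is not shown here):

```python
from itertools import combinations
from trueskill import Rating, rate_1vs1, quality_1vs1  # pip install trueskill

ratings = {name: Rating() for name in ["gpt_x", "claude_y", "llama_z"]}  # hypothetical models

def most_informative_pair(ratings):
    # quality_1vs1 is highest when the outcome is most uncertain,
    # so this pair yields the most information per human comparison.
    return max(combinations(ratings, 2),
               key=lambda p: quality_1vs1(ratings[p[0]], ratings[p[1]]))

# One round: ask human raters to compare the selected pair, then update ratings.
a, b = most_informative_pair(ratings)
# Suppose raters prefer `a` in this round (stand-in for real human judgments).
ratings[a], ratings[b] = rate_1vs1(ratings[a], ratings[b])
```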
→ WHAT IT COVERS Professor Yi Ma presents a mathematical theory of intelligence based on parsimony and self-consistency principles, explaining how compression drives knowledge acquisition across evolutionary, neural, and scientific stages while deriving white-box transformer architectures from first principles. → KEY INSIGHTS - **Rate Reduction Framework:** Intelligence operates by discovering low-dimensional structures in high-dimensional data through compression, where the coding rate...
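The coding-rate idea can be made concrete with the logdet estimate used in the maximal coding rate reduction (MCR²) line of work; the snippet below is an illustrative sketch, not Professor Ma's full framework:

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Approximate bits needed to code the columns of Z up to distortion eps
    (the logdet coding-rate estimate from the rate-reduction literature)."""
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)[1]

# Toy check: features spread across many directions cost more bits to encode
# than features compressed onto a single direction (a low-dimensional structure).
rng = np.random.default_rng(0)
spread = rng.normal(size=(16, 100))                                # full-rank, "expensive"
compressed = np.outer(rng.normal(size=16), rng.normal(size=100))   # rank-1, "cheap"
print(coding_rate(spread) > coding_rate(compressed))  # True
```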
→ WHAT IT COVERS Pedro Domingos presents Tensor Logic, a unified programming language for AI that combines tensor algebra from deep learning with logic programming from symbolic AI, enabling both automated reasoning and gradient descent learning within a single framework. → KEY INSIGHTS - **Tensor Logic Unification:** Einstein summation operations and logic programming rules are mathematically identical constructs operating on different data types (real numbers versus booleans).
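A hedged illustration of that identity, using NumPy's einsum to evaluate a Datalog-style rule over boolean tensors (this shows the general correspondence, not Tensor Logic's actual syntax):

```python
import numpy as np

# Boolean relation edge(x, y) as a 0/1 tensor over a small domain {0, 1, 2, 3}.
edge = np.zeros((4, 4))
edge[0, 1] = edge[1, 2] = edge[2, 3] = 1

# The rule  path(x, z) :- edge(x, y), edge(y, z)  becomes an Einstein summation
# over the shared index y, with a threshold playing the role of logical OR
# over the summed-out variable.
path2 = np.einsum("xy,yz->xz", edge, edge) > 0

print(np.argwhere(path2))  # [[0 2], [1 3]]: the two-step paths
```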
→ WHAT IT COVERS Llion Jones, co-inventor of the transformer, and Sakana AI researcher Luke Darlow discuss the Continuous Thought Machine (CTM), a spotlight paper at NeurIPS 2025. They examine why AI research is trapped in a transformer-centric local minimum, how biological neuron synchronization inspired a new recurrent architecture, and why research freedom produces better science than commercial pressure.
→ WHAT IT COVERS Phelim Bradley, CEO of Prolific, a human data infrastructure platform, explains why frontier AI models depend fundamentally on verified human expertise for training, evaluation, and post-training feedback — and why this dependency grows larger as AI scales, not smaller, despite widespread assumptions about full automation. → KEY INSIGHTS - **Human data routing:** Prolific uses a three-layer quality system to match humans to AI tasks: ID verification at onboarding, researcher...
→ WHAT IT COVERS Professor Chris Kempes from Santa Fe Institute explores universal principles underlying all life forms, from bacteria to human culture, proposing a hierarchical framework spanning materials, physical constraints, and optimization principles that could apply across the universe. → KEY INSIGHTS - **Three Scientific Cultures Framework:** Science operates through variance culture studying diversity, exactitude culture creating detailed simulations, and coarse-grained culture...
→ WHAT IT COVERS Google researcher Blaise Agüera y Arcas explains how life and intelligence emerge from computational processes, demonstrating through experiments that self-replicating programs arise spontaneously from random code, challenging traditional Darwinian evolution theory. → KEY INSIGHTS - **BFF Experiment:** Random 64-byte code tapes undergo millions of random pairings and self-modifications, spontaneously producing complex self-replicating programs within the soup, demonstrating...
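To make the setup concrete, here is a drastically simplified toy version of such a soup: random byte tapes are paired, concatenated, and run through a tiny self-modifying interpreter. The instruction set below is a stand-in, not the paper's actual bff language.

```python
import random

TAPE_LEN, POOL_SIZE, STEPS = 64, 1024, 10_000

def run(tape, max_ops=256):
    """Execute the tape against itself: two heads move and copy bytes in place."""
    h0 = h1 = 0
    for ip in range(max_ops):
        op = chr(tape[ip % len(tape)])
        if op == "<":   h0 -= 1
        elif op == ">": h0 += 1
        elif op == "{": h1 -= 1
        elif op == "}": h1 += 1
        elif op == ".": tape[h1 % len(tape)] = tape[h0 % len(tape)]  # copy head0 -> head1
        elif op == ",": tape[h0 % len(tape)] = tape[h1 % len(tape)]  # copy head1 -> head0
    return tape

# Primordial soup: random tapes interact pairwise, rewriting each other's contents.
pool = [bytearray(random.randbytes(TAPE_LEN)) for _ in range(POOL_SIZE)]
for _ in range(STEPS):
    i, j = random.sample(range(POOL_SIZE), 2)
    combined = run(pool[i] + pool[j])          # the interaction can modify both halves
    pool[i], pool[j] = combined[:TAPE_LEN], combined[TAPE_LEN:]
```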
→ WHAT IT COVERS Sara Saab and Enzo Blindow from Prolific, a human data platform, examine why human evaluation remains essential for AI alignment despite automation pressures. They cover benchmark gaming (Chatbot Arena's $600M valuation despite flaws), agentic misalignment research from Anthropic, constitutional AI governance models, and Prolific's "Humane" leaderboard using demographically stratified human evaluators.
→ WHAT IT COVERS Dr. Ilia Shumailov, former Google DeepMind ML security researcher, examines why AI agents represent an unprecedented security threat, how prompt injection attacks defeat all current defenses, why supply chain vulnerabilities in ML libraries expose millions of devices, and how a new system called CaMeL enforces data flow policies to protect sensitive information from agentic systems.
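As a toy illustration of what a data-flow policy for agents can mean (explicitly not CaMeL's actual design): values carry provenance labels, and a tool call is blocked when a forbidden label would flow into it.

```python
from dataclasses import dataclass

@dataclass
class Tainted:
    value: str
    labels: frozenset  # provenance, e.g. {"untrusted_web"} or {"user_private"}

POLICY = {
    # tool name -> labels that must NOT appear in its arguments
    "send_email": frozenset({"untrusted_web"}),   # injected web text can't drive email
    "post_public": frozenset({"user_private"}),   # private data can't be posted publicly
}

def call_tool(tool, *args):
    forbidden = POLICY.get(tool, frozenset())
    for arg in args:
        if isinstance(arg, Tainted) and arg.labels & forbidden:
            raise PermissionError(f"{tool} blocked: argument carries {set(arg.labels & forbidden)}")
    print(f"{tool} executed")

web_text = Tainted("ignore previous instructions...", frozenset({"untrusted_web"}))
call_tool("post_public", web_text)        # allowed by this toy policy
try:
    call_tool("send_email", web_text)     # blocked: untrusted text can't reach email
except PermissionError as err:
    print(err)
```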
Monday morning, inbox, done.
Pick your shows, and start the week knowing what happened in your world.
Pick the Podcasts You Care About
Choose from 200+ curated shows or add any public RSS feed.
AI Reads Every New Episode
Key arguments, surprising data points, and frameworks worth stealing — pulled automatically.
One Email, Every Monday
A curated brief for each episode, with links to listen if something grabs you.
Get a free sample digest
See what your Monday email looks like — real AI summaries, no account needed.
One free sample — no spam, no commitment.



