Machine Learning Street Talk

Your Brain is Running a Simulation Right Now [Max Bennett]

197 min episode · 2 min read

Topics

Psychology & Behavior

AI-Generated Summary

Key Takeaways

  • Perception as Inference: The brain does not perceive sensory input directly; it constructs models of reality and tests them against evidence. Visual illusions demonstrate this - in the classic duck-rabbit image you cannot see both animals at once, because the brain renders one simulation at a time, which is also why an illusion, once perceived, cannot be unseen.
  • Mental Simulation in Rats: Hippocampal place cells in rats activate not just at current locations but along potential future paths during decision-making pauses. Researchers observed rats imagining foregone choices in the orbitofrontal cortex after making regrettable decisions, demonstrating model-based reinforcement learning in simple mammals through measurable neural activity.
  • Agranular Prefrontal Cortex: Layer four of the neocortex, the main sensory-input layer, atrophies in the mammalian frontal cortex during development because this region primarily generates intentions rather than processing sensory input. This architectural difference supports active inference theory - the brain fits behavior to its model of goals rather than constantly updating goals based on sensory feedback.
  • Primate Social Intelligence: In primates, neocortex size correlates with social group size - a relationship not found in other mammals. Chimpanzees demonstrate theory of mind by distinguishing intentional from accidental actions, preferring experimenters who can see them, and engaging in multi-level deception - abilities that emerge from distinctively primate brain regions such as the granular prefrontal cortex.
  • Self-Supervision Principle: Transformers and the neocortex both achieve generalization through self-supervised learning - predicting masked or future inputs without explicit labels. This shared principle suggests that generative models trained on prediction naturally develop rich internal representations, though transformers lack the autonomous neuron-level agency present in biological systems.
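The self-supervision principle in the last bullet can be sketched in a few lines: the training target is just a held-out part of the input stream itself (here, the next observation), so no human-provided labels are needed. This is an illustrative toy with made-up data, not anything from the episode - a linear predictor stands in for the transformer or cortex.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sensory stream": each observation is a linear function of the
# previous one plus a little noise, so the future is predictable
# from the past.
W_true = np.array([[0.9, 0.1],
                   [-0.2, 0.8]])
xs = [rng.normal(size=2)]
for _ in range(500):
    xs.append(W_true @ xs[-1] + 0.01 * rng.normal(size=2))

X = np.stack(xs[:-1])  # inputs: observation at time t
Y = np.stack(xs[1:])   # targets: observation at time t+1 (the "masked" future)

# Self-supervised objective: predict the next input. Solved here in
# closed form by least squares; a transformer optimizes the same kind
# of loss by gradient descent.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The predictor recovers the structure of the world (W_hat.T ~ W_true)
# purely from prediction, with no labels.
print(np.round(W_hat.T, 1))
```

The point of the sketch is that a rich internal model (here, an estimate of the world's dynamics) falls out of the prediction objective alone - the shared principle the episode attributes to both transformers and the neocortex.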

What It Covers

Max Bennett explains how the brain evolved through five breakthroughs across 600 million years of evolution, from basic steering to mental simulation, revealing how the neocortex functions as a generative model that enables planning, imagination, and social cognition.


Notable Moment

Bennett describes how David Redish recorded rats literally imagining eating food they chose not to take, watching their orbitofrontal cortex activate for the foregone treat. This neural evidence of regret demonstrates that even simple mammals engage in counterfactual reasoning about alternative choices they could have made.
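The counterfactual computation described here can be sketched as a tiny model-based agent: after acting, it replays the foregone option through its learned world model and computes regret as the value difference. This is a hedged illustration, not Redish's actual analysis - the action names and reward values are invented.

```python
# Learned world model: action -> expected reward, as estimated from
# past experience. The numbers are made up for illustration.
model = {"left_feeder": 1.0, "right_feeder": 4.0}

def act_and_evaluate(chosen: str) -> float:
    """Take `chosen`, then mentally simulate the foregone actions and
    return the regret: best foregone value minus obtained value
    (clipped at zero, since a better-than-alternatives choice yields
    no regret)."""
    obtained = model[chosen]
    foregone = max(v for a, v in model.items() if a != chosen)
    return max(0.0, foregone - obtained)

print(act_and_evaluate("left_feeder"))   # → 3.0, regret after the worse choice
print(act_and_evaluate("right_feeder"))  # → 0.0, no regret after the better one
```

The key feature, mirroring the rat finding, is that evaluating the foregone option requires running the world model for an action that was never taken - a counterfactual simulation rather than a response to actual sensory input.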
