Huberman Lab

How Your Thoughts Are Built & How You Can Shape Them | Dr. Jennifer Groh

136 min episode · 2 min read

AI-Generated Summary

Key Takeaways

  • Sound Localization Precision: The brain determines sound direction from timing differences between the two ears as small as half a millisecond, shorter than a single neural action potential. Doing so requires precise synaptic connections and coordinated firing across populations of neurons, so that the circuit as a whole resolves timing finer than any individual cell can signal (see the worked numbers after this list).
  • Eye Movement-Ear Connection: Eye movements physically move the eardrums through muscle contractions, creating measurable sounds in the ear canal. The two eardrums move in opposite directions, like a wave, giving the brain information about eye position that helps it align visual and auditory spatial maps for accurate sound localization.
  • Thought as Sensory Simulation: Thinking may involve running mini-simulations across sensory brain areas. When you think about a cat, visual cortex simulates its appearance, auditory cortex simulates its sound, and olfactory areas may activate smell memories. This explains why talking impairs driving performance during a difficult merge: both tasks compete for the same neural resources.
  • Developmental Sound Learning: Infants must continuously relearn sound localization as their heads grow from about half of adult width to full size, which changes the timing delays between the ears. The physical folds of each person's outer ear also filter sound frequencies in a unique way, creating a spatial-hearing fingerprint that must be individually calibrated throughout development.
  • Acoustic Environment Shaping: Sound bounces off multiple surfaces, creating delayed copies that arrive at different times, yet the brain integrates them into one coherent perception. Rooms with high ceilings and hard surfaces produce delays long enough to be heard as distinct echoes (quantified in the sketch below), which is why Gregorian chant favors sustained notes over rapid transitions.
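
To put rough numbers on the timing claims above, here is a minimal back-of-the-envelope sketch. It is not from the episode: it assumes the simple d·sin(θ)/c path-difference model for interaural time difference, an illustrative adult head width of 0.18 m (half that for an infant), and a commonly cited threshold of roughly 50 ms beyond which a reflection is heard as a separate echo rather than fused with the direct sound.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def interaural_time_difference(azimuth_deg, head_width_m=0.18):
    """Approximate ITD (seconds) for a distant source at a given azimuth.

    Uses the simple d*sin(theta)/c path-difference model; real heads
    diffract sound, so measured ITDs run somewhat larger. The 0.18 m
    head width is an illustrative assumption, not a figure from the episode.
    """
    theta = math.radians(azimuth_deg)
    return head_width_m * math.sin(theta) / SPEED_OF_SOUND

def echo_delay(extra_path_m):
    """Delay (seconds) of a reflection traveling extra_path_m farther
    than the direct sound."""
    return extra_path_m / SPEED_OF_SOUND

if __name__ == "__main__":
    # A source directly to one side (90 degrees) of an adult head:
    print(f"max adult ITD:   {interaural_time_difference(90) * 1e6:.0f} us")
    # The same source heard by an infant with a half-width head:
    print(f"max infant ITD:  {interaural_time_difference(90, 0.09) * 1e6:.0f} us")
    # A cathedral reflection with ~30 m of extra path length:
    print(f"cathedral echo:  {echo_delay(30.0) * 1e3:.0f} ms")
```

Under these assumptions, a source directly to one side yields a maximum ITD of about 525 µs, roughly the half millisecond cited in the first takeaway, and an infant's half-width head halves that delay, which is why localization must be relearned as the head grows. A cathedral reflection with 30 m of extra path arrives about 87 ms late, well past the echo threshold.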

What It Covers

Dr. Jennifer Groh explains how the brain integrates vision and hearing to create perception and how eye movements physically alter sound processing in the ears, and she presents a theory that thoughts are sensory-motor simulations running across brain regions.

Notable Moment

Groh describes a classroom experiment in which students struggled to generate words unrelated to the current conversation topic. Despite vocabularies of roughly thirty thousand words and young, flexible brains, multiple students independently said "elephant" or "banana," demonstrating how deeply contextual constraints shape thought and limit the apparent randomness of cognition.
