
Jennifer Groh

1 episode · 1 podcast

We have 1 summarized appearance for Jennifer Groh so far.

All Appearances

How Your Thoughts Are Built & How You Can Shape Them | Dr. Jennifer Groh

Huberman Lab · 137 min
Professor of Psychology and Neuroscience at Duke University

AI Summary

→ WHAT IT COVERS

Dr. Jennifer Groh explains how the brain integrates vision and hearing to create perception, how eye movements physically alter sound processing in the ears, and presents a theory that thoughts are sensory-motor simulations running across brain regions.

→ KEY INSIGHTS

- **Sound Localization Precision:** The brain detects timing differences between the ears as small as half a millisecond (shorter than a single neural action potential) to determine sound direction. This requires precise synaptic connections and coordinated neuron firing patterns to process information faster than individual cells can communicate.
- **Eye Movement-Ear Connection:** Eye movements physically move the eardrums through muscle contractions, creating measurable sounds in the ear canal. The two eardrums move in opposite directions like a wave, giving the brain information about eye position that helps it integrate visual and auditory spatial maps for accurate sound localization.
- **Thought as Sensory Simulation:** Thinking may involve running mini-simulations across sensory brain areas. When thinking about a cat, visual cortex simulates its appearance, auditory cortex simulates its sound, and olfactory areas may activate smell memories. This explains why talking impairs driving performance during difficult merges: both tasks compete for the same neural resources.
- **Developmental Sound Learning:** Infants must continuously relearn sound localization as their heads grow from half adult width to full size, changing the timing delays between the ears. The ear's physical folds also filter sound frequencies differently for each person, creating a unique spatial hearing fingerprint that requires individual calibration throughout development.
- **Acoustic Environment Shaping:** Sound bounces off multiple surfaces, creating delayed copies that arrive at different times, yet the brain integrates them into one coherent perception. Rooms with high ceilings and hard surfaces produce delays long enough to become perceivable echoes, which explains why Gregorian chant uses sustained notes rather than rapid transitions.

→ NOTABLE MOMENT

Groh describes an experiment in which students struggled to generate words unrelated to the current conversation topic. Despite having vocabularies of thirty thousand words and young brains, multiple students independently said "elephant" or "banana," demonstrating how deeply contextual constraints shape thought and limit apparent randomness in cognition.

💼 SPONSORS

- Lingo — hellolingo.com/huberman
- Wealthfront — wealthfront.com/huberman
- Helix Sleep — helixsleep.com/huberman
- AG1/AGZ — drinkagz.com/huberman
- Our Place — fromourplace.com/huberman

🏷️ Sensory Integration, Auditory Neuroscience, Cognitive Neuroscience, Attention Systems, Neural Mechanisms, Brain Development
