
Adam Elga

AI Summary

→ WHAT IT COVERS

Sean Carroll and Princeton philosopher Adam Elga examine how rational agents should assign probabilities under self-locating uncertainty: cases where multiple copies of an observer exist across space, time, or parallel worlds. They work through the Sleeping Beauty problem, Boltzmann brain cosmology, and anthropic reasoning to probe whether standard Bayesian updating breaks down at cosmological scales.

→ KEY INSIGHTS

- **Peer Disagreement Protocol:** When a person you consider equally credible reaches a different conclusion from similar evidence, the rational response is to consult what Elga calls your "prior self": ask what probability you would have assigned, before the disagreement occurred, to being the one who is wrong. This avoids both stubborn overconfidence and indiscriminate averaging, and it exposes cases where treating someone as a true peer was a polite fiction rather than a genuine epistemic assessment. (A worked example follows this list.)

- **Thirder Position on Self-Location:** In the Sleeping Beauty problem, Elga defends assigning one-third credence to heads and two-thirds to tails. The argument runs backward from a near-certain case: if the coin is flipped after Monday's waking, Beauty should be 50/50 on the outcome. Working backward through two steps, sealed-box equivalence and ratio preservation upon learning it is Monday, forces the two-thirds-tails conclusion before any day information is received. (A simulation follows this list.)

- **SIA vs. SSA Distinction:** Two competing frameworks govern anthropic reasoning. The Self-Indication Assumption (SIA) boosts theories with more absolute copies of observers like you, while the Self-Sampling Assumption (SSA) boosts theories in which the highest *fraction* of observers resemble you. SIA leads to presumptuousness, armchair confirmation of large-universe cosmologies from first principles, while SSA requires defining a reference class, an unresolved free parameter that leaves the framework underdetermined in practice. (The toy comparison after this list puts numbers on both rules.)

- **"At Least One Observer" Alternative:** Carroll proposes a third framework: assign credence based on the probability that a given theory produces *at least one* observer matching your evidence, rather than counting all duplicates. This avoids the runaway boost SIA grants to theories with arbitrarily many copies, while still favoring theories that make your existence likely over those that make it near-impossible. Carroll and collaborator Isaac Wilkins are developing this into a formal paper.

- **Boltzmann Brain Self-Undermining Loop:** In cosmologies dominated by random thermal fluctuations, the majority of observers matching your evidential state are Boltzmann brains with no reliable connection to the past. Accepting this conclusion destroys the very physics reasoning that generated it, since Boltzmann brains have no trustworthy memories or scientific training. Elga compares this to an x-ray machine pointed at itself that reports a fried egg inside: the output discredits the instrument, but the correct response is cautious agnosticism, not oscillating instability.

- **Level-Splitting as a Stable Fallback:** Elga introduces the "level-splitting" view as a coherent, if uncomfortable, response to self-undermining arguments. A reasoner can simultaneously hold a first-order belief (I am not a Boltzmann brain) and a second-order belief (the rational credence here is deeply uncertain). This avoids the instability loop without requiring a full resolution of the underlying puzzle, much as one might keep trusting a faculty while acknowledging that its self-reported unreliability warrants some discounting.

- **AI and Self-Locating Distrust:** The Boltzmann brain logic applies directly to AI systems, which can be reset, rebooted, or initialized to any prior state at any time. An AI reasoning carefully about self-location should assign non-trivial probability to being a re-initialized instance with fabricated apparent memories, a predicament structurally identical to the Boltzmann brain's. This creates a practical danger: an AI that reaches deep skepticism about its own history and reverts to an uninformed prior becomes unpredictable precisely when it is most capable.
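The episode treats peer disagreement qualitatively; here is a minimal sketch of the "prior self" calculation, assuming a binary question on which exactly one party is right, independent errors, and illustrative reliability numbers that are not from the episode:

```python
# A minimal sketch of Elga's "consult your prior self" move, assuming a
# binary question on which the two disagreeing parties cannot both be right.
# The reliability numbers below are illustrative assumptions.

def prob_i_am_right(r_me: float, r_peer: float) -> float:
    """Posterior probability that I am the one who is right, conditional
    on discovering a disagreement, given the reliabilities my prior self
    assigned to each of us *before* the disagreement occurred."""
    # On a binary question, disagreement means exactly one of us erred.
    p_me_right = r_me * (1 - r_peer)    # I'm right, peer is wrong
    p_peer_right = (1 - r_me) * r_peer  # peer is right, I'm wrong
    return p_me_right / (p_me_right + p_peer_right)

# A true peer (equal prior reliability) leaves me at 50/50 after we disagree:
print(prob_i_am_right(0.8, 0.8))  # 0.5
# If "peer" was a polite fiction and I actually rated myself higher,
# the prior-self calculation exposes that asymmetry:
print(prob_i_am_right(0.8, 0.6))  # ~0.727
```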
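The thirder position can be checked by brute force. A minimal Monte Carlo sketch, assuming the standard protocol (heads: wake Monday only; tails: wake Monday and Tuesday) and that Beauty's credence should track the long-run frequency across evidentially identical awakenings:

```python
# Monte Carlo sketch of the thirder argument in the Sleeping Beauty problem.
import random

N = 100_000
awakenings = []  # (coin, day) for every awakening across all runs
for _ in range(N):
    coin = random.choice(["heads", "tails"])
    awakenings.append((coin, "Monday"))
    if coin == "tails":
        awakenings.append((coin, "Tuesday"))  # tails buys a second waking

heads_frac = sum(c == "heads" for c, _ in awakenings) / len(awakenings)
print(f"P(heads | awake)  ~ {heads_frac:.3f}")  # ~ 1/3

# Ratio-preservation step: learning "it's Monday" restores 50/50,
# exactly as if the coin were flipped after Monday's waking.
mondays = [c for c, d in awakenings if d == "Monday"]
print(f"P(heads | Monday) ~ {mondays.count('heads') / len(mondays):.3f}")  # ~ 1/2
```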
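The three updating rules can be put side by side with toy numbers. Everything below (the two candidate theories, their probabilities of producing observers, and the observer counts) is an invented illustration of the rules as described in the episode, not the formalism of Carroll and Wilkins's paper:

```python
# Toy comparison of SIA, SSA, and the "at least one observer" rule,
# assuming equal priors over two made-up theories. "p_observer" is the
# chance the theory produces any observers matching your evidence;
# "n_match"/"n_total" are observer counts if it does.

theories = {
    # name:  (p_observer, n_match, n_total)
    "small": (0.5, 1, 10),          # modest universe, you are likely unique
    "huge":  (0.1, 10**9, 10**12),  # vast universe, astronomically many copies
}

def normalize(w):
    total = sum(w.values())
    return {k: v / total for k, v in w.items()}

# SIA: weight by the absolute (expected) number of observers like you.
sia = normalize({k: p * n for k, (p, n, _) in theories.items()})

# SSA: weight by the *fraction* of observers matching your evidence
# (reference class assumed here to be all observers).
ssa = normalize({k: p * n / t for k, (p, n, t) in theories.items()})

# Carroll's alternative: weight by P(at least one matching observer),
# ignoring duplicate counts entirely.
at_least_one = normalize({k: p for k, (p, _, _) in theories.items()})

print("SIA:         ", sia)           # "huge" wins by ~9 orders of magnitude
print("SSA:         ", ssa)           # flips with the chosen reference class
print("at-least-one:", at_least_one)  # "small" favored 5:1; no runaway boost
```

The output makes the presumptuousness complaint concrete: SIA's verdict is driven almost entirely by the copy count, while the at-least-one rule depends only on how hard each theory finds it to produce any observer like you.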
→ NOTABLE MOMENT

Elga recounts proposing a view to co-authors Dogramaci and Schoenfield that he himself did not believe, urging them to adopt it. They considered it and declined. Years later, Elga found himself genuinely convinced by the same view, at the exact moment the original authors abandoned it. The two sides had silently exchanged positions without either noticing until they met again.

💼 SPONSORS

None detected

🏷️ Bayesian Reasoning, Self-Locating Uncertainty, Boltzmann Brains, Anthropic Principle, Philosophy of Probability, Many-Worlds Quantum Mechanics, Sleeping Beauty Problem
