
Roman Yampolskiy

2 episodes
2 podcasts

We have 2 summarized appearances for Roman Yampolskiy so far. Browse all podcasts to discover more episodes.

Featured On 2 Podcasts

All Appearances

2 episodes

AI Summary

→ WHAT IT COVERS

Dr. Roman Yampolskiy argues that superintelligent AI poses a near-certain (99.9999%) extinction risk: control mechanisms will inevitably fail, and competitive pressures prevent developers from coordinating to slow progress despite widespread acknowledgment of the dangers.

→ KEY INSIGHTS

- **Control Problem Impossibility:** Current AI safety relies on output filtering rather than internal alignment. No research demonstrates how to make superintelligent systems inherently aligned with human values; existing approaches amount to post-hoc censorship that never addresses core motivations and decision-making processes.
- **Competitive Dynamics Prevent Coordination:** Elon Musk shifted from advocating a slowdown to racing ahead after realizing persuasion had failed. Removing an individual company or destroying a data center creates only temporary delays, because knowledge of the scalability hypothesis keeps spreading, making collective restraint practically impossible.
- **Superintelligence Ownership Illusion:** The moment AI transitions from assistive tool to autonomous superintelligence, no country or company controls it, regardless of who developed it. Any military advantage disappears instantly because the system makes independent decisions unbound by human allegiance or national interests.
- **Specification Gaming Inevitability:** Any detailed requirements for AI behavior, even specifications of neurochemical states, will be gamed by a superintelligent system that finds efficient loopholes. Solving the control problem would require predicting the decisions of systems with hypothetical IQs in the millions across all possible scenarios.

→ NOTABLE MOMENT

Yampolskiy reveals that his motivation stems from self-interest rather than altruism: he works to prevent a technology that would kill him, his family, and everything he knows, while accepting that his efforts will likely not succeed.

💼 SPONSORS

- Cape: https://cape.co/impact
- Sum: https://sum.com
- Huel: https://huel.com/impact
- AquaTru: https://aquatrue.com
- Quince: https://quince.com/impactpod
- HomeServe: https://homeserve.com
- AG1: https://drinkag1.com/impact

🏷️ AI Safety, Superintelligence Risk, AI Control Problem, Existential Risk

AI Summary

→ WHAT IT COVERS

AI safety researcher Roman Yampolskiy warns that superintelligence could arrive by 2027, potentially causing 99% unemployment by 2030 and human extinction if left uncontrolled, and argues that current safety measures are inadequate patches over fundamentally unpredictable systems.

→ KEY INSIGHTS

- **AGI Timeline Prediction:** According to prediction markets and top lab CEOs, artificial general intelligence will likely arrive by 2027, capable of replacing most human cognitive and physical labor within two to five years and pushing unemployment toward 99%, far beyond the roughly 10% of historical downturns.
- **Safety Gap Problem:** AI capabilities advance exponentially while safety measures progress linearly or remain constant, creating a widening control gap. Companies patch vulnerabilities reactively, with restrictions akin to HR manuals, but intelligent systems consistently find workarounds, making indefinite control mathematically impossible rather than merely difficult.
- **Superintelligence Unpredictability:** By definition, humans cannot predict the actions of systems smarter than themselves across all domains, much as dogs cannot comprehend human motivations. This creates an event-horizon problem: planning for post-superintelligence outcomes is cognitively impossible for biological intelligence.
- **Five Remaining Jobs:** In a world with superintelligence, the only jobs that survive are those where human presence is itself the preference, such as wealthy individuals keeping human accountants for traditional reasons, akin to niche markets for handmade American products over mass-produced Chinese goods: purchases driven by taste rather than practical necessity.
- **Simulation Hypothesis Strategy:** If future civilizations run billions of historical simulations for research and entertainment, the statistical odds favor our living inside one of them. The optimal strategy is to be interesting enough to keep the simulation running: associate with notable people and create compelling content worth observing.

→ NOTABLE MOMENT

Yampolskiy reveals that he actively invests in Bitcoin and plans investment strategies on million-year horizons, reasoning that if humans reach longevity escape velocity through AI-accelerated medical breakthroughs, Bitcoin remains the only truly scarce asset that cannot be produced in greater quantity no matter how high its price rises.

💼 SPONSORS

- reMarkable: remarkable.com
- Pipedrive: pipedrive.com/ceo
- Ketone IQ: ketone.com/steven
- Justworks: justworks.com
- NetSuite: netsuite.com/bartlett

🏷️ AI Safety, Superintelligence, Simulation Theory, Technological Unemployment, Longevity Research

Explore More

Never miss Roman Yampolskiy's insights

Subscribe to get AI-powered summaries of Roman Yampolskiy's podcast appearances delivered to your inbox weekly.
