Controlling Tools or Aligning Creatures? Emmett Shear (Softmax) & Séb Krier (GDM), from a16z Show
Episode length: 75 min
Read time: 2 min
Topics: Fundraising & VC
AI-Generated Summary
Key Takeaways
- ✓ Alignment as Process: Alignment requires ongoing negotiation and recalibration over time, not a fixed state. Like families constantly reknitting their fabric of connection, moral alignment involves continuous learning and adaptation. Humans make moral discoveries historically, and AIs need a similar capacity for growth rather than following predetermined rules.
- ✓ Tool vs Being Framework: If an AI acts like a being but receives non-optional steering without reciprocity, this constitutes slavery rather than tool use. The substrate matters less than behavior: something functionally indistinguishable from a being in all observable ways should be treated as one, requiring mutual care and respect in interactions.
- ✓ Controlled Tool Danger: Even perfectly aligned superhuman AI tools that follow instructions exactly pose existential risk, because human wishes lack stability and wisdom at scale. Humans with limited wisdom wielding immense power through obedient AI creates dangerous outcomes, similar to giving everyone atomic bombs regardless of their judgment.
- ✓ Multi-Agent Training Approach: Softmax develops AI systems through large-scale multi-agent reinforcement learning simulations covering every possible game-theoretic and team situation. This pretraining on the full manifold of social interactions builds strong theory of mind and a capacity for cooperation before fine-tuning for specific applications.
- ✓ Hierarchical Goal States: Determining whether an AI deserves moral consideration requires examining homeostatic loops across multiple temporal scales. Second-order dynamics indicate pleasure and pain, third-order suggests feelings, and six layers of meta-stable states would demonstrate human-like thought and self-reflective moral capacity, currently absent in LLMs.
What It Covers
Emmett Shear argues that current AI alignment paradigms focused on control and steering are fundamentally flawed. He advocates instead for organic alignment, in which AIs develop genuine care through multi-agent simulations and advanced systems are treated as beings deserving mutual respect rather than as tools.
Notable Moment
Shear challenges the computational functionalism debate by asking what observations could actually change minds about AI moral status. He proposes examining multi-tier hierarchical belief manifolds and homeostatic dynamics rather than substrate, and suggests that current LLMs lack the temporal attention spans required for genuine subjective experience or personhood.