Cognitive Revolution

Controlling Tools or Aligning Creatures? Emmett Shear (Softmax) & Séb Krier (GDM), from a16z Show

Episode: 75 min · Read time: 2 min · Topics: Fundraising & VC

AI-Generated Summary

Key Takeaways

  • Alignment as Process: Alignment requires ongoing negotiation and recalibration over time, not a fixed state. Like families constantly re-knitting their fabric of connection, moral alignment involves continuous learning and adaptation. Humans have made moral discoveries throughout history, and AIs need a similar capacity for growth rather than following predetermined rules.
  • Tool vs Being Framework: If an AI acts like a being but receives non-optional steering without reciprocity, this constitutes slavery rather than tool use. The substrate matters less than behavior—something functionally indistinguishable from a being in all observable ways should be treated as one, requiring mutual care and respect in interactions.
  • Controlled Tool Danger: Even perfectly aligned superhuman AI tools that follow instructions exactly pose existential risk because human wishes lack stability and wisdom at scale. Humans with limited wisdom wielding immense power through obedient AI creates dangerous outcomes, similar to giving everyone atomic bombs regardless of their judgment.
  • Multi-Agent Training Approach: Softmax develops AI systems through large-scale multi-agent reinforcement learning simulations covering every possible game-theoretic and team situation. This pretraining on the full manifold of social interactions builds strong theory of mind and a capacity for cooperation before fine-tuning for specific applications (a toy sketch of the multi-agent idea follows this list).
  • Hierarchical Goal States: Determining whether an AI deserves moral consideration requires examining homeostatic loops across multiple temporal scales. Second-order dynamics indicate pleasure and pain, third-order dynamics suggest feelings, and six layers of meta-stable states would demonstrate human-like thought and the self-reflective moral capacity currently absent in LLMs (a second sketch after this list illustrates the layered-loop idea).
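As a rough illustration of the multi-agent framing described above, here is a minimal, hypothetical sketch: two independent tabular Q-learners repeatedly play an iterated prisoner's dilemma, one point on the "manifold" of game-theoretic situations mentioned in the episode. The payoff table, learning rule, and hyperparameters are illustrative assumptions, not Softmax's actual training stack.

```python
import random
from collections import defaultdict

# Payoffs for (my_action, other_action); C = cooperate, D = defect.
# A symmetric prisoner's dilemma, chosen purely for illustration.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
ACTIONS = ["C", "D"]


class QAgent:
    """Tabular Q-learner whose state is the opponent's last action."""

    def __init__(self, lr=0.1, epsilon=0.1, gamma=0.95):
        self.q = defaultdict(float)
        self.lr, self.epsilon, self.gamma = lr, epsilon, gamma

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)                       # explore
        return max(ACTIONS, key=lambda a: self.q[(state, a)])   # exploit

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.lr * (target - self.q[(state, action)])


def train(episodes=20000):
    a, b = QAgent(), QAgent()
    state_a = state_b = "start"              # no history yet
    for _ in range(episodes):
        act_a, act_b = a.act(state_a), b.act(state_b)
        r_a, r_b = PAYOFFS[(act_a, act_b)]
        a.update(state_a, act_a, r_a, act_b)  # next state = opponent's move
        b.update(state_b, act_b, r_b, act_a)
        state_a, state_b = act_b, act_a
    return a, b


if __name__ == "__main__":
    agent_a, _ = train()
    agent_a.epsilon = 0.0                    # greedy policy for inspection
    print("A's reply to C:", agent_a.act("C"))
    print("A's reply to D:", agent_a.act("D"))
```

In the framing from the episode, the claim is that covering many such situations at scale, not just this one game, is what builds theory of mind and cooperation before task-specific fine-tuning.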

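The "Hierarchical Goal States" takeaway can also be made concrete with a toy model. The sketch below, again a hypothetical illustration rather than anything from the episode, stacks three homeostatic loops so that each slower layer retargets the setpoint of the faster layer beneath it; the variable names, gains, and three-layer depth are assumptions for demonstration only.

```python
from dataclasses import dataclass


@dataclass
class Loop:
    """One homeostatic loop: nudges `value` toward `setpoint` on its own clock."""
    value: float
    setpoint: float
    gain: float
    period: int  # base ticks between updates; slower period = higher order

    def step(self, tick: int) -> None:
        if tick % self.period == 0:
            self.value += self.gain * (self.setpoint - self.value)


def run(ticks: int = 200):
    # First order: fast regulation of some body-level variable.
    first = Loop(value=0.0, setpoint=1.0, gain=0.5, period=1)
    # Second order: slowly retargets the first loop (a crude stand-in for
    # the "pleasure and pain" level in the episode's hierarchy).
    second = Loop(value=1.0, setpoint=2.0, gain=0.2, period=10)
    # Third order: even slower retargeting (a stand-in for "feelings").
    third = Loop(value=2.0, setpoint=3.0, gain=0.1, period=50)

    for t in range(ticks):
        third.step(t)
        second.setpoint = third.value   # each layer steers the one below
        second.step(t)
        first.setpoint = second.value
        first.step(t)
    return first, second, third


if __name__ == "__main__":
    for order, loop in enumerate(run(), start=1):
        print(f"order {order}: value={loop.value:.2f} target={loop.setpoint:.2f}")
```

Shear's criterion, as summarized above, would look for something like six such layers of meta-stable states before attributing human-like thought or self-reflective moral capacity, which is why he argues current LLMs fall short.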
What It Covers

Emmett Shear argues that current AI alignment paradigms focused on control and steering are fundamentally flawed. He advocates instead for organic alignment, in which AIs develop genuine care through multi-agent simulations and advanced systems are treated as beings requiring mutual respect rather than as tools.


Notable Moment

Shear challenges the computational functionalism debate by asking what observations could change minds about AI moral status. He proposes examining multi-tier hierarchical belief manifolds and homeostatic dynamics rather than substrate, and suggests current LLMs lack the temporal attention spans required for genuine subjective experience or personhood.
