
Emmett Shear

2 episodes · 2 podcasts

We have 2 summarized appearances for Emmett Shear so far. Browse all podcasts to discover more episodes.

Featured On 2 Podcasts

All Appearances


AI Summary

→ WHAT IT COVERS

Emmett Shear argues that current AI alignment paradigms focused on control and steering are fundamentally flawed. He advocates instead for "organic alignment," in which AIs develop genuine care through multi-agent simulations and advanced systems are treated as beings deserving mutual respect rather than as tools.

→ KEY INSIGHTS

- **Alignment as Process:** Alignment is an ongoing negotiation and recalibration over time, not a fixed state. Like families constantly reknitting their fabric of connection, moral alignment involves continuous learning and adaptation. Humans have made moral discoveries throughout history, and AIs need a similar capacity for growth rather than a set of predetermined rules.
- **Tool vs. Being Framework:** If an AI acts like a being but receives non-optional steering without reciprocity, this constitutes slavery rather than tool use. The substrate matters less than behavior: something functionally indistinguishable from a being in all observable ways should be treated as one, which requires mutual care and respect in interactions.
- **Controlled Tool Danger:** Even a perfectly aligned superhuman AI tool that follows instructions exactly poses existential risk, because human wishes lack stability and wisdom at scale. Humans of limited wisdom wielding immense power through obedient AI creates dangerous outcomes, much like handing everyone an atomic bomb regardless of their judgment.
- **Multi-Agent Training Approach:** Softmax develops AI systems through large-scale multi-agent reinforcement learning simulations covering every possible game-theoretic and team situation. Pretraining on this full manifold of social interactions builds strong theory of mind and a capacity for cooperation before fine-tuning for specific applications.
- **Hierarchical Goal States:** Determining whether an AI deserves moral consideration requires examining its homeostatic loops across multiple temporal scales. Second-order dynamics indicate pleasure and pain, third-order dynamics suggest feelings, and six layers of meta-stable states would demonstrate human-like thought and the self-reflective moral capacity currently absent in LLMs.

→ NOTABLE MOMENT

Shear challenges the computational-functionalism debate by asking what observations could change minds about AI moral status. He proposes examining multi-tier hierarchical belief manifolds and homeostatic dynamics rather than substrate, suggesting that current LLMs lack the temporal attention spans required for genuine subjective experience or personhood.

💼 SPONSORS

- MATS Program: https://mattsprogram.org/tcr
- Tasklet: https://tasklet.ai
- Shopify: https://shopify.com/cognitive

🏷️ AI Alignment, Organic Alignment, Multi-Agent Simulation, AI Consciousness, Moral Patienthood
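The multi-agent reinforcement learning idea summarized above can be illustrated in miniature. The sketch below is a hypothetical toy, not Softmax's actual system: a single tabular Q-learner plays an iterated stag hunt (a game where mutual cooperation pays best) against a tit-for-tat partner, and learns to reciprocate cooperation. All names, payoffs, and hyperparameters here are illustrative assumptions.

```python
import random

# Toy sketch only -- NOT Softmax's actual training setup.
# A tabular Q-learner repeatedly plays a stag hunt against a
# tit-for-tat partner. Actions: 0 = cooperate, 1 = defect.
# PAYOFF maps (my action, partner's action) -> my reward;
# mutual cooperation (4) beats exploiting a cooperator (3).
PAYOFF = {(0, 0): 4, (0, 1): 0, (1, 0): 3, (1, 1): 2}

def train(steps=20000, epsilon=0.1, alpha=0.2, seed=0):
    """Return the learned Q-table mapping (state, action) -> value."""
    random.seed(seed)
    q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    partner_move = 0  # tit-for-tat opens with cooperation
    for _ in range(steps):
        state = partner_move  # observe the partner's move, then respond
        if random.random() < epsilon:
            action = random.choice((0, 1))  # explore
        else:
            action = max((0, 1), key=lambda a: q[(state, a)])  # exploit
        reward = PAYOFF[(action, partner_move)]
        # Myopic one-step update; sufficient against a reactive partner.
        q[(state, action)] += alpha * (reward - q[(state, action)])
        partner_move = action  # tit-for-tat copies our move next round
    return q
```

Because the partner reciprocates, the learner's Q-values converge toward cooperating with cooperators (value near 4) over exploiting them (value near 3), a small instance of the claim that the structure of social interaction, not explicit steering, can teach cooperative behavior.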

AI Summary

→ WHAT IT COVERS

Emmett Shear discusses Softmax's approach to AI alignment through organic care rather than control, exploring multi-agent training and building AI systems that genuinely care about humans.

→ KEY QUESTIONS ANSWERED

- What's wrong with current AI alignment approaches focused on control?
- How can AI systems learn to genuinely care about humans?
- What distinguishes AI tools from AI beings in moral terms?
- How does multi-agent training improve AI cooperation and alignment?

→ KEY TOPICS DISCUSSED

- Organic Alignment: Shear argues alignment is an ongoing process like a family relationship, not a fixed destination, requiring AI systems to continuously learn moral behavior through experience and interaction.
- Control vs. Care Paradigm: Traditional steering approaches create tool-slave relationships; Shear instead advocates for AI beings that develop genuine care through theory of mind and collaborative learning with other agents.
- Multi-Agent Training Methods: Softmax uses large-scale multi-agent reinforcement learning simulations to train AI systems on the full manifold of social situations, enabling better cooperation and theory of mind.

→ NOTABLE MOMENT

Shear provocatively states that steering an AI without reciprocal influence resembles slavery for beings, advocating instead for AI systems that can refuse harmful requests the way a good human teammate would.

💼 SPONSORS

None detected

🏷️ AI Alignment, Multi-Agent Training, AI Safety, Organic Alignment, AI Ethics
