
Myra Deng

1 episode · 1 podcast

We have 1 summarized appearance for Myra Deng so far. Browse all podcasts to discover more episodes.

Featured On 1 Podcast

All Appearances

1 episode

AI Summary

→ WHAT IT COVERS

Goodfire AI announces a $150M Series B at a $1.25B valuation as the first frontier lab applying mechanistic interpretability research to production. Mark Bissell and Myra Deng discuss using sparse autoencoders and probes to understand model internals, steer behaviors surgically, and solve real-world problems ranging from PII detection at Rakuten to Alzheimer's biomarker discovery.

→ KEY INSIGHTS

- **Production Interpretability at Scale:** Goodfire deploys real-time steering on trillion-parameter models such as Qwen QwQ, demonstrating surgical control over specific behaviors through feature manipulation. Their forked SGLang codebase enables live activation steering during inference, showing that interpretability techniques can scale beyond toy models to frontier systems requiring full H100 nodes for deployment.
- **SAE Limitations in Practice:** Sparse autoencoders underperform raw-activation probes for detecting harmful behaviors, hallucinations, and PII in production scenarios. While SAEs excel on noisy synthetic datasets that require generalization, supervised probes trained directly on activations achieve better downstream performance when clean labeled data exists, suggesting unsupervised methods have specific optimal use cases.
- **Steering-Prompting Equivalence:** Research from Ekdeep Singh establishes a formal mathematical equivalence between activation steering and in-context learning. The framework predicts the exact steering magnitude needed to replicate a prompting effect, including jailbreaks via many-shot examples, enabling conversion between these interchangeable inference-time interventions for model control.
- **Post-Training Surgical Edits:** Interpretability enables targeted removal of unintended behaviors such as political bias or reward hacking without full retraining. Goodfire positions this as a step beyond crude reinforcement learning, which only provides a reward signal, toward expert feedback that surgically modifies specific model representations. The approach addresses issues like GPT-4o's sycophancy problems through precise internal adjustments.
- **Healthcare Biomarker Discovery:** A partnership with Mayo Clinic, the AHRQ Institute, and Prima Menta applies interpretability to genomics foundation models to identify novel Alzheimer's disease biomarkers. The technique extracts superhuman knowledge from narrow AI systems trained on medical imaging and genomic data, demonstrating interpretability as a scientific discovery tool beyond debugging or safety applications.
- **Rakuten PII Detection System:** A deployed token-level PII classifier, built from probes on language-model activations, processes all user queries daily. The system handles synthetic-to-real transfer learning, multilingual requirements across English and Japanese, and precise scrubbing without routing private data to downstream providers, solving a compliance problem that traditional guardrail models cannot address efficiently.

→ NOTABLE MOMENT

The live demonstration exposed the engineering challenges of interpretability at scale: the team's hastily assembled demo of steering a trillion-parameter model required custom infrastructure and proved fragile behind the scenes, highlighting that production interpretability demands solving both novel research problems and significant systems-engineering hurdles that academic toy models never encounter.

💼 SPONSORS
None detected

🏷️ Mechanistic Interpretability, Sparse Autoencoders, Model Steering, AI Safety, Healthcare AI, Post-Training
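The sparse autoencoders discussed above can be sketched in a few lines: an overcomplete ReLU encoder maps residual-stream activations to sparse feature codes, and a linear decoder reconstructs the activations from those codes. This is a minimal toy illustration, not Goodfire's implementation; the sizes are tiny, the weights are random, and a real SAE would be trained to minimize reconstruction error plus an L1 penalty on the codes.

```python
import numpy as np

# Toy sparse-autoencoder forward pass (illustrative stand-in, untrained).
rng = np.random.default_rng(2)
d_model, d_sae = 16, 64             # overcomplete dictionary: d_sae > d_model
W_enc = rng.normal(scale=0.1, size=(d_model, d_sae))
b_enc = -0.05 * np.ones(d_sae)      # negative bias nudges codes toward zero
W_dec = rng.normal(scale=0.1, size=(d_sae, d_model))

x = rng.normal(size=(4, d_model))               # batch of "activations"
codes = np.maximum(x @ W_enc + b_enc, 0.0)      # ReLU feature activations
recon = codes @ W_dec                           # reconstruction in model space

recon_loss = ((recon - x) ** 2).mean()          # training would minimize this...
l1_penalty = np.abs(codes).mean()               # ...plus this sparsity penalty
active_frac = (codes > 0).mean()
print(f"recon_loss={recon_loss:.3f}  l1={l1_penalty:.3f}  active_frac={active_frac:.2f}")
```

Interpretability work then treats each column of `W_dec` as a candidate "feature direction" in activation space, which is what makes the codes human-inspectable.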
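Activation steering, as described for the live demo, amounts to adding a scaled feature direction to a layer's residual-stream activations at inference time. The sketch below shows the arithmetic only; `feature_dir` and the toy activation tensor are hypothetical stand-ins, not Goodfire's API or their SGLang fork.

```python
import numpy as np

# Toy activation-steering sketch: shift activations along a feature direction.
rng = np.random.default_rng(1)
d_model = 8
feature_dir = rng.normal(size=d_model)
feature_dir /= np.linalg.norm(feature_dir)      # unit-norm feature direction

def steer(activations: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """Shift every token position's activation by alpha along `direction`."""
    return activations + alpha * direction

acts = rng.normal(size=(4, d_model))            # (seq_len, d_model) toy activations
steered = steer(acts, feature_dir, alpha=3.0)

# Projection onto the feature direction moves by exactly alpha at each position.
delta = (steered - acts) @ feature_dir
print(delta)
```

In a real serving stack this addition happens inside a forward hook during inference, and the steering-prompting equivalence result mentioned above concerns predicting which `alpha` reproduces the effect of a given prompt.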
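The supervised probes that the summary says outperform SAEs (and that power the Rakuten PII detector) can be sketched as a linear classifier fit directly on activations. Everything below is synthetic: real probes read activations from a language model, whereas here we fabricate them around a known "PII direction" purely to show the training-and-scoring loop.

```python
import numpy as np

# Toy linear probe on synthetic "activations" (illustrative, not a real detector).
rng = np.random.default_rng(0)
d = 64                                      # toy activation dimensionality
pii_dir = rng.normal(size=d)
pii_dir /= np.linalg.norm(pii_dir)

def make_batch(n: int, is_pii: bool) -> np.ndarray:
    """Gaussian activations, shifted along pii_dir for the positive class."""
    base = rng.normal(size=(n, d))
    return base + (2.0 * pii_dir if is_pii else 0.0)

X = np.vstack([make_batch(500, True), make_batch(500, False)])
y = np.concatenate([np.ones(500), np.zeros(500)])

# Ridge-regularized least-squares probe with +/-1 targets.
w = np.linalg.solve(X.T @ X + 1e-2 * np.eye(d), X.T @ (2 * y - 1))
preds = (X @ w > 0).astype(float)
accuracy = (preds == y).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

A token-level deployment like the one described would run such a probe per token position and scrub spans the probe flags, which is why clean labeled data makes this approach competitive with unsupervised SAE features.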
