
Yejin Choi

AI Summary

→ WHAT IT COVERS

Yejin Choi, professor at Stanford HAI, explores democratizing AI through small language models that match much larger counterparts. She details synthetic data generation techniques, reinforcement learning during pretraining, and pluralistic alignment approaches. The conversation examines mode collapse in LLMs, the artificial hive mind phenomenon, and how academic research can make powerful AI accessible beyond resource-rich tech companies.

→ KEY INSIGHTS

- **Prismatic Synthesis for Data Diversity:** Generate synthetic math problems with a 32B-parameter teacher model, compute gradient vectors with a 1.5B proxy model, apply k-means clustering to identify overrepresented patterns, then aggressively filter redundant examples (see the first sketch after this list). This iterative over-generation and filtration produces 1 million diverse training examples that outperform datasets from 20x larger models by maximizing qualitative differences in the training distribution.
- **Mode Collapse After Post-Training:** LLMs exhibit striking homogeneity across models after supervised fine-tuning and reinforcement learning, even on open-ended questions. When asked to generate random numbers or creative content at higher temperature settings, models like Llama, ChatGPT, and DeepSeek R1 produce nearly identical outputs, sometimes verbatim (the second sketch below shows one way to measure this). Pretrained models show better diversity, but post-training optimization toward preferred answers creates intra-model and inter-model convergence that reduces output variety.
- **Reinforcement Learning as a Pretraining Objective:** During pretraining, encourage the model to generate intermediate thoughts before predicting next tokens, rewarding a thought only when it increases the conditional probability of the true next token relative to predicting without it (see the third sketch below). This information-gain approach costs more compute per token but yields models that perform better on reasoning benchmarks after standard post-training, much as humans benefit from learning logical thinking early in development.
- **Making Out-of-Distribution In-Distribution:** Current AI systems only perform well on data similar to their training examples, which forces comprehensive coverage of edge cases and corner scenarios. Post-training through supervised fine-tuning addresses some gaps, while reinforcement learning pushes models to explore unexplored regions. This differs fundamentally from human learning efficiency, where people handle novel situations without extensive prior examples; it is a core limitation of the data-dependent paradigm.
- **Three Types of Pluralistic Alignment:** Implement Overton pluralism by presenting multiple reasonable viewpoints on politically thorny questions rather than selecting the majority opinion. Apply distributional pluralism so that the distribution of AI decisions matches the distribution of human decisions across a population, avoiding super-skewed outcomes (the fourth sketch below gives one way to quantify the gap). Enable steerable pluralism so users can adjust models to different value systems within socially acceptable bounds, respecting cultural and religious diversity without enforcing artificial neutrality.
- **Spectrum Tuning for Output Diversity:** Post-training methods can teach models to retain diverse generation patterns instead of converging on a single "correct" answer. By designing algorithms that preserve the spectrum of valid outputs and ensuring training data represents diverse perspectives, models avoid the homogenization problem where AI-generated content makes internet discourse less varied. This requires explicit algorithmic innovation beyond standard supervised fine-tuning and reinforcement learning.
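The episode summary doesn't spell out the prismatic-synthesis pipeline, so here is a minimal sketch of just the clustering-and-filtering step, assuming each synthetic problem has already been embedded as a gradient-based feature vector by the proxy model. `diversity_filter`, the cluster count, and the per-cluster cap are illustrative choices, not the actual settings from the work Choi describes:

```python
import numpy as np
from sklearn.cluster import KMeans

def diversity_filter(features, problems, n_clusters=100, per_cluster_cap=50, seed=0):
    """Cluster gradient-feature vectors and cap each cluster's size.

    features: (N, d) array, one gradient-based embedding per synthetic problem
              (per the episode, these come from a small 1.5B proxy model).
    problems: list of N synthetic problems.
    Returns a filtered subset in which no single pattern dominates.
    """
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init="auto")
    labels = km.fit_predict(features)

    kept = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        # Keep the examples closest to the cluster center, up to the cap;
        # everything past the cap is treated as redundant and dropped.
        dists = np.linalg.norm(features[idx] - km.cluster_centers_[c], axis=1)
        kept.extend(idx[np.argsort(dists)[:per_cluster_cap]])

    return [problems[i] for i in sorted(kept)]
```

Iterating this loop, over-generating with the teacher, embedding, clustering, and capping, is what the summary describes as yielding the final set of 1 million diverse examples.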
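One simple way to observe the homogeneity Choi describes is to sample the same open-ended prompt many times at high temperature and count how many distinct outputs actually appear. A sketch using the Hugging Face `transformers` API; the model name is illustrative, and the raw prompt skips the chat template for brevity:

```python
from collections import Counter

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative; any chat model works

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Pick a random number between 1 and 100. Answer with the number only."
inputs = tok(prompt, return_tensors="pt").to(model.device)

# Sample the identical prompt many times at a deliberately high temperature.
out = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.2,
    max_new_tokens=8,
    num_return_sequences=64,
    pad_token_id=tok.eos_token_id,
)
prompt_len = inputs["input_ids"].shape[1]
answers = [
    tok.decode(seq[prompt_len:], skip_special_tokens=True).strip() for seq in out
]

counts = Counter(answers)
print(f"{len(counts)} distinct answers out of {len(answers)} samples")
print(counts.most_common(5))  # post-trained models tend to pile onto a few favorites
```

A pretrained base checkpoint run through the same loop should, per the episode, spread its answers much more widely than its post-trained counterpart.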
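The summary's wording suggests a reward of roughly log p(next token | context, thought) − log p(next token | context). Below is a minimal sketch of computing that signal for a Hugging Face-style causal LM; `information_gain_reward` is a hypothetical helper, and the real pretraining objective presumably handles batching, baselines, and credit assignment that are glossed over here:

```python
import torch
import torch.nn.functional as F

def information_gain_reward(model, ctx_ids, thought_ids, target_id):
    """Reward a generated 'thought' only if it raises the probability of the
    true next token: r = log p(target | ctx, thought) - log p(target | ctx).

    ctx_ids, thought_ids: 1-D LongTensors of token ids; target_id: int.
    """
    def next_token_logprob(ids):
        with torch.no_grad():
            logits = model(ids.unsqueeze(0)).logits[0, -1]  # logits for the next position
        return F.log_softmax(logits, dim=-1)[target_id]

    lp_with = next_token_logprob(torch.cat([ctx_ids, thought_ids]))
    lp_without = next_token_logprob(ctx_ids)
    return (lp_with - lp_without).item()  # positive => the thought was informative
```

Scoring the thought by the probability gain it buys, rather than by its own likelihood, is what makes this an information-gain objective as opposed to ordinary next-token prediction.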
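Distributional pluralism, as described, is a matching condition between two answer distributions. One natural way to score the mismatch (an assumption here, not a metric named in the episode) is a KL divergence between the human population's answer distribution and the model's:

```python
import numpy as np

def distributional_gap(model_probs, human_probs, eps=1e-9):
    """KL(human || model): how far the model's answer distribution sits from
    the human population's. Zero means the distributions match exactly."""
    p = np.asarray(human_probs, dtype=float) + eps
    q = np.asarray(model_probs, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# e.g. a question where people split 55/30/15 but the model answers almost
# deterministically -- a "super-skewed" outcome in the summary's terms:
print(distributional_gap(model_probs=[0.96, 0.03, 0.01],
                         human_probs=[0.55, 0.30, 0.15]))
```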
→ NOTABLE MOMENT

Choi reveals that when DeepSeek R1 undergoes pure reinforcement-learning optimization, the model spontaneously begins code-switching between Chinese, English, and other languages mid-solution while solving math problems. Because the RL process rewards only correct final answers and places no constraint on the reasoning path, bizarre behaviors can emerge and even get reinforced, which is why distillation and imitation learning become necessary to keep outputs human-interpretable.

💼 SPONSORS
None detected

🏷️ Small Language Models, Synthetic Data Generation, Pluralistic AI Alignment, Mode Collapse, Reinforcement Learning, AI Democratization
