
Ryan Kidd of MATS

1 episode
1 podcast

We have 1 summarized appearance for Ryan Kidd of MATS so far. Browse all podcasts to discover more episodes.

Featured On 1 Podcast

All Appearances

1 episode

AI Summary

→ WHAT IT COVERS

Ryan Kidd, co-executive director of the MATS AI safety mentorship program, discusses AGI timelines centered around 2033, the current state of AI alignment research, talent pipeline challenges, and how MATS develops researchers across empirical, policy, and theoretical tracks, with 446 alumni now working throughout the field.

→ KEY INSIGHTS

- **AGI Timeline Consensus:** Metaculus predicts strong AGI by mid-2033 based on adversarial Turing test criteria, while AI Futures forecasts 2030–2032 for automated coding and expert-level systems. A roughly 20% chance of AGI by 2028 calls for front-loaded safety preparation, even though median estimates suggest more time remains for technical research and policy implementation.
- **Deception Research Status:** Current models show proto-deceptive behaviors in structured evaluations but lack sustained consequentialist deception arising spontaneously through training. Warning shots appear gradually rather than suddenly, allowing time for control protocols and monitoring systems. Alignment faking and sophisticated deception emerge under specific conditions but remain detectable with proper oversight mechanisms.
- **Research Archetype Framework:** MATS identifies three talent types: connectors, who spawn new paradigms (like Buck Shlegeris with AI control); iterators, who advance empirical work and make up the majority of hiring needs; and amplifiers, who scale teams through management. Amplifiers will become the most in-demand as AI coding tools like Claude lower engineering barriers to entry.
- **Hiring Bar Reality:** Organizations struggle to fill positions despite available funding because candidates lack sufficient research experience and management potential. The median MATS fellow is 27, with 20% undergraduates and 15% PhDs. Successful applicants demonstrate tangible research outputs, strong coding ability, and references from trusted researchers rather than just theoretical knowledge.
- **Dual-Use Dilemma:** All safety research ultimately enhances capabilities, as RLHF demonstrated by making models useful enough to accelerate commercial deployment. The proposed solution is building alignment MVPs, minimum viable products that accelerate safety research differentially over capabilities, while lowering the alignment tax through technical solutions that regulators can mandate when political will emerges.

→ NOTABLE MOMENT

Kidd reveals that Anthropic's alignment science team grows at 3x annually while FAR AI doubles yearly, yet hiring managers report extreme difficulty finding qualified candidates despite available funding. The constraint has shifted from resources to finding people who can quickly become research leads and manage teams, fundamentally changing which skills matter most for breaking into AI safety careers.

💼 SPONSORS

- Tasklet (tasklet.ai)
- Shopify (shopify.com/cognitive)

🏷️ AI Safety Research, MATS Program, AGI Timelines, AI Alignment, Research Careers, Deception Detection
