#323 David Ha: Why Model Merging Could Be the Next AI Breakthrough
Episode: 57 min · Read time: 2 min · Topics: Artificial Intelligence
AI-Generated Summary
Key Takeaways
- Model Merging Without Weights: Sakana AI's AB-MCTS system, presented as a NeurIPS spotlight, combines frontier models from OpenAI, Google, and DeepSeek without accessing their weights. Instead, it uses Monte Carlo tree search to prompt multiple models simultaneously, evaluate their responses, and iteratively refine the most promising conversational branches, achieving state-of-the-art results on benchmarks like ARC-AGI (see the first sketch after this list).
- Quality Diversity Over Elitism: When running evolutionary search across thousands of agent-generated solutions, selecting only the top performers causes premature convergence to local optima. Sakana AI's ShinkaEvolve framework instead breeds high-performing solutions together with low-scoring but highly novel ones, dramatically improving sample efficiency and reaching optimal solutions far earlier than the purely elitist selection used in systems like Google's AlphaEvolve (see the second sketch after this list).
- LLM-Squared for Algorithm Discovery: Sakana AI's DiscoPOP paper demonstrated that frontier LLMs can generate thousands of candidate training algorithms for other LLMs, with evolutionary search selecting and breeding the best performers. The discovered algorithm achieved state-of-the-art results on LLM fine-tuning tasks, establishing a replicable pipeline in which AI systems autonomously improve AI training without human-designed algorithmic proposals (see the third sketch after this list).
- AI Scientist v2 with Tree-Based Idea Branching: Unlike v1, which required a user-supplied code template, AI Scientist v2 generates research ideas autonomously from a general prompt, branches into multiple idea variations via tree search, and self-evaluates with a calibrated reviewer LLM. It produced three papers submitted to an ICLR workshop, one of which scored above the acceptance threshold in a blind, ethics-board-approved evaluation.
- Noisy World Models Force Generalizable Agent Skills: Ha's earlier world-model research found that higher-fidelity simulations make it easier for agents to exploit simulation bugs for unbounded scores rather than learning transferable skills. Deliberately injecting noise, which widens the gap between simulated and real environments, forces agents to develop more robust, generalizable behaviors that hold up in real deployment (see the final sketch after this list).
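To make the tree-search idea in the first takeaway concrete, here is a minimal Python sketch assuming a pool of callable models, a task-specific `score` function, and a UCB-style selection rule. The model stubs and the random scorer are placeholders, not Sakana AI's actual AB-MCTS implementation.

```python
import math
import random

# Hypothetical stand-ins for calls to frontier-model APIs; a real system
# would wrap OpenAI, Google, and DeepSeek endpoints here.
MODELS = {
    "model_a": lambda prompt: prompt + " [refined by model_a]",
    "model_b": lambda prompt: prompt + " [refined by model_b]",
}

def score(answer: str) -> float:
    """Hypothetical evaluator; a real system scores answers on the task."""
    return random.random()

def multi_model_tree_search(task: str, iterations: int = 50) -> str:
    """Toy sketch: keep a tree of candidate answers, pick the most
    promising one with a UCB-style rule, and let every model propose a
    refinement of it. (A full MCTS would also back-propagate scores.)"""
    nodes = [[task, 0.0, 1]]  # each node: [answer, total_score, visits]
    for t in range(1, iterations + 1):
        # Select the node with the best mean score plus exploration bonus.
        best = max(
            nodes,
            key=lambda n: n[1] / n[2] + math.sqrt(2.0 * math.log(t) / n[2]),
        )
        best[2] += 1  # count the visit
        # Expand: every model proposes a refinement of the chosen answer.
        for generate in MODELS.values():
            child = generate(best[0])
            nodes.append([child, score(child), 1])
    # Return the answer with the best mean score.
    return max(nodes, key=lambda n: n[1] / n[2])[0]
```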
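The quality-diversity selection from the second takeaway reduces to a few lines. This toy version works on a 1-D problem; the `novelty` metric and the elite/novel split sizes are illustrative assumptions, not ShinkaEvolve's real code, which operates over LLM-generated programs.

```python
import random

def novelty(candidate: float, population: list, k: int = 5) -> float:
    """Mean distance to the k nearest neighbours; a real system would
    compare solutions in an embedding space, not on a number line."""
    dists = sorted(abs(candidate - other) for other in population)
    return sum(dists[1 : k + 1]) / k  # dists[0] is the candidate itself

def select_parents(population, fitness, n_elite=5, n_novel=5):
    """Keep the top scorers, but also keep low-scoring candidates that
    are unusually novel, so search does not collapse into one basin."""
    ranked = sorted(population, key=fitness, reverse=True)
    elites, rest = ranked[:n_elite], ranked[n_elite:]
    most_novel = sorted(rest, key=lambda c: novelty(c, population), reverse=True)
    return elites + most_novel[:n_novel]

# Usage on a toy objective with its optimum at x = 3.
population = [random.uniform(-10.0, 10.0) for _ in range(100)]
parents = select_parents(population, fitness=lambda x: -(x - 3.0) ** 2)
```

The design point is the second half of the return value: a purely elitist strategy would stop at `ranked[:n_elite]` and discard exactly the outliers that keep the search exploring.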
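The LLM-squared pipeline from the third takeaway is a propose-evaluate-breed loop. In this hypothetical sketch the "LLM" is a fixed template sampler and fitness is a toy regression loss; in DiscoPOP both stand-ins were real LLM calls and real fine-tuning runs.

```python
import random

def propose_algorithms(n: int) -> list:
    """Stand-in for asking a frontier LLM to write candidate loss
    functions; here we only sample from fixed templates."""
    templates = [
        "def loss(pred, target): return (pred - target) ** 2",
        "def loss(pred, target): return abs(pred - target)",
        "def loss(pred, target): return (pred - target) ** 2 + 0.1 * abs(pred)",
    ]
    return [random.choice(templates) for _ in range(n)]

def fitness(code: str) -> float:
    """Hypothetical fitness: compile the candidate and score it on a toy
    regression task (lower total loss means higher fitness)."""
    scope = {}
    exec(code, scope)  # safe only because the toy strings are trusted
    loss = scope["loss"]
    data = [(0.9, 1.0), (0.2, 0.0), (0.6, 1.0)]
    return -sum(loss(p, t) for p, t in data)

def evolve(generations: int = 10, pop_size: int = 20) -> str:
    population = propose_algorithms(pop_size)
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # "Breeding" is where a real pipeline would ask the LLM to mutate
        # and recombine the survivors; here we simply re-sample.
        population = survivors + propose_algorithms(pop_size - len(survivors))
    return max(population, key=fitness)
```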
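Finally, the world-model finding amounts to deliberate domain randomization: corrupt what the agent observes so it cannot overfit to simulator quirks. A minimal wrapper might look like the following; the `reset()`/`step()` interface and Gaussian observation noise are illustrative assumptions, not Ha's original setup.

```python
import random

class NoisyEnv:
    """Wrap any simulator exposing reset()/step(action) and corrupt its
    observations, widening the sim-to-real gap on purpose."""

    def __init__(self, env, noise_std: float = 0.1):
        self.env = env
        self.noise_std = noise_std

    def _corrupt(self, obs):
        # Add Gaussian noise to every observation dimension.
        return [x + random.gauss(0.0, self.noise_std) for x in obs]

    def reset(self):
        return self._corrupt(self.env.reset())

    def step(self, action):
        obs, reward, done = self.env.step(action)
        return self._corrupt(obs), reward, done
```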
What It Covers
David Ha, co-founder of Sakana AI, explains how evolutionary algorithms combined with large language models can merge frontier AI models, generate novel scientific ideas, and potentially push beyond the boundaries of existing human knowledge through collective intelligence systems and open-ended search strategies.
Notable Moment
Ha revealed that making a world model more realistic and detailed can backfire: agents exploit simulation imperfections to achieve artificially high scores without learning real skills. A noisier, less perfect simulation paradoxically produces agents with stronger, more transferable capabilities in real-world environments.