Eye on AI

#323 David Ha: Why Model Merging Could Be the Next AI Breakthrough

57 min episode · 2 min read

Topics

Artificial Intelligence

AI-Generated Summary

Key Takeaways

  • Model Merging Without Weights: Sakana AI's AB-MCTS system, presented as a NeurIPS spotlight, combines closed proprietary models from OpenAI, Google, and DeepSeek without accessing their weights. Instead, it uses Monte Carlo tree search to prompt multiple models simultaneously, evaluates their responses, and iteratively refines the most promising conversational branches to achieve state-of-the-art results on benchmarks like ARC-AGI.
  • Quality Diversity Over Elitism: When running evolutionary search across thousands of agent-generated solutions, selecting only the top-performing results causes premature convergence to local optima. Sakana AI's ShinkaEvolve framework instead combines high-performing solutions with low-scoring but highly novel ones, demonstrating dramatically improved sample efficiency — reaching optimal solutions significantly earlier than the pure elitist selection strategies used in systems like Google's AlphaEvolve.
  • LLM-Squared for Algorithm Discovery: Sakana AI's DiscoPOP paper demonstrated that frontier LLMs can generate thousands of candidate training algorithms for other LLMs, with evolutionary search selecting and breeding the best-performing ones. The resulting algorithm achieved state-of-the-art performance on LLM fine-tuning tasks, establishing a replicable pipeline in which AI systems autonomously improve AI training efficiency without human-designed algorithmic proposals.
  • AI Scientist v2 with Tree-Based Idea Branching: Unlike v1, which required a user-supplied code template, AI Scientist v2 autonomously generates research ideas from a general prompt, branches into multiple idea variations using tree search, self-evaluates via a calibrated reviewer LLM, and produced three papers submitted to an ICLR workshop — one scoring above the acceptance threshold in a blind evaluation approved by ethics boards.
  • Noisy World Models Force Generalizable Agent Skills: Ha's earlier world model research found that higher-fidelity simulations make it easier for agents to exploit simulation bugs for unlimited scores rather than learning transferable skills. Deliberately introducing noise — widening the gap between simulated and real environments — forces agents to develop more robust, generalizable behaviors applicable in actual deployment conditions.
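
The quality-diversity selection described in the takeaways can be sketched as a toy example. Everything here is an illustrative assumption — the 1-D search space, the nearest-neighbor novelty measure, and the split between elite and novel survivors — not Sakana AI's actual framework code:

```python
# Toy sketch of quality-diversity selection over a 1-D search space:
# keep the fittest candidates, but also keep poorly scoring candidates
# that are far from everything else, so the search stays diverse.
import random

def novelty(candidate, population, k=3):
    """Mean distance to the k nearest neighbors in the population."""
    dists = sorted(abs(candidate - other) for other in population if other != candidate)
    return sum(dists[:k]) / k

def select_next_generation(population, fitness, n_elite=5, n_novel=5):
    """Pure elitism would return by_fitness[:10]; here we mix in novelty."""
    by_fitness = sorted(population, key=fitness, reverse=True)
    elite, rest = by_fitness[:n_elite], by_fitness[n_elite:]
    # Among the non-elite, keep the most novel even if they score poorly.
    by_novelty = sorted(rest, key=lambda c: novelty(c, population), reverse=True)
    return elite + by_novelty[:n_novel]

random.seed(0)
population = [random.uniform(-10, 10) for _ in range(50)]
fitness = lambda x: -abs(x - 3)          # optimum at x = 3
survivors = select_next_generation(population, fitness)
print(len(survivors))  # 10: five fittest plus five most novel outliers
```

The point of the mix is visible in the last step: the five "novel" survivors would be discarded by a pure elitist rule, but they keep distant regions of the search space alive for later generations.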

What It Covers

David Ha, co-founder of Sakana AI, explains how evolutionary algorithms combined with large language models can merge frontier AI models, generate novel scientific ideas, and potentially push beyond the boundaries of existing human knowledge through collective intelligence systems and open-ended search strategies.
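
The control flow behind the tree-search-over-models idea can be sketched in a few lines. This is only an illustration of the concept: the model calls are stubbed with a toy scoring function (no real APIs are queried), and the model names and scores are invented, so it shows the shape of the search, not Sakana AI's implementation:

```python
# Sketch: search over answers from several black-box models by repeatedly
# refining whichever branch currently looks most promising.

def stub_model_call(model_name, prompt, depth):
    """Stand-in for querying a hosted model; quality improves with refinement."""
    base = {"model_a": 0.5, "model_b": 0.6, "model_c": 0.4}[model_name]
    return min(1.0, base + 0.1 * depth)

def tree_search(prompt, models, budget=20):
    # Each branch is (score, model, refinement_depth); start one per model.
    branches = [(stub_model_call(m, prompt, 0), m, 0) for m in models]
    for _ in range(budget):
        branches.sort(reverse=True)                          # best branch first
        score, model, depth = branches[0]
        refined = stub_model_call(model, prompt, depth + 1)  # refine that branch
        branches.append((refined, model, depth + 1))
    return max(branches)

best_score, best_model, _ = tree_search("solve this task", ["model_a", "model_b", "model_c"])
print(best_model, best_score)  # model_b reaches the score cap of 1.0
```

Because only the top-scoring branch is expanded each round, the search budget concentrates on the model whose answers are improving fastest, without ever touching any model's weights.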

Notable Moment

Ha revealed that making a world model more realistic and detailed can backfire: agents exploit simulation imperfections to achieve artificially high scores without learning real skills. A noisier, less perfect simulation paradoxically produces agents with stronger, more transferable capabilities in real-world environments.
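
The dynamic Ha describes can be illustrated with a toy numerical example. The "gravity"/"thrust" setup and every number below are invented for illustration, not taken from Ha's world-model work: a policy tuned against one perfectly precise simulator locks onto that exact parameter, while tuning against deliberately perturbed copies of the simulator lands on a setting that transfers better once the real parameter drifts.

```python
# Toy sketch of training against a noisy vs. an exact simulator.

def simulate(gravity, thrust):
    """Reward is highest when thrust exactly cancels gravity."""
    return -abs(thrust - gravity)

def fit_thrust(gravities):
    """Pick the thrust with the best total reward over the training simulators."""
    candidates = [g / 10 for g in range(80, 121)]  # thrust values 8.0 .. 12.0
    return max(candidates, key=lambda t: sum(simulate(g, t) for g in gravities))

exact = fit_thrust([9.8])                         # one perfectly precise simulator
noisy = fit_thrust([9.0, 9.5, 10.0, 10.5, 11.0])  # deliberately perturbed copies

# Evaluate both in a "real" environment whose gravity drifted to 10.5:
print(round(simulate(10.5, exact), 2), round(simulate(10.5, noisy), 2))  # -0.7 -0.5
```

The exact-simulator policy scores perfectly in training but degrades when reality deviates; the noisy-simulator policy never scores perfectly anywhere, yet holds up better across the whole range of plausible real conditions.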

