[State of Code Evals] After SWE-bench, Code Clash & SOTA Coding Benchmarks recap — John Yang
Episode: 17 min · Read time: 2 min
Topics: Fundraising & VC, Software Development
AI-Generated Summary
Key Takeaways
- ✓ SWE-bench Extensions: The benchmark expanded beyond its Django-heavy Python origins to multilingual support spanning nine languages (including JavaScript, Rust, Java, C, and Ruby) and 40 repositories, plus multimodal capabilities. Independent teams created variants like SWE-bench Pro without the original authors' involvement, a sign of how widely the benchmark has been adopted.
- ✓ Code Clash Framework: This new evaluation method replaces unit tests with programming tournaments in which two or more language models each maintain their own codebase, iteratively improving it every round before competing head-to-head in arenas. Models must demonstrate long-horizon development skill, making consequential, interdependent changes rather than completing isolated tasks.
- ✓ Benchmark Diversification: New domain-specific benchmarks have emerged, including SWE-ficiency for code optimization without behavior changes, SciCode for scientific computing, SEC-bench for security, and SRE-bench for operations. Each targets a specific coding domain beyond general software engineering, enabling more targeted model evaluation and development.
- ✓ Academic Data Limitations: Academic researchers lack access to the user interaction data that companies like Cognition and Cursor collect naturally through product usage. Building a compelling product and building a realistic user simulator are both hard problems, which limits academic progress on human-AI collaboration research relative to industry.
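The Code Clash tournament structure described above can be sketched as a simple round loop. This is a hypothetical illustration, not the framework's actual API: `improve` and `compete` stand in for the real LLM edit step and arena match, and the placeholder scoring rule is invented for the example.

```python
def improve(model: str, codebase: str) -> str:
    # Stand-in for an LLM's long-horizon edit step: in the real framework,
    # the model would modify its persistent codebase each round.
    return codebase + f"[{model} edit]"

def compete(codebases: dict[str, str]) -> str:
    # Stand-in arena match: longest codebase "wins" (placeholder rule;
    # the real arena pits the programs against each other).
    return max(codebases, key=lambda m: len(codebases[m]))

def run_tournament(models: list[str], rounds: int) -> dict[str, int]:
    """Each model maintains its own codebase across rounds: improve first,
    then face off in the arena. Changes accumulate, so early decisions
    constrain later rounds, which is the long-horizon skill being tested."""
    codebases = {m: "" for m in models}   # per-model persistent state
    scores = {m: 0 for m in models}
    for _ in range(rounds):
        for m in models:
            codebases[m] = improve(m, codebases[m])
        scores[compete(codebases)] += 1
    return scores
```

The key contrast with a unit-test benchmark is that state persists between rounds, so the evaluation rewards coherent iterative development rather than one-shot patch generation.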
What It Covers
John Yang discusses the evolution of SWE-bench since its October 2023 launch, including multilingual extensions across nine languages, the new Code Clash tournament framework for evaluating long-horizon development, and open challenges in coding evaluation methodology.
Notable Moment
Yang reveals that when Cognition released Devin with strong SWE-bench results, he received only two weeks' advance notice via email. The release sparked an industry arms race in coding benchmarks, transforming SWE-bench from a little-used academic project into a central evaluation standard.