Artificial Analysis: The Independent LLM Analysis House — with George Cameron and Micah Hill-Smith
Episode · 78 min
Read time · 2 min
Topics: Artificial Intelligence
AI-Generated Summary
Key Takeaways
- ✓ Independent Benchmarking Economics: Artificial Analysis runs evaluations costing hundreds to thousands of dollars monthly, using a mystery-shopper policy of unidentified accounts so labs cannot optimize the specific endpoints being tested. They maintain independence by never accepting payment for better rankings, monetizing instead through enterprise subscriptions and private benchmarking services.
- ✓ Intelligence Cost Deflation: GPT-4-level intelligence now costs 100-1000x less than at launch, yet total AI spending keeps rising. The paradox resolves because frontier models consume roughly 10x more tokens through reasoning chains and agentic workflows, producing a "smile curve" in which both cheap commodity intelligence and expensive frontier capability grow.
- ✓ Hallucination Measurement Innovation: The Omniscience Index scores models from -100 to +100, deducting points for incorrect answers rather than rewarding guesses. Claude models show the lowest hallucination rates at 15-20%, and intelligence level shows no correlation with hallucination tendency, revealing differences in post-training recipes between labs.
- ✓ Agentic Benchmark Methodology: GDP-VAL AA uses 220 sub-tasks across 44 white-collar job scenarios, running models through the team's open-source STIRRUP harness with code execution, web search, and context management. Models run in this custom harness outperform their official chatbot versions; results such as Gemini 3 Pro's are reported with 95% confidence intervals, which requires multiple evaluation runs.
- ✓ Hardware Efficiency Reality: Blackwell-generation GPUs deliver 2-3x throughput gains over Hopper for most workloads, not the marketed 4x, with actual improvements varying by model sparsity. Total parameter count correlates more strongly with knowledge retention than active parameters, suggesting sparse models like Kimi K2, at 3% activation, still benefit from larger total size.
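The Omniscience Index's penalize-wrong-answers idea can be sketched in a few lines. The episode summary doesn't specify the exact weighting Artificial Analysis uses, so the following is an illustrative assumption: a correct answer adds a point, an incorrect answer subtracts one, and an abstention ("I don't know") scores zero, normalized to the -100 to +100 range so guessing is punished rather than rewarded.

```python
def omniscience_index(correct: int, incorrect: int, abstained: int) -> float:
    """Illustrative scoring in the spirit described above (assumed weights):
    correct answers add points, incorrect answers subtract points, and
    abstentions score zero. Range: -100 (all wrong) to +100 (all correct).
    """
    total = correct + incorrect + abstained
    if total == 0:
        raise ValueError("no questions scored")
    return 100.0 * (correct - incorrect) / total

# A model that answers 60 questions correctly, gets 15 wrong,
# and abstains on the remaining 25:
print(omniscience_index(60, 15, 25))  # → 45.0
```

Under this scheme a model that guesses on everything it doesn't know scores worse than one that abstains, which is the behavioral incentive the index is designed to create.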
What It Covers
George Cameron and Micah Hill-Smith explain how Artificial Analysis became the independent benchmarking standard for AI models, covering their methodology for measuring intelligence, speed, cost, hallucination rates, and openness across hundreds of models and providers.
Notable Moment
The team revealed they ran DeepSeek V3 evaluations on Boxing Day 2024 in New Zealand, immediately recognizing it as a breakthrough moment before the world noticed weeks later with R1. Their early detection came from systematic tracking of global players beyond mainstream attention.