
Recent Episode Summaries

20 AI-powered summaries available

46 min episode · 3 min read

→ WHAT IT COVERS IBM Research India Director Amith Singhee examines why India has lagged in AI development despite abundant engineering talent, what conditions must converge for India to compete globally, and how IBM's enterprise-focused AI research — spanning hybrid cloud deployment, Granite LLMs, COBOL modernization, and agentic systems — addresses real-world business constraints.

51 min episode · 3 min read

→ WHAT IT COVERS Debdas Sen, CEO of TCG Digital, explains how his firm deploys hybrid AI combining proprietary knowledge graphs, enterprise data, and external LLMs to solve high-stakes industrial problems in energy and life sciences, arguing that AI without measurable ROI risks repeating the collapse seen after the 1990s hype cycle. → KEY INSIGHTS - **ROI threshold as project filter:** TCG Digital applies a 10x return benchmark when scoping client engagements — if a client spends $5M, the...

60 min episode · 3 min read

→ WHAT IT COVERS Professor Mausam of IIT Delhi analyzes why India lags behind the US and China in AI development despite having 1.4 billion people and elite technical institutions. He examines faculty shortages, funding diffusion, compute delays, brain drain, and government initiatives, arguing that systemic change—starting with attracting top professors—is the prerequisite for building a genuine AI ecosystem.

60 min episode · 3 min read

→ WHAT IT COVERS IBM Research VP Sriram Raghavan explains why IBM trains its Granite models — currently 2B and 8B parameters — directly using reinforcement learning rather than distilling from larger models, and how combining direct RL training with inference-time scaling allows small models to match GPT-4o and Claude 3.5 on code and math benchmarks at a fraction of the cost. → KEY INSIGHTS - **Direct RL vs.

34 min episode · 3 min read

→ WHAT IT COVERS Abhishek Singh, head of India's AI Mission, outlines India's $1.2 billion, five-year national AI program spanning compute infrastructure, data platforms, talent retention, and sovereign model development, positioning India as a global AI player competing with the US and China across seven strategic pillars. → KEY INSIGHTS - **Compute Subsidization Model:** India incentivizes private sector GPU investment rather than buying compute directly, then subsidizes end-user costs by 40%.

58 min episode · 3 min read

→ WHAT IT COVERS Adi Kuruganti, Chief AI and Developer Ops at Automation Anywhere, explains why most enterprise agentic AI pilots fail to reach production, how combining deterministic automation with agentic AI drives mission-critical outcomes, and what a realistic three-to-five-year path toward autonomous enterprise operations looks like. → KEY INSIGHTS - **Pilot-to-Production Gap:** Most enterprise agentic AI deployments stall because teams treat deployment as a technology problem rather than...

54 min episode · 3 min read

→ WHAT IT COVERS Dan Faulkner, CEO of SmartBear, examines how AI coding tools like Claude Code and OpenAI Codex are accelerating software production faster than application testing can keep up, creating an "application integrity" gap where clean, passing code still fails real end users in deployed environments. → KEY INSIGHTS - **Application Integrity Gap:** Clean code passing unit tests does not guarantee a working application.

58 min episode · 3 min read

→ WHAT IT COVERS Sergey Levine, co-founder of Physical Intelligence and UC Berkeley professor, explains how robotic foundation models work, why diverse real-world data outperforms simulation, how Vision Language Action models enable generalist robots, and what the path toward autonomous continual learning systems looks like over the next several years.

60 min episode · 3 min read

→ WHAT IT COVERS Sebastian Risi, researcher at Sakana AI and author of *Neuroevolution*, explains why evolutionary algorithms offer a fundamentally different path to AI than gradient descent — covering plastic neural networks that rewire during operation, networks that grow from a single neuron, and how combining large language models with evolutionary search could automate scientific discovery. → KEY INSIGHTS - **Neuroevolution vs.

59 min episode · 3 min read

→ WHAT IT COVERS Sebastian Risi, researcher at Sakana AI, explains neuroevolution — using evolutionary algorithms instead of gradient descent to optimize neural networks — and explores biologically inspired approaches including plastic networks, growing architectures, and combining large language models with evolutionary search to advance AI capabilities.

61 min episode · 3 min read

→ WHAT IT COVERS Izhar Medalsy, CEO of Quantum Elements, explains how his company builds large-scale digital twins of quantum hardware — simulating up to 100 noisy qubits on classical supercomputers — and uses AI to identify noise sources, optimize error suppression, and push algorithm accuracy from 80% to 99% on IBM's platform using Shor's algorithm.

48 min episode · 3 min read

→ WHAT IT COVERS Kevin Tian, cofounder and CEO of Doppel, explains how AI-native social engineering attacks—spanning deepfake phone calls, fake LinkedIn personas, SEO poisoning, and brand impersonation—are scaling faster than human defenses, and how Doppel's platform scans, takes down, and simulates these multichannel threats for hundreds of enterprise customers.

42 min episode · 3 min read

→ WHAT IT COVERS Baris Gultekin, Snowflake's Head of Product for AI, explains how Snowflake builds enterprise AI agents that operate directly within governed data environments, covering the architecture behind Snowflake Intelligence, structured data retrieval challenges, agent reliability frameworks, and why data preparation is now the prerequisite for any viable enterprise AI strategy.

67 min episode · 3 min read

→ WHAT IT COVERS Pathway cofounder Zuzanna Stamirowska presents the Dragon Hatchling (BDH) architecture, a post-transformer neural network modeled on brain-like graph dynamics. The system stores state on edges rather than nodes, enables persistent memory at inference time, and trains comparably to GPT-2 while targeting enterprise reasoning tasks requiring small data and long-horizon coherence.

47 min episode · 3 min read

→ WHAT IT COVERS Phelim Brady, cofounder and CEO of Prolific, explains how his human data platform connects verified global participants with AI labs and researchers for post-training evaluation. With roughly 2 million registered participants and a 50/50 split between academic research and AI work, Prolific addresses the growing demand for rigorous human judgment in model evaluation.

46 min episode · 3 min read

→ WHAT IT COVERS Sharon Zhou, VP of AI at AMD and Stanford PhD graduate, explains how AMD uses AI agents and reinforcement learning to autonomously generate and optimize low-level GPU kernel code, enabling language models to run faster on AMD hardware while reducing the rare human expertise bottleneck in kernel engineering. → KEY INSIGHTS - **Catastrophic Forgetting Prevention:** When fine-tuning models without access to original pre-training data, reintroducing as little as 1% of pre-training...

57 min episode · 3 min read

→ WHAT IT COVERS David Ha, co-founder of Sakana AI, explains how evolutionary algorithms combined with large language models can merge frontier AI models, generate novel scientific ideas, and potentially push beyond the boundaries of existing human knowledge through collective intelligence systems and open-ended search strategies. → KEY INSIGHTS - **Model Merging Without Weights:** Sakana AI's ABMCTS system, presented as a NeurIPS spotlight, combines closed proprietary models like OpenAI,...

54 min episode · 3 min read

→ WHAT IT COVERS BCG Senior Partner Amanda Luther presents findings from an annual AI maturity study tracking 1,000–1,500 companies across 41 capability dimensions. Only 5% of companies qualify as AI leaders generating measurable P&L impact, while a widening value gap separates them from the 60% still classified as laggards or emerging adopters. → KEY INSIGHTS - **AI Maturity Distribution:** BCG's study segments companies into four tiers: 60% are laggards or emerging, 35% are scaling with...

61 min episode · 3 min read

→ WHAT IT COVERS Nick Frosst, Cohere cofounder and former Google Brain researcher under Geoffrey Hinton, explains why Cohere focuses on enterprise AI rather than AGI. He discusses building capital-efficient models requiring only two GPUs versus 16-plus for competitors, achieving 95% production deployment versus industry's 5%, and why transformer architectures remain dominant despite alternatives like capsule networks and neuroevolution approaches.

68 min episode · 3 min read

→ WHAT IT COVERS Carter Huffman, CTO of Modulate, explains how his company built ensemble AI models that analyze voice conversations in real time at massive scale. The architecture processes hundreds of millions of hours monthly for gaming safety, fraud detection, and voice AI applications by routing audio to specialized models rather than using single foundation models, achieving superior accuracy at one-thousandth the cost.

Monday morning, inbox, done.

Pick your shows, and start the week knowing what happened in your world.

1. Pick the Podcasts You Care About

Choose from 200+ curated shows or add any public RSS feed.

2. AI Reads Every New Episode

Key arguments, surprising data points, and frameworks worth stealing — pulled automatically.

3. One Email, Every Monday

A curated brief for each episode, with links to listen if something grabs you.


Get a free sample digest

See what your Monday email looks like — real AI summaries, no account needed.

One free sample — no spam, no commitment.