
Karan Singhal

1 episode
1 podcast

We have 1 summarized appearance for Karan Singhal so far.

Featured On 1 Podcast

All Appearances


AI Summary

→ WHAT IT COVERS

Karan Singhal, Head of Health AI at OpenAI, details how frontier models have reached attending-physician-level performance on medical queries, how HealthBench's 49,000 evaluation criteria measure that progress, and how ChatGPT Health, launching free globally in 2026, aims to deliver universal access to medical expertise for the 230 million weekly users already consulting AI on health questions.

→ KEY INSIGHTS

- **HealthBench Hard as a capability benchmark:** OpenAI's HealthBench Hard dataset was constructed by selecting questions where existing models performed worst, making it adversarially difficult. GPT-4o scored 0% when the benchmark launched; current OpenAI models score approximately 40%, while competitor models sit around 20%. The benchmark remains unsaturated, making it the most reliable external signal for tracking genuine medical AI progress, unlike saturated multiple-choice exam scores that no longer differentiate frontier models.

- **Worst-of-N sampling as a safety metric:** Rather than relying on log-probability calibration, which breaks down with reasoning models that emit thinking tokens, OpenAI measures model reliability by sampling outputs 20–50 times and recording the worst result. The key finding: o3's worst-of-N performance substantially exceeded GPT-4o's best-case performance. For users, this means a single run of a reasoning model like GPT-5 with thinking enabled already approximates the reliability benefit of multiple sampling passes.

- **260-physician network structures model behavior:** Instead of writing rules from first principles, OpenAI works with a tiered cohort of 260+ physicians: strategic advisers, a Slack-integrated annotation community, and a small core team that translates physician consensus into training data and evaluation rubrics. ChatGPT for Healthcare underwent nine red-teaming waves over six months with this group before launch, producing culturally calibrated, uncertainty-aware responses rather than a single-author spec.

- **Context volume is the primary performance lever today:** Models perform at their ceiling when given maximum patient context. Uploading exported EMR PDFs, lab results, and wearable data into a reasoning model produces outputs competitive with attending physicians on most non-subspecialty cases. ChatGPT Health, launching in early 2026, automates this by connecting directly to electronic medical records and consumer wearables like Apple Health, eliminating the manual export-and-paste workflow that currently limits most patients.

- **First RCT of AI physician copilots shows statistically significant outcome improvement:** OpenAI partnered with Kenya's PendaHealth clinic network to run what is described as the first randomized controlled trial of an LLM-based clinical copilot. Clinicians in the treatment arm received real-time AI flags while entering notes into their EMR; patients treated by AI-assisted clinicians showed statistically significant improvements in diagnosis and treatment outcomes versus the control group, providing real-world validation beyond offline benchmark performance.

- **Chain-of-thought reasoning has not drifted toward illegibility at scale:** Concerns that reinforcement-learning pressure would cause models to develop opaque internal "neuralese" dialects in their thinking tokens have not materialized at current scale. Models default to English reasoning because it aligns with their training prior, and OpenAI has actively studied whether scaling RL degrades this, finding no robust evidence of that trend yet. This preserves chain-of-thought as a practical safety-monitoring tool for detecting scheming or undesirable reasoning patterns.

- **ChatGPT Health launches free with no ads and no training on user data:** OpenAI is releasing ChatGPT Health globally at no cost, without rate limits, and with explicit commitments that connected health data, including medical records and wearables, will not be used to train foundation models. Health data is stored in an isolated, separately encrypted partition within ChatGPT, segregated from general memories and other app integrations, specifically to lower the activation energy for patients who would otherwise avoid connecting sensitive medical information.

→ NOTABLE MOMENT

During a discussion of model reliability, Singhal revealed that OpenAI's nano-tier models, the smallest, cheapest GPT-5 variants available via API, now perform comparably to o3, which was the flagship reasoning model only months ago. This compression of capability into smaller models suggests the performance floor for medical AI is rising faster than most observers track.

💼 SPONSORS

- Granola: https://granola.ai
- Anthropic Claude: https://claude.ai/tcr
- Servul: https://servul.com/cognitive
- Framer: https://framer.com/cognitive
- Tasklet: https://tasklet.ai

🏷️ Medical AI, HealthBench, OpenAI, AI Safety, Clinical Decision Support, Universal Basic Intelligence, AI Evaluation
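The worst-of-N reliability metric described in the key insights can be sketched in a few lines: sample a model N times on the same prompt, score each response, and report the minimum score as the reliability floor. This is a minimal illustration, not OpenAI's implementation; `fake_generate` and `fake_score` are hypothetical stand-ins for a real model call and a HealthBench-style rubric grader (higher score is better).

```python
import random


def worst_of_n(generate, score, prompt, n=20, seed=0):
    """Return the lowest-scoring of n sampled responses and its score.

    `generate` and `score` are caller-supplied stand-ins: `generate`
    draws one model response, `score` grades it (higher is better).
    The minimum over n samples is the reliability floor the episode
    describes OpenAI reporting (e.g. o3 vs. GPT-4o).
    """
    rng = random.Random(seed)
    worst, worst_score = None, float("inf")
    for _ in range(n):
        response = generate(prompt, rng)
        s = score(response)
        if s < worst_score:
            worst, worst_score = response, s
    return worst, worst_score


# Stub model whose response quality varies from sample to sample.
def fake_generate(prompt, rng):
    return {"text": f"answer to {prompt!r}", "quality": rng.uniform(0.3, 0.95)}


def fake_score(response):
    return response["quality"]


_, floor = worst_of_n(fake_generate, fake_score, "chest pain triage", n=50)
```

With the same seed, the floor can only drop (or stay equal) as N grows, which is why a worst-of-50 number is a stricter claim than a best-case or average benchmark score.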
