
Abhi Mahajan



AI Summary

→ WHAT IT COVERS

Part two of a marathon live show examining AI for biology, recursive self-improvement, and geopolitical competition. Abhi Mahajan discusses AI foundation models for predicting cancer treatment response, Helen Toner presents CSET's report on automated AI R&D, which found no consensus among experts, and Jeremie Harris analyzes US-China AI competition dynamics and the infrastructure vulnerabilities threatening American technological leadership.

→ KEY INSIGHTS

- **Biology AI Validation Gap:** Most AI biology papers suffer from hidden confounding variables that domain experts recognize but language models miss. Small-molecule binding affinity studies, for example, can be confounded by which chemist produced the molecule, since chemists specialize in specific targets and create similar-looking compounds.
- **Export Controls Worked:** Export controls on chips demonstrably slowed Chinese AI development; DeepSeek's CEO publicly stated before the company's breakthrough that chip access, not algorithmic capability, was its primary bottleneck.
- **Cancer Treatment Biomarkers:** Noetic AI profiles tumors using four modalities: pathology slides, 16-plex spatial proteomics for cell types, 19,000-gene spatial transcriptomics for functional state, and exome sequencing for genetic alterations. Their foundation model uses self-supervised masking to create tumor embeddings in which responder populations fall into distinct regions of embedding space, potentially revealing biomarkers that no human understands but that predict treatment response better than traditional markers.
- **Clinical Trial Economics:** Ninety-seven percent of oncology trials fail, yet post-failure analysis typically reveals that some patients responded to the drug. Researchers identify complex, heterogeneous biological signatures in responders involving specific cytokine groups or granzyme gene expression patterns. These discoveries rarely lead to actionable insights because the biomarkers defining patient response may be fundamentally non-human-legible, requiring black-box models to capture the relevant biological information.
- **AI R&D Automation Uncertainty:** CSET's closed-door workshop with frontier-lab researchers, policy experts, and AI safety researchers failed to establish any consensus about automated AI R&D timelines or impacts. Participants agreed on near-term 2026-2027 developments but diverged completely on whether systems will fully replace human researchers or hit fundamental bottlenecks. This is a major source of strategic surprise: participants hold incompatible world models despite examining identical evidence.
- **Infrastructure Vulnerability Assessment:** Every American AI data center faces compromise risk from Chinese-manufactured components and personnel. Fifty percent of top AI researchers are Chinese nationals, including those at US frontier labs, and the power grid contains Chinese transformer components with documented trojans built for takedown capability. A plausible Taiwan invasion scenario begins with China attempting to disable the American electrical grid, preventing any AI competition before chip-manufacturing questions become relevant.
- **Biological Ground Truth Problem:** Biology lacks verifiable ground truth for clinically valuable problems, unlike math and coding, where rewards are cheap and fast. Training reinforcement learning on toxicology requires observing effects over timescales from seconds to years, across multiple species, with dose-dependent and organ-specific outcomes only observable in vivo. This makes the biology AI feedback loop fundamentally slower than software domains, limiting recursive-improvement potential regardless of algorithmic advances.
- **S-Curve Parameter Disagreement:** AI capability development follows an S-curve with three critical parameters: lead-up duration, curve steepness, and ceiling height. Most experts cluster in two camps: short lead-up, steep curve, and high ceiling; or long lead-up, gradual curve, and low ceiling. Unexplored combinations, such as a steep curve with a low ceiling or a gradual curve with a high ceiling, may better describe reality, particularly regarding superhuman-but-not-godlike AI plateaus.

→ NOTABLE MOMENT

Helen Toner describes the workshop's first session, in which Ryan Greenblatt, Nicholas Carlini, Dash Kapoor, and Thomas Larson argued so intensely about automated AI research and development that they kept debating straight through the coffee break while other participants stood up to get refreshments. This captured the workshop's core finding: leading experts examining identical evidence maintain fundamentally incompatible world models about whether recursive self-improvement will occur.

💼 SPONSORS

- [GovAI](https://governance.ai/opportunities)
- [Blitsy](https://blitsy.com)
- [Granola](https://cognitiverevolution.ai)
- [Tasklet](https://tasklet.ai)
- [Servo](https://serval.com/cognitive)

🏷️ AI for Biology, Recursive Self-Improvement, Cancer Treatment AI, Automated AI R&D, US-China AI Competition, AI Infrastructure Security, Foundation Models
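The three-parameter S-curve framing from the key insights can be sketched numerically. This is a toy illustration only, assuming a generalized logistic form; the function name, parameter values, and the specific camp labels are hypothetical, not from the episode.

```python
import math

def capability_s_curve(t, lead_up, steepness, ceiling):
    """Toy generalized logistic curve for capability over time t.

    lead_up   -- midpoint of the transition (time until fastest growth)
    steepness -- growth rate at the midpoint
    ceiling   -- asymptotic capability level
    """
    return ceiling / (1 + math.exp(-steepness * (t - lead_up)))

# Illustrative parameter sets: the two expert camps described above,
# plus one of the "unexplored" combinations (steep curve, low ceiling).
camps = {
    "short lead-up, steep, high ceiling": (2, 3.0, 100),
    "long lead-up, gradual, low ceiling": (10, 0.5, 20),
    "steep curve, low ceiling":           (2, 3.0, 20),
}
for name, (lead_up, k, c) in camps.items():
    trajectory = [round(capability_s_curve(t, lead_up, k, c), 1)
                  for t in range(0, 16, 5)]
    print(f"{name}: {trajectory}")
```

The point of the sketch is that very different long-run worlds (high vs. low ceiling) can look nearly identical early on, which is one reason experts examining the same evidence diverge.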
