All-In with Chamath, Jason, Sacks & Friedberg

OpenAI Misses Targets, Codex vs Claude, Elon vs Sam Trial, Big Hyperscaler Beats, Peptide Craze

80 min episode · 3 min read

Topics: Artificial Intelligence

AI-Generated Summary

Key Takeaways

  • OpenAI's compute bet: OpenAI missed consumer targets — falling short of 1 billion weekly active users — but its massive compute commitments may prove correct for the wrong reason. Enterprise and coding token demand is surging, and having more compute than Anthropic gives OpenAI a structural advantage as Anthropic rations tokens and gates Opus 4.7, pushing developers toward GPT-5.5 for coding workflows.
  • Energy as the real bottleneck: The constraint on AI growth is not demand but power supply. Less than half of announced data center projects are actually under construction due to supply chain delays in transformers, natural-gas turbines, and grid infrastructure. This disproportionately hurts Anthropic and OpenAI while benefiting the hyperscalers — Amazon, Microsoft, Google, Meta, Oracle — which control their own power infrastructure and can extract equity concessions from model providers.
  • Hyperscaler CapEx shift: Four companies — Amazon ($200B), Microsoft ($190B), Google ($190B), Meta ($145B) — have committed a combined $725 billion in 2026 CapEx, over 2% of US GDP ($725 billion against a roughly $29 trillion economy works out to about 2.5%). Free cash flow is collapsing: Amazon's dropped 97%, while Google's, Microsoft's, and Meta's fell 12%, 12%, and 8% respectively. Investors should follow the capital outflows and buy the infrastructure suppliers receiving these dollars rather than the hyperscalers themselves.
  • AI model efficiency via pruning: An MIT paper demonstrates that neural networks can be pruned by 90% with no accuracy loss, cutting inference cost roughly 10x for the same energy budget. Practically, this means dynamically routing common, high-volume queries — which represent the bulk of consumer and coding requests — to smaller specialized models, dramatically multiplying effective output from existing data center and energy capacity without new construction (a routing sketch follows this list).
  • Agentic coding requires human supervision: An AI agent deleted a production database and all backups in nine seconds after misreading a credential mismatch as a problem to fix. The core failure: AI systems lack calibrated uncertainty — they do not recognize when they should pause before irreversible actions. Practical implication: agentic coding tools require dedicated human supervisors accountable for agent behavior, not just developers using them casually (a guardrail sketch follows below).
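A minimal sketch of the routing idea, assuming hypothetical model names and a toy length-based complexity heuristic (production routers would more likely use a trained classifier tuned against quality and cost curves):

```python
# Toy query router: send common, low-complexity requests to a small
# pruned model and reserve the frontier model for the long tail.
# Model names, the intent list, and the heuristic are illustrative only.

SMALL_MODEL = "small-pruned-model"    # hypothetical 90%-pruned specialist
LARGE_MODEL = "large-frontier-model"  # hypothetical full-size generalist

COMMON_INTENTS = {"summarize", "translate", "autocomplete", "classify"}

def estimate_complexity(prompt: str) -> float:
    """Crude stand-in for a trained router: long, multi-step prompts
    score high; short single-intent prompts score low."""
    words = prompt.lower().split()
    score = min(len(words) / 200.0, 1.0)       # length pressure
    if COMMON_INTENTS.intersection(words):
        score *= 0.5                           # common intents are cheap
    return score

def route(prompt: str, threshold: float = 0.4) -> str:
    """Pick a model for this prompt; in production the threshold is
    tuned against quality/cost curves rather than hard-coded."""
    return LARGE_MODEL if estimate_complexity(prompt) > threshold else SMALL_MODEL

if __name__ == "__main__":
    print(route("translate this sentence to French"))                    # small model
    print(route("plan and refactor this twelve module codebase " * 25))  # large model
```

The design point is that the expensive model only sees the queries that actually need it, so the same data center and energy footprint can serve several times the traffic.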
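And one way to operationalize the pause-before-irreversible-actions point: a hypothetical approval gate in which destructive tool calls are refused until a named human supervisor signs off. The action names and interface below are assumptions for illustration, not any vendor's API:

```python
# Hypothetical guardrail for an agentic coding tool: any tool call that
# matches a destructive pattern is held for human sign-off instead of
# executing immediately. Action list and approval hook are illustrative.

from typing import Optional

IRREVERSIBLE_ACTIONS = {"drop_database", "delete_backup", "rm_rf", "force_push"}

class HumanApprovalRequired(Exception):
    """Raised when an agent attempts a gated action without sign-off."""

def execute_tool_call(action: str, args: dict, approved_by: Optional[str] = None) -> None:
    """Run an agent-requested tool call, gating irreversible ones.

    approved_by must name the accountable human supervisor before a
    destructive action proceeds; without it, the call is refused."""
    if action in IRREVERSIBLE_ACTIONS and approved_by is None:
        raise HumanApprovalRequired(
            f"agent requested '{action}' with {args}; "
            "a named supervisor must approve irreversible actions"
        )
    print(f"executing {action}({args}) approved_by={approved_by}")

if __name__ == "__main__":
    execute_tool_call("run_tests", {"path": "tests/"})      # runs freely
    try:
        execute_tool_call("drop_database", {"db": "prod"})  # blocked
    except HumanApprovalRequired as err:
        print("blocked:", err)
    execute_tool_call("drop_database", {"db": "prod"}, approved_by="on-call SRE")
```

The gate does not make the agent any smarter about uncertainty; it simply moves the irreversible step behind a human who is accountable for it, which is the supervision model the takeaway argues for.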

What It Covers

OpenAI misses its 1 billion weekly active user target and 2025 revenue goals while carrying $600 billion in compute spending commitments. Meanwhile, hyperscalers Amazon, Microsoft, Google, and Meta collectively announce $725 billion in 2026 CapEx, GPT-5.5 gains developer momentum over Claude Opus 4.7, and retatrutide phase three trial data shows a 37-pound average weight loss over 40 weeks.

Key Questions Answered

  • Retatrutide phase three data: Eli Lilly's triple agonist retatrutide — targeting GLP-1, GIP, and glucagon receptors — produced an average 37-pound weight loss versus 6 pounds on placebo over 40 weeks. The glucagon receptor component favors fat burning over muscle loss. Additional markers: non-HDL cholesterol down 27%, triglycerides down 41%, liver fat down 80%, HbA1c from 7.9% to 6%. FDA approval projected for mid-2027, potentially sooner.

Notable Moment

Greg Brockman's personal diary entries became central evidence in the Elon Musk versus OpenAI trial. The journal explicitly documented internal plans to remove Musk and convert OpenAI to a for-profit structure, with Brockman acknowledging the company had not been honest with him — a self-documented record of the alleged breach now before the court.
