Deep Questions with Cal Newport

Ep. 367: What if AI Doesn’t Get Much Better Than This?

97 min episode · 2 min read

Topics

Artificial Intelligence

AI-Generated Summary

Key Takeaways

  • Scaling Law Failure: The 2020 Kaplan et al. paper showed that language models improve predictably as they grow, the power-law relationship behind the GPT-3 and GPT-4 breakthroughs (a sketch of the original law follows this list). By fall 2024 that relationship had broken down: OpenAI's Orion, Meta's Behemoth, and xAI's Grok 3 all failed to deliver the expected leaps despite massive compute investments, stalling the assumed path to AGI.
  • Post-Training Shift: After pre-training scaling stalled, AI companies pivoted to post-training techniques such as reinforcement learning and test-time compute (see the best-of-n sketch after this list) to squeeze more performance from existing models. The result has been incremental gains measured in benchmark percentage points rather than transformative new capabilities, shifting the industry's trajectory from revolutionary to evolutionary.
  • Job Market Reality: Media reports conflate unrelated factors; tech-sector layoffs stem from corrections to pandemic-era overhiring, not AI replacement. A resurfaced MIT study found that 95% of companies attempting AI implementations failed and abandoned them. Actual AI revenue runs about $35 billion annually, against roughly $560 billion in capital expenditures over eighteen months.
  • Computer Science Careers: A master's degree in computer science typically pays off because the higher starting position it unlocks offsets the two years of foregone earnings (a rough break-even sketch follows this list). A PhD takes five-plus years and should be pursued only for a research career, not for salary optimization. Program quality and institutional reputation matter significantly for hiring outcomes.
  • Digital Minimalism Practice: Ed Sheeran gave up his phone in 2015 after accumulating 10,000 contacts, switching to email on an iPad checked weekly. People adapted without conflict; no enforcement mechanism requires instant availability. The feared social consequences of communication boundaries rarely materialize: others simply adjust their expectations and move on with their lives.
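
The "scaling law" in the first takeaway refers to the power-law fits in Kaplan et al. (2020). A minimal sketch of the parameter-count relation, using the paper's published constants (the exact values are empirical fits, so treat them as approximate, and the model sizes in the loop are rough public estimates, not confirmed figures):

    # Kaplan et al. (2020) parameter-count scaling law: L(N) = (N_c / N) ** alpha_N.
    # The constants below are the paper's reported fits; treat them as approximate.
    ALPHA_N = 0.076   # fitted exponent for parameter count
    N_C = 8.8e13      # fitted "critical" parameter count

    def loss(n_params: float) -> float:
        """Predicted cross-entropy test loss (in nats) at n_params parameters."""
        return (N_C / n_params) ** ALPHA_N

    for n in (1.5e9, 175e9, 1.8e12):  # roughly GPT-2, GPT-3, and rumored GPT-4 scale
        print(f"{n:.1e} params -> predicted loss {loss(n):.2f}")

The flattening is baked into the power law itself: each constant improvement in loss requires multiplying the parameter count, which is part of why ever-larger training runs eventually stop paying for themselves.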
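
"Test-time compute" in the second takeaway covers a family of techniques; the episode does not name a specific one, so here is a minimal best-of-n sampling sketch as one representative example. The generate and score functions are hypothetical stand-ins for a language model and a verifier or reward model:

    import random

    def generate(prompt: str) -> str:
        """Hypothetical stand-in for sampling one candidate answer from a model."""
        return f"candidate-{random.randint(0, 9)} for {prompt!r}"

    def score(candidate: str) -> float:
        """Hypothetical stand-in for a verifier or reward model."""
        return random.random()

    def best_of_n(prompt: str, n: int = 8) -> str:
        """Spend extra inference-time compute: sample n candidates, keep the best one."""
        candidates = [generate(prompt) for _ in range(n)]
        return max(candidates, key=score)

    print(best_of_n("What is 17 * 24?"))

The design point: accuracy now scales with how much you spend per query rather than with how big the underlying model is, which is exactly the evolutionary, rather than revolutionary, improvement curve described above.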
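
The master's-degree claim in the fourth takeaway is a break-even argument. A toy version with entirely hypothetical salary and tuition figures (none of these numbers come from the episode):

    # Hypothetical figures, not from the episode; ignores raises, taxes, and aid.
    BACHELORS_SALARY = 95_000   # assumed starting salary with a bachelor's
    MASTERS_SALARY = 125_000    # assumed starting salary with a master's
    TUITION_PER_YEAR = 25_000   # assumed annual tuition
    PROGRAM_YEARS = 2

    # Cost of the degree: tuition plus two years of foregone bachelor's-level pay.
    cost = PROGRAM_YEARS * (TUITION_PER_YEAR + BACHELORS_SALARY)
    annual_gain = MASTERS_SALARY - BACHELORS_SALARY
    print(f"Break-even after roughly {cost / annual_gain:.1f} years of work")

Under these assumptions the degree pays for itself about eight years in, comfortably within a career; shrink the salary delta and the return evaporates, which is why the episode stresses program quality.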

What It Covers

Cal Newport examines why GPT-5 disappointed expectations, arguing that AI scaling laws stopped delivering in 2024 and forced companies to shift from breakthrough pre-training to incremental post-training improvements, and that claims of AI-driven economic disruption are overstated.

Notable Moment

Newport reports that by summer 2024 every major AI lab privately knew its scaling strategy had failed. OpenAI's GPT-5 used five to ten times more compute than GPT-4 yet delivered only marginal improvements, even as tech CEOs publicly continued making grandiose AGI claims despite the internal disappointment.
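
A rough back-of-envelope check shows why five to ten times the compute can read as "marginal." Assuming a Kaplan-style compute power law with an exponent around 0.05 still applied (an assumption for illustration, not a claim about GPT-5's actual training curve):

    # If loss followed L(C) = (C_c / C) ** alpha_C with alpha_C ~ 0.05 (roughly the
    # Kaplan et al. compute exponent; an illustrative assumption, not GPT-5 data),
    # multiplying compute by k only shrinks loss by a factor of k ** alpha_C.
    ALPHA_C = 0.05

    for k in (5, 10):
        reduction = 1 - k ** -ALPHA_C
        print(f"{k}x compute -> roughly {reduction:.0%} lower predicted loss")

Single-digit to low-teens loss reductions rarely surface as qualitatively new capabilities, which matches the "marginal improvements" framing here.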
