Ep. 367: What if AI Doesn’t Get Much Better Than This?
Episode · 97 min · Read time: 2 min · Topic: Artificial Intelligence
AI-Generated Summary
Key Takeaways
- ✓Scaling Law Failure: The 2020 Kaplan paper showed that language models improved dramatically as they were made bigger, enabling the GPT-3 and GPT-4 breakthroughs. By fall 2024 this had stopped working: OpenAI's Orion, Meta's Behemoth, and xAI's Grok-3 all failed to deliver the expected leaps despite massive compute investments, undercutting the assumed scaling path to AGI.
- ✓Post-Training Shift: After pre-training scaling failed, AI companies pivoted to post-training techniques like reinforcement learning and test-time compute to squeeze better performance from existing models. This produced incremental improvements measured by benchmark percentages rather than transformative new capabilities, fundamentally changing the industry trajectory from revolutionary to evolutionary progress.
- ✓Job Market Reality: Media reports conflate unrelated factors: recent tech-sector layoffs stem from corrections to pandemic-era overhiring, not AI replacement. A resurfaced MIT study found that 95% of companies attempting AI implementation failed and abandoned it. Actual AI revenue totals only $35 billion annually, versus $560 billion in capital expenditures over eighteen months.
- ✓Computer Science Careers: Master's degrees in computer science typically provide positive salary returns because two-year programs enable higher starting positions that offset lost earnings. PhDs require five-plus years and should only be pursued for research careers, not pure salary optimization. Degree quality and institutional reputation matter significantly for hiring outcomes.
- ✓Digital Minimalism Practice: Ed Sheeran eliminated his phone in 2015 after accumulating 10,000 contacts, switching to iPad-only email checked weekly. People adapted without conflict—no enforcement mechanisms exist requiring instant availability. The feared social consequences of communication boundaries rarely materialize; others simply adjust their expectations and move forward with their lives.
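The scaling-law story above can be sketched numerically. Kaplan et al. (2020) fit test loss as a power law in parameter count, L(N) = (N_c / N)^alpha; the constants below are the paper's reported fits for the parameter-scaling regime, used here purely as an illustration of diminishing returns, not as a prediction for any named model:

```python
# Illustrative sketch of the Kaplan et al. (2020) parameter scaling law:
# test loss falls as a power law in model size N, L(N) = (N_c / N)**alpha.
# alpha ~ 0.076 and N_c ~ 8.8e13 are the paper's fitted constants;
# treat the outputs as order-of-magnitude illustrations only.

def loss_from_params(n_params, n_c=8.8e13, alpha=0.076):
    """Predicted cross-entropy test loss for a model with n_params parameters."""
    return (n_c / n_params) ** alpha

# Each 10x jump in model size buys a smaller absolute loss reduction,
# which is why "just make it bigger" eventually stops feeling like a leap.
for n in [1e9, 1e10, 1e11, 1e12]:
    print(f"{n:.0e} params -> predicted loss {loss_from_params(n):.3f}")
```

The shrinking gaps between successive rows are the quantitative version of the episode's claim: the curve never turns upward, but the visible gains per order of magnitude of compute keep getting smaller.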
What It Covers
Cal Newport examines why GPT-5 disappointed expectations, arguing that AI scaling laws stopped working in 2024, forcing companies to shift from breakthrough pre-training gains to incremental post-training improvements, and that claims of AI-driven economic disruption are overstated.
Notable Moment
Newport reveals that by summer 2024, all major AI companies privately knew their scaling strategies had failed. OpenAI's GPT-5 used five to ten times more compute than GPT-4 but delivered only marginal improvements, while tech CEOs continued making grandiose AGI claims publicly despite internal disappointments.