20VC (20 Minute VC)

20VC: Cohere Founder on How Cohere Competes with OpenAI's and Anthropic's $BNs | Why Countries Should Fund Their Own Models & the Need for Model Sovereignty | How Sam Altman Has Done a Disservice to AI with Nick Frosst

67 min episode · 2 min read
Topics: Startups, Artificial Intelligence

AI-Generated Summary

Key Takeaways

  • Enterprise Model Differentiation: Cohere trains models specifically for enterprise tool use and business data integration rather than consumer engagement metrics, using synthetic data from fake companies, emails, and APIs to optimize workplace augmentation over conversational ability or entertainment value.
  • Scaling Law Limitations: Throwing more compute at models does not guarantee exponential progress, as evidenced by GPT-5's worse user experience and auto-selection delays. The industry still relies on the 2017 transformer architecture with minimal algorithmic changes, making data quality and product work more critical than raw compute.
  • Efficient Model Training: Cohere trains models to fit on two GPUs, spending orders of magnitude less than competitors on foundational models. This efficiency addresses enterprise deployment bottlenecks where companies lack GPU access, making the sweet spot between performance, cost, and available infrastructure crucial for production deployment.
  • Benchmark Gaming Reality: Industry benchmarks like HellaSwag and the ARC-AGI challenge do not reflect actual enterprise utility. Models can be trained specifically to score well on benchmarks without gaining real workplace value. Customer success depends on practical task completion, not leaderboard rankings or mathematical reasoning tests.
  • Forward-Deployed Engineering Value: Enterprise AI deployment requires forward-deployed engineers to customize models for specific business contexts, internal tools, and documentation. This approach is not a sign of poor technology but necessary infrastructure work, similar to how the industrial revolution required labor policy alongside technological advancement to create sustainable productivity gains.

What It Covers

Nick Frosst, Cohere cofounder, discusses competing against OpenAI and Anthropic with enterprise-focused models, challenges Sam Altman's AGI predictions, explains why scaling laws have limits, and advocates for sovereign AI models and forward-deployed engineering approaches.


Notable Moment

Frosst directly criticizes Sam Altman for making obviously wrong predictions about AGI timelines and existential threats, arguing that Altman's world tour warning global leaders was intellectually disingenuous. He contends it damaged productive discourse about real AI risks, such as income inequality and workforce disruption, by focusing attention on imagined digital gods.
