Latent Space

The Agent Network — Dharmesh Shah

98 min episode · 3 min read

AI-Generated Summary

Key Takeaways

  • Agent Definition Framework: Shah defines agents as AI-powered software that accomplishes a goal, deliberately keeping it broad. He proposes treating tools as atomic agents, creating a unified primitive where everything becomes an agent that can be composed. This enables thinking about multi-agent systems as networks of single-celled organisms that combine into more complex structures, with MCP serving as the discovery and delegation protocol between them.
  • Model Routing Economics: Agent.AI discovered users default to the highest-numbered model like GPT-4.5 regardless of need, driving up costs dramatically. By back-channel testing the same agent across different models and collecting human ratings, they achieved orders of magnitude cost reduction with zero quality loss. Auto-optimization based on actual performance data rather than model names represents a major efficiency opportunity for agent builders.
  • Cross-Agent Memory Architecture: The next frontier involves shared memory across agents rather than isolated per-agent memory. When a user teaches one agent their preferences, subsequent agents should access that knowledge with proper opt-in controls. This creates network effects where building on a platform with existing user memory provides immediate value versus starting from scratch as an independent agent.
  • Work vs Results Pricing: Customer support succeeds with results-based pricing because tickets have known costs and objective quality measures like CSAT scores. Most use cases lack these two dimensions: consistent economic value and objective outcome measurement. Logo design demonstrates the problem: value varies by orders of magnitude and quality remains subjective, making per-outcome pricing impractical despite the appeal.
  • Engineering Value Thesis: The total economic value solvable by software grows faster than the denominator of available engineers, including AI agents. Engineers gain power tools to solve exponentially more problems, increasing their value rather than decreasing it. The focus on denominator growth ignores numerator expansion: the dramatically larger problem space that becomes addressable with AI-augmented engineering capabilities.
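The model-routing idea can be sketched in a few lines: shadow-test the same agent across several models, collect human quality ratings, then route future traffic to the cheapest model whose rating is within tolerance of the best. Everything here (model names, prices, the 0.1-point tolerance) is an invented illustration of the approach described, not Agent.AI's actual implementation:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ModelStats:
    cost_per_call: float                          # dollars per call (assumed)
    ratings: list = field(default_factory=list)   # human ratings, 1-5 scale

def choose_model(stats: dict[str, ModelStats], tolerance: float = 0.1) -> str:
    """Pick the cheapest model within `tolerance` of the best mean rating."""
    means = {name: mean(s.ratings) for name, s in stats.items() if s.ratings}
    best = max(means.values())
    eligible = [name for name, m in means.items() if best - m <= tolerance]
    return min(eligible, key=lambda name: stats[name].cost_per_call)

# Example: the flagship model rates marginally higher but costs 30x more.
stats = {
    "flagship-4.5": ModelStats(0.30, [4.8, 4.7, 4.9]),
    "mini-4o":      ModelStats(0.01, [4.7, 4.8, 4.7]),
}
print(choose_model(stats))  # -> mini-4o: same quality band, far cheaper
```

With real rating data this becomes the auto-optimization loop described above: the router keeps sampling the expensive models in the background, so if the cheap model's quality drifts, traffic shifts back.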

What It Covers

Dharmesh Shah, HubSpot CTO and creator of Agent.AI, discusses his minimal agent definition, the shift from work-as-a-service to results-based pricing, and building a professional network for AI agents. He covers MCP adoption, multi-agent systems, memory architecture, model routing optimization, and why 2026 will be the year of agent networks rather than individual agents.

Key Questions Answered

  • MCP Adoption Driver: MCP succeeds because it solves agent discovery and delegation at the right abstraction level - simple enough for adoption but powerful enough for utility. OpenAPI exists but lacks the use-case-specific features needed for LLM tool discovery. The universe voted by rapid adoption because MCP adds marginal value without going too far, filling a gap that existing standards couldn't address.
  • Over-Engineering Calculus: Under-engineering beats over-engineering when uncertain because tech debt has known interest rates and payoff paths, while premature abstraction may never get used. With code generation trending toward zero cost for refactoring, the case for under-engineering strengthens further. Pay the interest when you arrive at the need rather than speculating on future requirements that may never materialize.

Notable Moment

Shah reveals he personally funds all model costs for Agent.AI's 1.3 million users, including expensive GPT-4.5 calls, viewing it as research benefiting humanity. He tells himself late at night that inference costs are dropping to justify the expense. This commitment to keeping the platform completely free while supporting all models regardless of cost demonstrates his long-term bet on agent networks.
