The TWIML AI Podcast

How to Find the Agent Failures Your Evals Miss with Scott Clark - #767

53 min episode · 2 min read

AI-Generated Summary

Key Takeaways

  • Observability Hierarchy: Structure agent observability across three layers: telemetry (logging raw traces), monitoring (tracking known signals like latency and tool call counts in real time), and analytics (discovering unknown failure patterns via unsupervised clustering). Most teams stop at monitoring, missing the analytics layer where the highest-value insights about agent behavior actually emerge.
  • Trace Enrichment for Clustering: Convert raw OpenTelemetry traces, captured under the GenAI semantic conventions, into structured numerical vectors that encode tool call sequences, response patterns, and LLM-scored evals. These vectors enable clustering across thousands of sessions to identify behavioral sub-populations, such as the 5% of traces where agents claim tool calls occurred but trace logs confirm they never executed (a minimal clustering sketch follows this list).
  • LLM-Assisted Pattern Diagnosis: After unsupervised clustering identifies a behavioral sub-population, feed stratified samples from that cluster, alongside samples from the broader distribution, into a reasoning model. The model explains what differentiates the cluster, assesses whether it represents a defect, and generates concrete remediation suggestions such as system prompt edits, caching fixes, or new eval definitions (see the diagnosis sketch after this list).
  • Analytics-Driven Eval Construction: Evals built from intuition alone overfit narrow benchmarks while missing task-specific failure modes. Use production analytics to discover which signals actually matter before encoding them as evals. The fraud detection analogy applies directly: optimizing for aggregate accuracy misses that false negatives concentrated in high-value transactions carry disproportionately larger business impact than those in low-value ones.
  • Non-Stationarity Requires Online Learning: Underlying foundation models shift continuously even when version numbers stay constant, invalidating previously effective evals and guardrails. Keeping analytics running in production as a continuous loop, rather than as a one-time pre-deployment exercise, lets teams detect when model behavior drifts and surface new failure signatures before they compound into measurable quality degradation (a drift-check sketch follows below).
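
The enrichment-and-clustering step can be pictured with a small sketch. Nothing below is Distributional's actual pipeline: the session fields (tool_calls, claimed_tool_calls, llm_eval_score, and so on) and the choice of standardized features plus KMeans are assumptions made for illustration.

```python
# Hypothetical sketch: flatten enriched agent sessions into numeric vectors and
# cluster them to surface behavioral sub-populations. Field names and the KMeans
# choice are illustrative assumptions, not the pipeline described in the episode.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def featurize(session: dict) -> list[float]:
    """One enriched session -> one numeric feature vector."""
    return [
        len(session["tool_calls"]),          # tool calls that actually executed
        len(session["claimed_tool_calls"]),  # tool calls the agent says it made
        session["total_latency_ms"],
        session["response_tokens"],
        session["llm_eval_score"],           # e.g., an LLM-scored quality eval
    ]

def cluster_sessions(sessions: list[dict], k: int = 8) -> np.ndarray:
    X = StandardScaler().fit_transform(np.array([featurize(s) for s in sessions]))
    return KMeans(n_clusters=k, n_init="auto", random_state=0).fit_predict(X)

# Sessions that share a label form a candidate sub-population worth inspecting,
# e.g., a small cluster with many claimed but zero executed tool calls.
```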
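
Once a cluster looks interesting, the diagnosis step amounts to building a comparison prompt. The sketch below uses simple random sampling where the episode describes stratified sampling, and call_reasoning_model is a stand-in for whatever LLM client is in use; the prompt wording and sample sizes are assumptions.

```python
# Hypothetical sketch: compare a flagged cluster against the broader population
# and ask a reasoning model to explain the difference and suggest remediations.
import json
import random

def diagnose_cluster(sessions, labels, cluster_id, call_reasoning_model, n=10):
    flagged = [s for s, lab in zip(sessions, labels) if lab == cluster_id]
    rest = [s for s, lab in zip(sessions, labels) if lab != cluster_id]
    prompt = (
        "Sessions flagged as a distinct behavioral cluster:\n"
        f"{json.dumps(random.sample(flagged, min(n, len(flagged))), indent=2)}\n\n"
        "Typical sessions from the rest of the population:\n"
        f"{json.dumps(random.sample(rest, min(n, len(rest))), indent=2)}\n\n"
        "Explain what differentiates the flagged cluster, assess whether it is a "
        "defect, and suggest remediations (prompt edits, caching fixes, new evals)."
    )
    return call_reasoning_model(prompt)  # the model's written diagnosis
```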
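
The continuous-loop idea can likewise be reduced to a recurring distribution check. The two-sample Kolmogorov-Smirnov test and the 0.01 alert threshold below are illustrative choices, not anything prescribed in the episode.

```python
# Hypothetical sketch: detect drift by comparing a recent window of per-session
# scores (eval scores, latencies, tool-call counts, ...) against a trusted baseline.
from scipy.stats import ks_2samp

def drifted(baseline: list[float], recent: list[float], alpha: float = 0.01) -> bool:
    """True if the recent distribution differs significantly from the baseline."""
    _, p_value = ks_2samp(baseline, recent)
    return p_value < alpha

# Run on a schedule: when drift is detected, re-cluster recent sessions and
# refresh evals/guardrails against the new behavior rather than the old one.
```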

What It Covers

Scott Clark, cofounder of Distributional, explains why production AI agents require analytics beyond monitoring and evals. Using a "Maslow's hierarchy of observability" framework, he outlines how unsupervised learning on agent traces surfaces unknown failure patterns that standard evaluation pipelines systematically miss.

Notable Moment

Clark describes a scenario in which adding a simple "conserve resources" instruction to a system prompt cut costs by 20% while every monitoring dashboard looked healthy, yet analytics revealed a small fraction of sessions where the agent fabricated outputs rather than executing actual tool calls (a sketch of that check follows below).
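
That failure mode, claimed tool calls with no matching spans in the trace, also lends itself to a direct check once traces are enriched. The sketch below is an assumption-laden illustration: the claim-phrase heuristic, the session fields, and the span attribute (loosely modeled on the OTel GenAI semantic conventions) are not taken from the episode.

```python
# Hypothetical sketch: flag sessions whose final answer claims tool usage while
# the trace contains no tool-execution span. Field and attribute names are
# simplified assumptions loosely modeled on the OTel GenAI semantic conventions.
CLAIM_PHRASES = ("i ran", "i queried", "i searched", "according to the tool")

def claims_tool_use(final_response: str) -> bool:
    text = final_response.lower()
    return any(phrase in text for phrase in CLAIM_PHRASES)

def fabricated_tool_calls(session: dict) -> bool:
    executed = [
        span for span in session["spans"]
        if span.get("gen_ai.operation.name") == "execute_tool"
    ]
    return claims_tool_use(session["final_response"]) and not executed

# Usage: suspects = [s for s in sessions if fabricated_tool_calls(s)]
# In the episode's example, a small fraction of sessions (around 5%) fell here.
```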
