
Scott Clark


We have 1 summarized appearance for Scott Clark so far.

All Appearances: 1 episode on 1 podcast

AI Summary

→ WHAT IT COVERS

Scott Clark, co-founder of Distributional, explains why production AI agents require analytics beyond monitoring and evals. Using a "Maslow's hierarchy of observability" framework, he outlines how unsupervised learning on agent traces surfaces unknown failure patterns that standard evaluation pipelines systematically miss.

→ KEY INSIGHTS

- **Observability Hierarchy:** Structure agent observability across three layers: telemetry (logging raw traces), monitoring (tracking known signals like latency and tool-call counts in real time), and analytics (discovering unknown failure patterns via unsupervised clustering). Most teams stop at monitoring, missing the analytics layer where the highest-value insights about agent behavior actually emerge.
- **Trace Enrichment for Clustering:** Convert raw OpenTelemetry traces that follow the GenAI semantic convention into structured numerical vectors capturing tool-call sequences, response patterns, and LLM-scored evals. These vectors enable clustering across thousands of sessions to identify behavioral sub-populations, such as the 5% of traces where agents claim tool calls occurred but trace logs confirm they never executed. (A minimal enrichment-and-clustering sketch follows this summary.)
- **LLM-Assisted Pattern Diagnosis:** After unsupervised clustering identifies a behavioral sub-population, feed stratified samples from that cluster and from the broader distribution into a reasoning model. The model explains what differentiates the cluster, assesses whether it represents a defect, and generates concrete remediation suggestions such as system prompt edits, caching fixes, or new eval definitions. (See the diagnosis sketch below.)
- **Analytics-Driven Eval Construction:** Evals built from intuition alone overfit narrow benchmarks while missing task-specific failure modes. Use production analytics to discover which signals actually matter before encoding them as evals. The fraud-detection analogy applies directly: optimizing accuracy alone misses that false negatives concentrated in high-value transactions can carry catastrophically disproportionate business impact compared to low-value ones. (The toy cost calculation below makes this concrete.)
- **Non-Stationarity Requires Online Learning:** Underlying foundation models shift continuously even when version numbers stay constant, invalidating previously effective evals and guardrails. Keeping analytics running in production as a continuous loop, rather than as a one-time pre-deployment exercise, lets teams detect when model behavior drifts and surfaces new failure signatures before they compound into measurable quality degradation. (See the drift-detector sketch below.)

→ NOTABLE MOMENT

Clark describes a scenario where adding a simple "conserve resources" instruction to a system prompt caused costs to drop 20% while every monitoring dashboard stayed healthy, yet analytics revealed a small fraction of sessions where the agent fabricated outputs rather than executing actual tool calls. (The final sketch below shows a cross-check that catches exactly this failure.)

💼 SPONSORS

- Distributional: https://dbnl.com

🏷️ AI Agents, Observability, Eval Design, Production Analytics, LLM Reliability
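Below is a minimal sketch of the trace enrichment and clustering step, assuming sessions have already been exported as dicts whose spans carry OTel GenAI semantic-convention attributes (`gen_ai.operation.name`, `gen_ai.tool.name`). The `eval.faithfulness` attribute, the feature choices, and the helper names are illustrative assumptions, not Distributional's actual pipeline.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def featurize(trace: dict) -> list[float]:
    """Flatten one agent session trace into a fixed-length numeric vector."""
    spans = trace["spans"]
    tool_calls = [s for s in spans if s.get("gen_ai.operation.name") == "execute_tool"]
    llm_calls = [s for s in spans if s.get("gen_ai.operation.name") == "chat"]
    return [
        float(len(tool_calls)),                         # how many tools the agent ran
        float(len(llm_calls)),                          # how many LLM calls it made
        sum(s.get("duration_ms", 0.0) for s in spans),  # total session latency
        trace.get("eval.faithfulness", 0.0),            # LLM-scored eval (assumed attribute)
    ]

def cluster_traces(traces: list[dict], k: int = 8) -> np.ndarray:
    """Cluster enriched session vectors to surface behavioral sub-populations."""
    X = StandardScaler().fit_transform([featurize(t) for t in traces])
    return KMeans(n_clusters=k, n_init="auto", random_state=0).fit_predict(X)
```

Clusters whose members share an unusual signature (for example, answers that mention tools but sessions with zero tool spans) become candidates for the diagnosis step sketched next.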
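A sketch of the LLM-assisted diagnosis step, assuming an OpenAI-compatible client; the prompt wording, sample size, and model name are illustrative choices rather than the exact method described in the episode.

```python
import json
import random
from openai import OpenAI

client = OpenAI()

def diagnose_cluster(cluster: list[dict], population: list[dict], n: int = 10) -> str:
    """Ask a reasoning model what differentiates a cluster and whether it is a defect."""
    prompt = (
        "Compare these agent traces sampled from one behavioral cluster against "
        "traces from the overall population. Explain what differentiates the "
        "cluster, judge whether it represents a defect, and suggest remediations "
        "(system prompt edits, caching fixes, new eval definitions).\n\n"
        f"CLUSTER SAMPLE:\n{json.dumps(random.sample(cluster, min(n, len(cluster))), indent=2)}\n\n"
        f"POPULATION SAMPLE:\n{json.dumps(random.sample(population, min(n, len(population))), indent=2)}"
    )
    resp = client.chat.completions.create(
        model="o3-mini",  # any reasoning-capable model would do here
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```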
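The fraud analogy can be made concrete with a toy calculation: two classifiers with identical accuracy can differ by orders of magnitude in business cost once false negatives are weighted by transaction value. All numbers here are invented for illustration.

```python
def accuracy(labels, preds):
    return sum(y == p for y, p in zip(labels, preds)) / len(labels)

def missed_fraud_cost(labels, preds, values):
    """Sum of transaction value lost to false negatives (fraud predicted as clean)."""
    return sum(v for y, p, v in zip(labels, preds, values) if y == 1 and p == 0)

labels  = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]        # 1 = fraud
values  = [10_000, 50, 20, 20, 20, 20, 20, 20, 20, 20]
preds_a = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]        # misses the $50 fraud
preds_b = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]        # misses the $10,000 fraud

print(accuracy(labels, preds_a), missed_fraud_cost(labels, preds_a, values))  # 0.9 50
print(accuracy(labels, preds_b), missed_fraud_cost(labels, preds_b, values))  # 0.9 10000
```

Both classifiers score 90% accuracy; only the value-weighted metric reveals that one of them is catastrophically worse.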
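One way to run analytics as a continuous loop rather than a one-time exercise is to test a rolling window of a production eval score against a known-good reference distribution. The sketch below uses a two-sample Kolmogorov-Smirnov test; the window size, significance threshold, and class shape are assumptions, not a described implementation.

```python
from collections import deque
from scipy.stats import ks_2samp

class DriftDetector:
    def __init__(self, reference: list[float], window: int = 500, alpha: float = 0.01):
        self.reference = reference          # scores from a known-good baseline period
        self.recent = deque(maxlen=window)  # rolling window of live scores
        self.alpha = alpha

    def observe(self, score: float) -> bool:
        """Record a new eval score; return True once the live distribution has drifted."""
        self.recent.append(score)
        if len(self.recent) < self.recent.maxlen:
            return False  # wait for a full window before testing
        _, p_value = ks_2samp(self.reference, list(self.recent))
        return p_value < self.alpha
```

Because the test compares full distributions rather than a single mean, it can flag shifts in model behavior that leave dashboard averages untouched.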
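The fabricated-tool-call failure in Clark's anecdote is also catchable with a direct cross-check: compare tool calls the agent claims in its final answer against tool spans actually recorded in the trace. This is a hypothetical check, not one quoted from the episode, and the regex-based claim extraction is a naive stand-in for structured output.

```python
import re

def fabricated_tool_calls(final_answer: str, trace: dict) -> set[str]:
    """Return tool names mentioned in the answer but absent from the trace."""
    executed = {s["gen_ai.tool.name"] for s in trace["spans"]
                if s.get("gen_ai.operation.name") == "execute_tool"}
    claimed = set(re.findall(r"I (?:called|used|ran) the (\w+) tool", final_answer))
    return claimed - executed  # non-empty set means the agent fabricated a call
```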
