
Kiriti Badham

1 episode
1 podcast

We have 1 summarized appearance for Kiriti Badham so far. Browse all podcasts to discover more episodes.

Featured On 1 Podcast

All Appearances

1 episode

AI Summary

→ WHAT IT COVERS

Aishwarya Reganti and Kiriti Badham share lessons from 50+ AI deployments at OpenAI, Google, and Amazon, explaining why most AI products fail due to non-determinism and agency-control tradeoffs, plus their framework for building successful AI systems.

→ KEY INSIGHTS

- **Non-deterministic challenge:** AI products differ fundamentally from traditional software because both the input (user behavior expressed in natural language) and the output (LLM responses) are unpredictable. Product builders cannot map workflows deterministically the way booking.com does, so they need new approaches to handle uncertainty on both ends of the interaction.
- **Agency-control tradeoff:** Start AI products with high human control and low AI autonomy, then gradually increase agency as trust builds. For customer support, begin with routing suggestions that humans review, progress to draft responses, then eventually to autonomous ticket resolution, allowing four to six months minimum for meaningful ROI.
- **CCCD framework:** Continuous Calibration / Continuous Development replaces traditional CI/CD for AI. Scope capabilities with curated data, deploy with evaluation metrics, then analyze emerging behavior patterns that users exhibit but that weren't predicted. Iterate when new data-distribution patterns stop appearing, signaling readiness for increased autonomy.
- **Leadership requirement:** Successful AI adoption requires CEO-level engagement. The Rackspace CEO blocks 4-6 AM daily for "catching up with AI," rebuilding intuitions from scratch. Leaders must accept that their decade of experience may not apply and become the "dumbest person in the room," willing to learn from everyone.
- **Evaluation balance:** There is a false dichotomy between pre-deployment evals and production monitoring; both are essential. Evals catch known failure modes during development. Production monitoring reveals emerging patterns through implicit signals like answer regeneration, which indicates customer dissatisfaction even without explicit thumbs-down feedback. High-transaction products need both approaches simultaneously.

→ NOTABLE MOMENT

Air Canada's chatbot hallucinated a refund policy that didn't exist, and the company had to honor it legally. This incident illustrates why constraining AI autonomy matters: 74% of enterprises cite reliability concerns as their biggest barrier to deploying customer-facing AI products, preferring productivity tools with lower risk.

💼 SPONSORS

- Merge: https://merge.dev/lenny
- Strela: https://strela.io/lenny
- Brex: https://brex.com

🏷️ AI Product Development, Evaluation Frameworks, AI Agents, Enterprise AI Adoption, Product Leadership
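The agency-control progression described above (routing suggestions, then drafts, then autonomous resolution) can be sketched as explicit autonomy levels with a human-in-the-loop gate. This is a minimal illustration, not code from the episode; the enum names and the `requires_human` helper are assumptions for illustration.

```python
from enum import IntEnum


class Autonomy(IntEnum):
    """Hypothetical autonomy levels for a support bot, mirroring the
    progression: suggestions -> drafts -> autonomous resolution."""
    SUGGEST_ROUTING = 1  # AI suggests a queue; a human routes the ticket
    DRAFT_RESPONSE = 2   # AI drafts a reply; a human edits and sends it
    AUTO_RESOLVE = 3     # AI resolves the ticket without human review


def requires_human(level: Autonomy) -> bool:
    # Only the highest level removes the human from the loop.
    return level < Autonomy.AUTO_RESOLVE


print(requires_human(Autonomy.DRAFT_RESPONSE))  # True
print(requires_human(Autonomy.AUTO_RESOLVE))    # False
```

Keeping the level as an explicit, ordered value makes the "gradually increase agency as trust builds" step a one-line config change rather than a rewrite.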
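The "implicit signals" idea from the evaluation-balance insight can also be sketched: flag sessions where users repeatedly regenerate answers, treating that as a dissatisfaction proxy even without explicit thumbs-down feedback. The event schema, field names, and threshold below are assumptions for illustration, not a real monitoring API.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class ChatEvent:
    # Hypothetical event-log entry; field names are assumptions.
    session_id: str
    action: str  # e.g. "answer", "regenerate", "thumbs_down"


def implicit_dissatisfaction(events: list[ChatEvent], threshold: int = 2) -> set[str]:
    """Return sessions where the user regenerated answers at least
    `threshold` times -- an implicit dissatisfaction signal."""
    regen_counts = Counter(
        e.session_id for e in events if e.action == "regenerate"
    )
    return {sid for sid, n in regen_counts.items() if n >= threshold}


events = [
    ChatEvent("s1", "answer"),
    ChatEvent("s1", "regenerate"),
    ChatEvent("s1", "regenerate"),
    ChatEvent("s2", "answer"),
]
print(implicit_dissatisfaction(events))  # {'s1'}
```

In a real deployment this kind of signal would feed the CCCD analysis loop alongside explicit feedback, surfacing emerging failure patterns that pre-deployment evals missed.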
