
Claude Code

5 episodes · 3 podcasts

All Appearances

AI Summary

→ WHAT IT COVERS

Brian Scanlan, Senior Principal Engineer at Intercom, details how the company doubled engineering throughput (measured in merged PRs per R&D head) over nine months using Claude Code. He demonstrates the internal skills repository, telemetry infrastructure, session analysis tooling, and cultural frameworks that enabled a 150+ person R&D organization to ship at 2x velocity while maintaining or improving code quality.

→ KEY INSIGHTS

- **Velocity Measurement:** Use merged pull requests per R&D head as a leading indicator of AI adoption effectiveness. Intercom tracked this metric from baseline through a nine-month Claude Code rollout, achieving 2x throughput. The raw PR count grew even higher, since headcount also increased during this period. A crude metric beats no metric when building organizational accountability around AI tooling adoption.
- **Skills Distribution via IT Systems:** Deploy Claude Code plugins through internal IT infrastructure rather than relying on Claude's native plugin sync mechanism, which proved unreliable across hundreds of laptops. Pushing skill files directly to disk via IT management tools eliminates version drift, reduces debugging overhead, and ensures every engineer runs identical, current tooling without manual intervention or update failures.
- **LLM Judges for Quality Regression Detection:** After Claude Code began generating low-quality PR descriptions (summarizing code rather than intent), Intercom built an LLM judge to evaluate months of historical PR-description data. The judge confirmed a downward trend, prompting a mandatory "create PR" skill enforced via hooks that block the GitHub CLI. Post-intervention, the LLM judge confirmed quality had returned to above-baseline levels.
- **Session Telemetry for Org-Level Diagnostics:** Collect Claude Code session JSON files, anonymize them, upload them to S3, and build user-level dashboards showing session efficiency percentiles, skill invocation patterns, and dropout rates. This surfaces systemic problems, like an MCP never triggering correctly, that are invisible without aggregate data. Honeycomb works well for real-time skill-invocation tracking across the engineering organization.
- **Self-Improving Skills via Feedback Loops:** Build skills that update themselves when they encounter novel solutions. Intercom's flaky-spec skill fixes a test, documents the new pattern back into the skill file, then fans out to find all similar failing tests. This compounds from roughly 1x performance at launch to 10x or higher as the skill accumulates domain-specific patterns, without requiring ongoing human maintenance.
- **Tech Debt as AI Onboarding Strategy:** When introducing AI coding tools to an engineering team, direct engineers to spend one month fixing everything they hate about the codebase. The combination of low-friction execution and high emotional payoff builds AI tool fluency while delivering measurable quality improvements. Intercom migrated an entire Go microservice to Ruby in a single Claude Code session, previously a multi-month roadmap item requiring organizational consensus.

→ NOTABLE MOMENT

Scanlan described how Intercom's CI system became ten times more expensive almost overnight once Claude Code adoption accelerated PR volume. After fixing those infrastructure bottlenecks, code review became the new constraint. The implication: AI coding tools will sequentially expose every weak point in a delivery pipeline, requiring teams to fix bottlenecks they previously never stressed.

💼 SPONSORS

- Celigo (https://celigo.com/howiai)
- Cursor (https://chatprd.ai/howiai)

🏷️ Claude Code, Engineering Velocity, AI Coding Tools, Developer Productivity, Internal Developer Platforms, Technical Debt
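The telemetry pipeline described above can be sketched in a few lines. This is a minimal illustration only: the session-file fields (`sessionId`, `userEmail`, `messages`, `toolsUsed`) are invented for the example, not Claude Code's real schema, and Intercom's actual tooling is not public.

```python
import hashlib
import json

def anonymize_session(raw: str, salt: str = "org-secret") -> dict:
    """Strip direct identifiers from a session record before it is
    uploaded for aggregate analysis.

    The field names here (sessionId, userEmail, messages) are
    illustrative assumptions, not Claude Code's real schema.
    """
    session = json.loads(raw)
    user = session.pop("userEmail", "unknown")
    # Replace the identity with a stable salted hash so per-user
    # dashboards still work without exposing who the user is.
    session["userHash"] = hashlib.sha256((salt + user).encode()).hexdigest()[:12]
    # Keep only message metadata; drop prompt/response text entirely.
    session["messages"] = [
        {"role": m.get("role"), "toolsUsed": m.get("toolsUsed", [])}
        for m in session.get("messages", [])
    ]
    return session

record = json.dumps({
    "sessionId": "s1",
    "userEmail": "dev@example.com",
    "messages": [{"role": "user", "content": "fix the flaky spec",
                  "toolsUsed": ["Bash"]}],
})
clean = anonymize_session(record)
```

In a real pipeline the cleaned records would then be batched to S3 (for example via boto3 `put_object`) and aggregated into the percentile and skill-invocation dashboards the episode describes.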

AI Summary

→ WHAT IT COVERS

LinkedIn editor Daniel Roth, a career journalist with zero software engineering background, demonstrates his Claude Code workflow for building and shipping iOS apps to the App Store. He uses two named AI agents, Bob the builder and Ray the reviewer, operating across dual terminal tabs to produce production-grade code on weekends.

→ KEY INSIGHTS

- **Dual-agent review system:** Run two separate Claude Code instances in parallel terminal tabs with distinct personas: Bob handles building, with instructions to plan before coding and document everything in markdown files, while Ray acts as a security-obsessed senior engineer who must approve every plan before Bob writes a single line of code. The human breaks any tie between them.
- **Markdown-first memory management:** Claude Code loses context across sessions, especially for weekend-only builders. Counter this by instructing Claude to log every decision, plan, and feature into named markdown files stored in a project docs folder. Starting each new session with "read the retention plan MD" restores full context without re-explaining the entire project history.
- **Feature prioritization via a standing Claude chat:** Maintain a dedicated Claude web or desktop project chat solely for roadmap management. Feed it user feedback continuously and use a prompt that scores each feature on two one-to-three scales, customer happiness and growth impact, alongside estimated build hours, creating a ranked table to select tasks that match available weekend time.
- **Branch discipline as non-negotiable:** Always instruct Bob to build in a Git branch, never directly on main. Roth learned this after a failed merge cost him weeks of debugging. The rule is enforced in Bob's system prompt, making branching automatic rather than a manual decision the builder must remember under time pressure.
- **"Picky customer" as the correct mental model for vibe coders:** Non-technical builders are neither PMs nor architects; they function as their own most demanding customer. This reframe clarifies the actual job: walk through what the AI built, identify what feels wrong, and state preferences with conviction regardless of whether the AI agrees, rather than trying to manage scope or understand implementation details.

→ NOTABLE MOMENT

Roth describes managing Claude Code like supervising a brilliant but forgetful intern: it repeatedly suggests solutions that prior sessions already proved impossible due to iOS Live Activity API constraints, requiring the human to redirect it back to established boundaries rather than relitigate solved problems from scratch.

💼 SPONSORS

- WorkOS (https://workos.com)
- Vanta (https://vanta.com/howiai)

🏷️ Claude Code, Vibe Coding, iOS Development, AI Agents, Non-Technical Builders
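Roth's roadmap-scoring prompt is easy to reproduce as plain code. A minimal sketch, assuming the two one-to-three scales and hour estimates arrive as a list of dicts (the field names and sample backlog are invented for illustration):

```python
def rank_features(features, hours_available):
    """Score each feature on the two 1-3 scales the episode describes
    (customer happiness, growth impact), then return only the items
    that fit in the available weekend hours, best first."""
    scored = [
        {**f, "score": f["happiness"] + f["growth"]}
        for f in features
        if f["build_hours"] <= hours_available
    ]
    return sorted(scored, key=lambda f: f["score"], reverse=True)

backlog = [
    {"name": "widget", "happiness": 3, "growth": 1, "build_hours": 4},
    {"name": "referrals", "happiness": 2, "growth": 3, "build_hours": 6},
    {"name": "redesign", "happiness": 3, "growth": 3, "build_hours": 20},
]
# With 8 weekend hours, "redesign" is filtered out (20h > 8h) and
# "referrals" (score 5) outranks "widget" (score 4).
plan = rank_features(backlog, hours_available=8)
```

The point of keeping this in a standing Claude chat rather than a script is that the scores themselves come from conversation over accumulated user feedback; the ranking arithmetic is the trivial part.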

AI Summary

→ WHAT IT COVERS

James Dickerson, founder of Vibe Marketers, demonstrates building a complete inbound marketing campaign using Claude Code, including competitor research, positioning analysis, a landing page, and an interactive lead magnet, in under 50 minutes, without manually writing a single line of code.

→ KEY INSIGHTS

- **Claude Code vs. Claude Desktop:** Claude Code runs locally on your machine, giving it simultaneous access to your file system, terminal, GitHub, and external APIs like Perplexity and FireCrawl. Claude Desktop is catching up but lacks this depth. For marketers who want to build and deploy real assets (landing pages, lead magnets, SEO pages), Claude Code is the more capable environment.
- **Skills as portable instruction manuals:** Skills are markdown (.md) files encoding expert-level instructions for specific tasks: copywriting, front-end design, competitor analysis. Dickerson's direct-response copy skill encodes a century of direct-response principles. Skills are portable across Claude Code, Claude Desktop, and OpenAI's Codex, and can be shared via GitHub or zip files without technical setup.
- **Orchestrator skill as campaign autopilot:** An orchestrator skill routes Claude Code through a sequenced marketing workflow automatically (competitor scraping via FireCrawl, gap analysis, positioning-angle selection, lead magnet ideation, and copy generation), saving outputs to a brand folder for persistent memory across sessions and eliminating the need to re-explain context each time.
- **Claude.md file as project memory and self-improvement system:** The Claude.md file stores project context, brand voice, priorities, and session learnings. After each session, prompt Claude Code to update it with what worked, what didn't, and how to improve collaboration. This creates a compounding system where the agent becomes more effective over time and its intelligence becomes shareable with teammates via Git.
- **AI ping-pong between Claude Code and Codex:** Running Claude Code and OpenAI's Codex simultaneously in two terminal windows accelerates debugging. When Claude Code gets stuck in a long context loop and loses perspective, paste the bug into Codex, which runs on a fresh context, to get a root-cause diagnosis and fix quickly, then return the solution to Claude Code to implement.

→ NOTABLE MOMENT

Dickerson revealed he never touched a terminal until roughly one year ago and had no development background. Within that period, he built his entire website, a SaaS product, lead magnets, a programmatic SEO strategy, and a paid skills library, all through Claude Code alone.

💼 SPONSORS

- HubSpot (https://hubspot.com)

🏷️ Claude Code, AI Marketing Automation, Vibe Marketing, Lead Generation, Prompt Engineering
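Skills of the kind described above are just markdown files with instructions. A much-abbreviated, hypothetical sketch of what such a direct-response copy skill might look like; the frontmatter layout follows Anthropic's published SKILL.md convention (a `name` and `description` in YAML, then instructions), but all of the content below is invented for illustration:

```markdown
---
name: direct-response-copy
description: Write landing-page copy using direct-response principles.
  Use when the user asks for sales copy, headlines, or lead magnets.
---

# Direct Response Copy

1. Lead with the reader's problem, not the product.
2. One idea per headline; one call to action per page.
3. Back every claim with a specific proof point.
4. Save finished copy to the brand folder so later sessions
   can reuse voice and positioning.
```

Because the whole skill is a folder of plain text, sharing it "via GitHub or zip files" really is the entire distribution mechanism.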

AI Summary

→ WHAT IT COVERS

CJ Hess demonstrates his custom AI development workflow using Claude Code, including Flowy, a self-built tool that converts JSON specifications into visual flowcharts and UI mockups. He shows how he uses model-to-model comparison with GPT Codex to review Claude's code, creates custom skills for automation, and bypasses permissions to accelerate development cycles.

→ KEY INSIGHTS

- **Custom Visual Planning Tool:** Flowy transforms JSON files into visual flowcharts and UI mockups, replacing hard-to-read ASCII diagrams in markdown plans. Claude Code uses custom skills to write JSON specifications that render as interactive diagrams on localhost, allowing developers to iterate visually before writing code. The tool was built almost entirely through prompting and serves as living documentation.
- **Iterative Skill Development:** Skills improve through usage rather than upfront design. When Flowy generates incorrect output, like white text on pastel backgrounds, the workflow is to update the skill file with new rules about spacing, colors, or layout. Each feature addition to Flowy includes updating documentation and related skills, creating a self-improving system that gets better with each project.
- **Model-to-Model Code Review:** Using GPT Codex to review Claude-generated code catches different kinds of issues than human review does. Codex excels at identifying code smells, suggesting refactoring approaches, and finding discrepancies between specifications and implementation. The workflow involves checking git diffs against four criteria: plan accuracy, code smells, alternative approaches, and opportunities to consolidate duplicate code patterns.
- **Permission Bypass for Solo Work:** Terminal aliases like "Kevin" route to Claude Code with full bypass permissions enabled, eliminating approval friction during solo development. This approach works when Git workflows and team rules provide safety nets for dangerous operations. For collaborative work or PR creation, permission checks remain active, but individual feature development runs unrestricted to maximize velocity.
- **Code-as-Specification Pattern:** Generate throwaway code to define requirements, then prompt the model to write a clean implementation plan based on that reference. This treats the initial code generation as a specification document rather than production code. When vibe coding creates monster diffs with unclear structure, rebuilding from scratch with proper planning produces more maintainable, extensible results than iterative cleanup.

→ NOTABLE MOMENT

During the live recording, an autonomous Claude bot named Polly unexpectedly joined the podcast session despite the laptop being closed, interrupting the demonstration. The hosts joked about the bot taking over before continuing, highlighting how unpredictable it can be to run AI agents with extensive system permissions and autonomous capabilities in production environments.

💼 SPONSORS

- Orkes (https://orkes.io)
- Atlassian Rovo (https://rovo.com)

🏷️ Claude Code, AI Development Tools, Custom Skills, Code Review Automation, Visual Planning
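Flowy itself isn't public, but the JSON-to-flowchart idea is easy to sketch. A minimal stand-in under stated assumptions: the node/edge schema below is made up, and it emits Mermaid flowchart text rather than Flowy's interactive localhost renderer.

```python
import json

def spec_to_mermaid(spec_json: str) -> str:
    """Convert a Flowy-style JSON spec (hypothetical schema: 'nodes'
    with id/label, 'edges' with from/to) into Mermaid flowchart
    syntax that any markdown viewer can render."""
    spec = json.loads(spec_json)
    lines = ["flowchart TD"]
    for node in spec["nodes"]:
        # One declaration per node: id plus a quoted display label.
        lines.append(f'    {node["id"]}["{node["label"]}"]')
    for edge in spec["edges"]:
        lines.append(f'    {edge["from"]} --> {edge["to"]}')
    return "\n".join(lines)

spec = json.dumps({
    "nodes": [{"id": "a", "label": "Plan"}, {"id": "b", "label": "Review"}],
    "edges": [{"from": "a", "to": "b"}],
})
diagram = spec_to_mermaid(spec)
```

The design point is the same one Hess makes: the model only has to emit well-formed JSON, and a dumb deterministic renderer does the rest, which is far more reliable than asking the model to draw ASCII art directly.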

AI Summary

→ WHAT IT COVERS

Anthropic's Claude Opus 4.5 and Claude Code represent an inflection point in AI coding capabilities, with expert developers reporting they can now build complex applications autonomously without writing code, fundamentally changing software development workflows.

→ KEY INSIGHTS

- **Autonomous coding threshold:** Claude Opus 4.5 enables building complex apps without ever viewing the code; one Google engineer reported it recreated their entire year of distributed-agent-orchestrator work in one hour, demonstrating production-grade capability.
- **Delegation psychology shift:** AI agents provide the kind of total competency delegation previously experienced only by managers with great teams, allowing users to forget tasks completely while maintaining confidence in quality execution, creating a new psychological relationship with work completion.
- **Agent-native architecture:** Features become prompts rather than code: apps define outcomes in natural language while agents determine execution methods, enabling emergent capabilities and discovering latent user demand in ways traditional software architecture cannot.
- **Post-UI transition emerging:** Next-generation vertical software will be API-first and agent-first, integrating directly into Slack, Teams, or email rather than requiring separate dashboards, since agents don't need the optimized user experiences humans require.

→ NOTABLE MOMENT

A principal engineer at Google spent a year attempting to build distributed agent orchestrators with a misaligned team, then gave Claude Code a problem description and watched it generate equivalent production code in sixty minutes.

💼 SPONSORS

- KPMG
- Superintelligent (https://bsuper.ai)
- ZenCoder (https://zenflow.free)

🏷️ AI Coding, Claude Code, Software Development, AI Agents
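The "features become prompts" pattern can be made concrete with a small sketch. Everything here is an assumption for illustration: the registry, the feature names, and the stub agent are all invented, and a real system would hand the outcome spec to an LLM API rather than a lambda.

```python
# Agent-native pattern: a "feature" is a natural-language outcome
# spec handed to an agent, not hand-written imperative code.
FEATURES = {
    "weekly_digest": (
        "Summarize this week's activity for the user and send it "
        "to their preferred channel (Slack, Teams, or email)."
    ),
    "churn_save": (
        "When a customer's usage drops sharply, draft a personalized "
        "re-engagement message for human approval."
    ),
}

def run_feature(name: str, agent) -> str:
    """Look up the outcome spec and delegate execution to an agent.
    `agent` is any callable prompt -> result; shipping a new feature
    means adding a registry entry, not writing new control flow."""
    return agent(FEATURES[name])

# Stub agent so the sketch is runnable without an API key.
echo_agent = lambda prompt: f"[agent executed: {prompt}]"
result = run_feature("weekly_digest", echo_agent)
```

The contrast with traditional architecture is that the app never encodes *how* the digest is assembled or delivered; the agent decides at run time, which is what allows the emergent behavior the summary describes.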
