
Chris Kelly

3 episodes
1 podcast

All Appearances

AI Summary

→ WHAT IT COVERS

Tailscale Chief Strategy Officer David Carney outlines how Tailscale is evolving from a VPN replacement into a full networking platform. The conversation covers TSIDP (a private OIDC provider), TSNET (a Go library for building network-native apps), multi-tailnet isolation, and Aperture, Tailscale's new AI gateway that consolidates API keys and logs all LLM interactions with identity attached.

→ KEY INSIGHTS

- **TSIDP for passwordless internal auth:** Tailscale's open-source TSIDP project (github.com/tailscale/tsidp) acts as a private OIDC/OAuth 2.1 endpoint inside your tailnet. Tools like Proxmox that support OIDC can be configured to authenticate silently via TSIDP, eliminating login prompts entirely. Because every Tailscale connection already carries verified user identity, TSIDP simply reflects that identity back to internal apps — no repeated OAuth flows, no password managers needed for self-hosted infrastructure.
- **TSNET turns any Go app into a tailnet node:** TSNET is a Go library that embeds a complete Tailscale networking stack into any Go application. Once compiled in, the app appears as a named node on the tailnet with its own IP address in the CG-NAT range, inherits ACL policies, and gets identity and encryption baked in at layer 3. This eliminates firewall port management, IP whitelisting, and custom authentication systems — Aperture itself is built entirely on TSNET.
- **Aperture solves the API key sprawl problem:** Aperture is Tailscale's early-alpha AI gateway (aperture.tailscale.com) that stores all LLM API keys centrally. Team members point coding agents at a single internal proxy endpoint (e.g., http://ai) instead of holding individual keys. Because every request arrives over Tailscale, the gateway knows the requester's identity automatically, making every API call attributable, auditable, and revocable without disrupting engineering workflows or rotating credentials across dozens of machines.
- **Full LLM session logging enables team-level AI governance:** Aperture logs every API request and response — including full context windows sent on each stateless call — and consolidates them into sessions. Admins can review tool calls, token usage (input, output, cache, reasoning), and prompt patterns across the entire team. This creates a compliance trail linking git commits to specific coding sessions, enables prompt review analogous to code review, and allows security teams to analyze agent behavior both in real time and after the fact.
- **Multi-tailnet isolation replaces complex ACL policy files:** Tailscale now supports multiple independent tailnets within one organization (blog post: "One Organization, Multiple Tailnets"). Rather than managing a single complex policy file where one misconfigured wildcard rule could expose all nodes, teams can spin up separate tailnets per workload — staging, production, per-customer, or per-agent sandbox. API-only tailnets (machine-to-machine, no user identity required) are available now; user-identity tailnets are in beta and accessible to home lab users.
- **Dynamic Client Registration (DCR) removes MCP deployment friction:** MCP's OAuth 2.1 spec calls for Dynamic Client Registration, which allows MCP clients and servers to self-register against an auth endpoint without manual configuration steps. Most existing enterprise IDPs don't support DCR, making large-scale MCP rollouts operationally painful. TSIDP implements DCR natively, enabling MCP servers to spin up, join the tailnet, and register themselves automatically — removing the human-in-the-loop bottleneck that was slowing MCP adoption across organizations in late 2024.
- **MCP spec fatigue caused a strategic pullback worth noting:** After heavy conference engagement through summer and fall 2024, Tailscale deliberately slowed its MCP investment as spec churn accelerated and organizations began pausing implementations. The pattern observed: many companies were adopting MCP as a substitute for an actual AI strategy rather than solving a concrete problem. The practical lesson is to wait for standards to coalesce around a smaller set of stable primitives before building deep integrations — Tailscale pivoted toward the more tangible API key management problem instead.

→ NOTABLE MOMENT

Carney describes how Tailscale uses Aperture internally to log every single coding agent interaction across the company — full prompts, full responses, all tool calls — and then points a coding agent at its own historical logs to analyze how it previously worked. This recursive feedback loop, where an agent reviews its own past sessions, surfaces workflow inefficiencies that would otherwise go unexamined.

💼 SPONSORS

- Fly.io: https://fly.io
- Augment Code: https://augmentcode.com
- NordLayer: https://nordlayer.com/thechangelog
- Squarespace: https://squarespace.com/changelog

🏷️ Tailscale, Zero Trust Networking, AI Gateway, MCP Protocol, Identity Management, Home Lab Infrastructure, API Security
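Aperture itself is in early alpha and its internals are not public, but the identity-attribution idea the summary describes can be sketched. The following is a hypothetical illustration, not Aperture's API: every name here (`CENTRAL_KEYS`, `attribute_request`, the model and user strings) is invented. The point is that because the network layer has already verified who is calling, the gateway can swap in a centrally held provider key and emit an audit record tied to that identity.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of the identity-attribution step an AI gateway like
# Aperture performs. All names and fields are illustrative, not Aperture's API.

CENTRAL_KEYS = {"anthropic": "sk-central-key"}  # held only by the gateway

@dataclass
class AuditRecord:
    user: str       # verified identity supplied by the network layer
    provider: str
    model: str
    timestamp: str

def attribute_request(tailscale_user: str, provider: str, model: str):
    """Swap in the central key and emit an audit record tied to the caller."""
    upstream_headers = {"Authorization": f"Bearer {CENTRAL_KEYS[provider]}"}
    record = AuditRecord(
        user=tailscale_user,
        provider=provider,
        model=model,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return upstream_headers, record

headers, record = attribute_request("alice@example.com", "anthropic", "claude-opus")
print(record.user)  # alice@example.com: attributable without the user holding a key
```

The user never sees `sk-central-key`, so revoking one person's access means updating gateway policy, not rotating credentials across machines.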
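The DCR mechanism mentioned above is standardized in RFC 7591: a client self-registers by POSTing a JSON metadata document to the IdP's registration endpoint. The field names below come from that RFC; the values, the helper name, and the callback URL are hypothetical, and TSIDP's actual endpoint should be taken from its README.

```python
import json

# Sketch of an OAuth 2.0 Dynamic Client Registration (RFC 7591) request body.
# Field names are from the RFC; values and endpoint are hypothetical.

def dcr_registration_body(client_name: str, redirect_uris: list[str]) -> str:
    return json.dumps({
        "client_name": client_name,
        "redirect_uris": redirect_uris,
        "grant_types": ["authorization_code"],
        "response_types": ["code"],
        "token_endpoint_auth_method": "client_secret_basic",
    })

body = dcr_registration_body("mcp-server", ["https://mcp.example.ts.net/callback"])
# POST this JSON to the IdP's registration endpoint; the response carries a
# generated client_id (and optionally client_secret), with no human in the loop.
```

This is the step most enterprise IdPs omit, which is why each MCP server otherwise needs a manually provisioned client entry.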

AI Summary

→ WHAT IT COVERS

Adam Stacoviak of The Changelog interviews Burke Holland from GitHub Copilot about how Claude Opus 4.5, released around December 2024, created a measurable step-function improvement in agentic coding capability. The conversation covers practical AI-assisted development workflows, the economics of subsidized model access, the future of software craftsmanship, and whether developers will be replaced or transformed into polymaths.

→ KEY INSIGHTS

- **Agentic Model Quality Threshold:** Sonnet 3.5 could build agentically but produced sloppy, spaghetti code that stalled on errors neither the developer nor the model could debug — because the developer hadn't written the code themselves. Opus 4.5 crossed a threshold where it one-shots functional native Windows tools using correct WinUI libraries, produces well-structured readable code, and completes full working apps in an afternoon. The practical test: Burke built a screen-capture-to-GIF tool, then extended it into a full screen recording editor in a few hours.
- **Personal Software Economics — The SaaS Killer Pattern:** When a model reaches sufficient capability, replacing paid SaaS subscriptions with custom-built personal software becomes viable in a single afternoon. Burke replaced a paid routing app his wife used for her yard-sign business. Adam replaced a $500–$800/year invoicing service by prompting Claude Opus 4.6 with extended thinking to generate an optimal prompt, then handing that prompt to Augment Code's Auggie CLI overnight — waking to a working Rails app with invoicing, PDF generation, and email built in.
- **Subsidized Token Economics Won't Last:** GitHub Copilot at $40/month offers request-based billing where one agent run doing 6,000 operations counts as a single request, with 1,500 requests per month included. Claude's $200/month max plan supports roughly one billion tokens monthly at an estimated provider cost of $25,000 — a $24,800 subsidy per user. Burke explicitly states this pricing cannot persist indefinitely and developers should maximize usage now, treating the current window as a finite opportunity before costs normalize.
- **Plan Mode as Context Extraction, Not Documentation:** The value of agent plan mode is not producing a written plan — it is forcing the model to surface all the requirements and constraints the developer forgot to specify in the initial prompt. Running four to six planning loops with Opus 4.6 before execution dramatically improves output quality. Burke's current workflow: plan mode in Copilot CLI → autopilot with a custom agent called Anvil → confidence-threshold loop (targeting 95% confidence rather than "done") → verifiable output check via browser skill or unit tests.
- **Multi-Model Orchestration as Standard Workflow:** Within GitHub Copilot, developers can route different subtasks to different models in a single run. Burke's Anvil agent classifies tasks as easy, medium, or hard, then delegates design work to Gemini, code refactoring to GPT-5.3 Codex, and planning/communication to Claude Opus 4.6 — potentially spawning 26 parallel sub-agents on large refactors. The practical framing: use Opus 4.6 as the communicative team lead and GPT-5.3 as the senior engineer who writes the actual code without needing to be pleasant about it.
- **Conceptual Knowledge Accelerates Faster Than Syntax Knowledge:** Developers using AI are learning architectural concepts — ETL pipelines, Medallion architecture (bronze/silver/gold layers), UNIX sockets in Go, gRPC versus REST — at a rate impossible through traditional syntax-focused learning. The mechanism is iterative brainstorming: a developer brings a conceptual direction, the model explains the implementation landscape, the developer iterates, and the concept becomes table stakes within days. This expands rather than contracts developer knowledge, shifting the valuable skill from writing functions to directing architectural decisions.
- **Production Shipping Remains the Unsolved Gap:** Vibe-coded projects proliferate but rarely reach production because deployment, security, architecture decisions, SLA management, and error handling still require substantial developer expertise. The analogy Burke uses: code has never been the hard part — getting software into production always has been, and AI has not changed that. Teams like VS Code's engineering group are actively building AI-assisted workflows specifically for production-quality software, where the editor cannot break for millions of users, and they do not yet have a complete answer.

→ NOTABLE MOMENT

Burke describes letting a GitHub Copilot CLI agent run continuously in a loop for days, autonomously deciding which features to add to a multiplayer game where users roleplay as baby birds. He acknowledges it will likely produce an unwieldy, unshippable result — and frames this as an honest illustration of exactly where autonomous agentic software development currently breaks down at scale.

💼 SPONSORS

- Fly.io: https://fly.io
- Augment Code: https://augmentcode.com
- Squarespace: https://squarespace.com/changelog
- Notion: https://notion.com/changelog

🏷️ Agentic Coding, Claude Opus, GitHub Copilot, AI-Assisted Development, Personal Software, Multi-Model Orchestration, Developer Workflows
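The subsidy figure in the token-economics insight is simple arithmetic on the episode's own estimates, not verified provider pricing. A quick check of those numbers:

```python
# Reproducing the episode's back-of-envelope estimates (not verified pricing).
plan_price = 200                    # Claude max plan, $/month
tokens_per_month = 1_000_000_000    # ~1B tokens claimed per user per month
estimated_provider_cost = 25_000    # estimated $/month to serve those tokens

subsidy = estimated_provider_cost - plan_price
print(subsidy)  # 24800: the per-user monthly subsidy quoted in the episode

cost_per_million = estimated_provider_cost / (tokens_per_month / 1_000_000)
print(cost_per_million)  # 25.0: the $/1M-token cost those estimates imply
```

The implied $25 per million tokens is the assumption doing all the work; if real serving costs are lower, the subsidy shrinks proportionally.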
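Burke's confidence-threshold loop (target 95% confidence rather than "done") can be sketched generically. This is an illustrative skeleton, not Anvil's implementation: the `plan`, `execute`, and `assess` callables stand in for model calls, and the stub evaluator below just feeds in rising scores.

```python
def refine_until_confident(plan, execute, assess, threshold=0.95, max_rounds=6):
    """Loop plan -> execute -> self-assess until confidence clears the threshold.

    `plan`, `execute`, `assess` stand in for agent/model calls; capping the
    number of rounds keeps the loop from running forever on a hard task.
    """
    result, confidence, round_no = None, 0.0, 0
    for round_no in range(1, max_rounds + 1):
        spec = plan(result)          # re-plan using feedback from the last round
        result = execute(spec)
        confidence = assess(result)  # ask the model to score its own output
        if confidence >= threshold:
            break
    return result, confidence, round_no

# Stub demo: self-assessed confidence rises each round, clearing 0.95 on round 4.
history = iter([0.6, 0.8, 0.9, 0.97])
result, conf, rounds = refine_until_confident(
    plan=lambda prev: "spec",
    execute=lambda spec: "draft",
    assess=lambda r: next(history),
)
print(rounds, conf)  # 4 0.97
```

The key design point from the episode is the exit condition: a numeric confidence target forces more planning loops than a binary "is it done?" check would.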
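The multi-model routing insight reduces to a classification step plus a lookup table. A minimal sketch in the spirit of Burke's Anvil agent, with the model assignments taken from the summary above; the table, function name, and default are assumptions, not Copilot's actual routing API:

```python
# Hypothetical task router: classify a subtask, then delegate to the model the
# episode assigns to that kind of work. Not GitHub Copilot's real API.
ROUTES = {
    "design": "gemini",             # design work -> Gemini
    "refactor": "gpt-5.3-codex",    # code refactoring -> GPT-5.3 Codex
    "planning": "claude-opus-4.6",  # planning/communication -> Claude Opus
}

def route(task_kind: str) -> str:
    # Default to the "communicative team lead" model for unclassified work.
    return ROUTES.get(task_kind, "claude-opus-4.6")

print(route("refactor"))  # gpt-5.3-codex
```

On a large refactor, an orchestrator would call `route` once per subtask and fan the work out to parallel sub-agents.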

AI Summary

→ WHAT IT COVERS

Steve Ruiz, founder of TLDraw, joins The Changelog to discuss building and selling an SDK-based infinite canvas business with roughly 20 employees, navigating commercial licensing from MIT to a license-key enforcement model, generating nearly $1M in year-one revenue, and how agentic AI is reshaping both his development workflow and the demand for self-hostable, internally owned software tooling.

→ KEY INSIGHTS

- **SDK Licensing Progression:** Starting MIT-licensed, TLDraw cycled through three distinct commercial license models before finding traction: noncommercial-only, then a watermark-removal model, and finally production license-key enforcement where the SDK disappears without a valid key. The watermark model actively suppressed deal sizes by anchoring customer perception of value to removing a small SVG rather than to the canvas's actual engineering depth. Switching to license-key enforcement removed that anchor entirely.
- **Year-One Revenue Benchmark:** With a bare-bones go-to-market structure, no formal sales team, and fully negotiated individual deals, TLDraw generated close to $1M in its first commercial year. The approach was deliberately manual: negotiate every contract, test price elasticity by raising the ask when customers agreed quickly, and use each deal as a discovery session to understand how deeply the SDK embedded into a customer's product and revenue stream.
- **Scope-Limited Enterprise Deals:** When large enterprises want TLDraw but only for one team or one feature, limiting the license agreement to that specific application or product line allows a lower price point without discounting the full company license. This creates a land-and-expand motion where adjacent teams later request their own agreements, effectively growing revenue inside the account without requiring a renegotiated enterprise-wide contract upfront.
- **Agentic AI Compresses Feature Cycles:** A multi-week engineering project — a ComfyUI-style async image pipeline starter kit — was taken from concept to 80% completion in roughly two hours by handing Claude a description and links to reference products. The remaining work was UX steering and architecture review, not code writing. The full cycle from idea to shipped starter kit ran four to five days versus the anticipated two-engineer, multi-week timeline.
- **Internal Tooling as a New Revenue Category:** Enterprise customers are now approaching TLDraw not for GDPR data-sovereignty reasons but because external SaaS tools expose insufficient API surfaces for AI agents. When a SaaS product's API omits data — such as Notion's version history being inaccessible programmatically — agents cannot function effectively. Customers want source-owned, internally deployable software they can extend freely, creating a new inbound category distinct from traditional self-hosted or SDK licensing.
- **License-Key Enforcement Enables Sales Intelligence:** Requiring a license key for production use, while offering free keys for noncommercial and hobby projects plus a 100-day commercial trial key, generates a structured lead pipeline. Developers who register keys reveal their domain and email, allowing the sales team to identify the likely project owner or budget holder within the organization and initiate outreach — replacing the prior approach of monitoring CDN logs for watermark SVG requests.
- **Performance Optimization as Differentiation:** TLDraw targets 120 frames per second on a React DOM canvas — no Canvas element, just divs and SVGs — through specific micro-optimizations: replacing Lodash set comparisons that silently convert sets to arrays, encoding draw-shape point data as 16-bit floats in fixed-size binary rather than JSON objects, and skipping hover-detection during camera panning on large boards. That last change alone halved frame duration during scroll on dense canvases.

→ NOTABLE MOMENT

Steve described pitching a half-million-dollar annual license to a consultancy, only to learn their sole client was the US Department of Defense. He noted this type of customer — well-funded, requiring long-term reliability, and fully dependent on TLDraw for the project's viability — represents the clearest case where the SDK's absence would have simply killed the product rather than delayed it.

💼 SPONSORS

- Fly.io: https://fly.io
- Augment Code: https://augmentcode.com
- NordLayer: https://nordlayer.com/thechangelog
- Squarespace: https://squarespace.com/changelog

🏷️ SDK Licensing, Infinite Canvas, Developer Tooling, Agentic AI, Internal Tooling, Open Source Monetization, Enterprise Sales
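The 16-bit-float point encoding mentioned in the performance insight is easy to demonstrate. TLDraw's actual wire format isn't specified here, so this is an illustrative sketch using IEEE half precision (the `struct` module's `"e"` format) to show the size win over per-point JSON objects:

```python
import json
import struct

# Sketch of the binary point-encoding idea: pack (x, y) draw points as 16-bit
# floats instead of JSON objects. Illustrative, not TLDraw's actual format.

def encode_points_f16(points: list[tuple[float, float]]) -> bytes:
    flat = [coord for point in points for coord in point]
    return struct.pack(f"<{len(flat)}e", *flat)   # "e" = IEEE half precision

points = [(12.5, 30.0), (13.0, 31.5), (14.25, 33.0)]
binary = encode_points_f16(points)                # 4 bytes per (x, y) point
as_json = json.dumps([{"x": x, "y": y} for x, y in points])

print(len(binary), len(as_json))  # 12 bytes of binary vs dozens of JSON chars
```

The trade-off is precision: half floats carry a 10-bit mantissa, which is ample for freehand stroke coordinates but would not suit data needing exact large values.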
