The Rework Podcast

AI Revisited

35 min episode · 2 min read

Topics

Artificial Intelligence

AI-Generated Summary

Key Takeaways

  • Agentic AI vs Auto-complete: AI agents running autonomously in terminal mode from scripts represent a fundamental shift from intrusive auto-complete tools in code editors. These agents execute bash commands, run programming tasks, and search the web independently for anywhere from ten seconds to several minutes, producing work developers can keep rather than interrupting their typing flow and thought process.
  • HackerOne Security Review Automation: 37signals deployed AI to pre-screen security vulnerability reports from external researchers, separating low-quality submissions from legitimate reports. The system draws on historical report data and researcher credibility scores to identify high-value reports requiring human attention, functioning like spam detection applied to security research quality rather than a filter that blocks content outright.
  • Console Access Log Auditing: Biweekly reviews of programmer access to production systems and customer data are automated through AI analysis. The system verifies that staff members access data only within their granted customer permissions, handling the tedious work of checking 42-plus access events per review cycle that previously required manual human verification to ensure compliance.
  • 80-20 Code Generation Rule: AI agents consistently deliver 80 percent complete solutions that developers can either iterate on with the agent or finish manually. The remaining 20 percent requires human refinement, but even a far less complete draft provides value, as long as the output contains no counterproductive errors that would take more cleanup time than writing from scratch.
  • Multi-Model Draft Strategy: Using OpenCode terminal interface to query Claude, Gemini, OpenAI, and open-weight models simultaneously on the same problem generates five different working solutions in 2.5 to 6 minutes each. This approach surfaces diverse implementation ideas and working prototypes faster than single-model iteration, even when the final solution combines elements rather than accepting any single draft.

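The access-audit takeaway above boils down to a permissions check over logged events. Here is a minimal sketch of that check; the names (`AccessEvent`, `GRANTS`) and data shapes are illustrative assumptions, not 37signals' actual schema:

```python
# Hypothetical sketch of the biweekly console-access audit described above.
# Schema and names are invented for illustration.
from dataclasses import dataclass

@dataclass
class AccessEvent:
    staff: str
    customer_account: str

# Customer accounts each staff member has been granted access to (assumed).
GRANTS = {
    "alice": {"acme-corp", "globex"},
    "bob": {"initech"},
}

def find_violations(events):
    """Return events where staff accessed an account outside their grants."""
    return [e for e in events
            if e.customer_account not in GRANTS.get(e.staff, set())]

events = [
    AccessEvent("alice", "acme-corp"),  # within grant
    AccessEvent("bob", "globex"),       # outside grant -> flagged
]
violations = find_violations(events)
```

In the workflow described on the episode, an AI agent performs this kind of check across all 42-plus events per cycle and surfaces only the exceptions for human review.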
What It Covers

David Heinemeier Hansson explains 37signals' shift toward AI adoption after years of hesitation. The breakthrough came from agentic AI models running autonomously in terminal environments rather than auto-complete tools, plus dramatic improvements in model capability during late 2025. He details internal applications for security review automation and debugging assistance.
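The security review automation mentioned above can be pictured as a scoring filter over incoming reports. This is a hedged sketch of the pattern only; the weights, thresholds, and feature names are invented, not 37signals' actual system:

```python
# Hypothetical pre-screen for vulnerability reports, in the spirit of the
# spam-detection analogy from the episode. All weights are illustrative.
def triage_score(researcher_signal_ratio, report_length, has_reproduction_steps):
    """Combine researcher track record and report features into a rough score."""
    score = 0.6 * researcher_signal_ratio            # historical valid-report rate, 0..1
    score += 0.2 if has_reproduction_steps else 0.0  # reproducible reports rank higher
    score += 0.2 if report_length > 200 else 0.0     # very short reports are usually noise
    return score

def needs_human_review(score, threshold=0.5):
    # Above the threshold, route the report straight to a human reviewer.
    return score >= threshold
```

The point of the pattern is triage, not rejection: low-scoring reports are deprioritized rather than discarded, matching the "spam detection applied to quality" framing.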


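The multi-model draft strategy from the takeaways is essentially a parallel fan-out of one prompt to several models. Here is a minimal sketch of that pattern; `query_model` is a stand-in for real provider API calls (OpenCode drives this from the terminal, so this only illustrates the shape of the workflow):

```python
# Sketch of fanning one prompt out to several models in parallel.
# Model names and query_model are placeholders, not real API calls.
from concurrent.futures import ThreadPoolExecutor

MODELS = ["claude", "gemini", "openai", "open-weights-a", "open-weights-b"]

def query_model(model, prompt):
    # Placeholder: a real version would call the provider's API here.
    return f"[{model}] draft solution for: {prompt}"

def draft_all(prompt):
    """Collect one independent draft per model for the same problem."""
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        futures = {m: pool.submit(query_model, m, prompt) for m in MODELS}
        return {m: f.result() for m, f in futures.items()}

drafts = draft_all("dedupe rows in a CSV")
```

Because the drafts run concurrently, the wall-clock cost is roughly one model's runtime rather than five, which is what makes comparing several working prototypes practical.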
Notable Moment

David describes an AI agent using a C debugger to diagnose a Rails console bug, working through multiple failed hypotheses before identifying the exact problematic commit and generating a working patch. He acknowledges he theoretically possessed the skills to solve this but would not have invested the hours required, choosing instead to work around the issue indefinitely.
