
Chris Grams


We have 1 summarized appearance for Chris Grams so far. Browse all podcasts to discover more episodes.

Featured On 1 Podcast

All Appearances

1 episode
Software Engineering Daily

Hype and Reality of the AI Coding Shift

59 min · CVP of Corporate Marketing at Sonar

AI Summary

→ WHAT IT COVERS

Sonar's Chris Grams and Manish Kapoor discuss their State of Code Developer Survey with host Matt Merrill, revealing that 42% of developer code is already AI-generated, that 96% of developers distrust that code, and how deterministic verification layers like SonarQube address the resulting quality and security gap.

→ KEY INSIGHTS

- **The Verification Gap:** 42% of developer code is currently AI-generated, projected to reach 65% by 2027, yet 96% of developers do not fully trust AI-produced code. Engineering leaders must implement deterministic verification layers — tools that produce consistent, low-false-positive results — before shipping AI-generated code into production environments.
- **The Great Toil Shift:** AI eliminates traditional toil tasks like writing documentation and tests, but replaces them with new toil: reviewing and verifying AI-generated code. Developers using AI daily spend roughly the same total time on toil as those who do not, and 38% report that reviewing AI code is harder than reviewing human-written code.
- **Shadow AI Risk:** 35% of developers access AI tools through personal accounts rather than corporate-sanctioned platforms, exposing organizational IP and data to ungoverned third-party systems. Engineering leaders should establish governance policies that account for agentic workflows, where multiple agents exchange code, prompts, and context data simultaneously.
- **LLM Selection Beyond Benchmarks:** Standard coding benchmarks measure only functional correctness. Sonar's leaderboard at sonar.com/leaderboard evaluates 35 models across security vulnerabilities, bug density, cognitive complexity, and cyclomatic complexity per million lines of code. Higher-performing models often produce more verbose, complex code, making holistic evaluation across all dimensions necessary before selecting a model for production use.
- **Experience-Based AI Usage Divergence:** Junior developers report 40% productivity gains from AI tools, yet 66% report that generated code often appears correct while being functionally broken. Senior developers predominantly use AI for understanding legacy code and writing documentation. Both groups benefit from maintaining existing robust code review processes, which apply equally to AI-generated and human-written code.

→ NOTABLE MOMENT

Sonar's analysis revealed that as LLM performance improved through most of 2024, code complexity scaled linearly alongside it — smarter models wrote more verbose, harder-to-maintain code. Only around November did top models begin producing performant code without the corresponding complexity increase, signaling a meaningful shift in model behavior.

💼 SPONSORS

- Estuary — https://estuary.dev
- Unblocked — https://getunblocked.com/sedaily
- Guardsquare — https://www.guardsquare.com

🏷️ AI Code Generation, Static Analysis, Developer Productivity, LLM Evaluation, Agentic Workflows
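The cyclomatic-complexity metric mentioned above can be approximated with a short static-analysis pass. The sketch below is a simplified, hypothetical illustration using Python's `ast` module — counting branch points per McCabe's definition — and is not Sonar's actual implementation, which also weighs cognitive complexity and nesting.

```python
import ast

# Node types that add an independent path through the code
# (a simplified take on McCabe's cyclomatic complexity).
BRANCH_NODES = (ast.If, ast.For, ast.While,
                ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Count decision points in `source`, plus one for the entry path."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, BRANCH_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            return "found"
    return "none"
"""
# Two `if`s, one `for`, and one boolean operator -> 4 decisions + 1
print(cyclomatic_complexity(snippet))  # → 5
```

A verification layer of this kind is deterministic: the same input always yields the same score, which is what makes it usable as a gate for AI-generated code where model output varies run to run.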
