Where's the Shovelware? Why AI Coding Claims Don't Add Up

Mike Judge, a seasoned software developer with 25+ years of experience, expresses deep frustration over the exaggerated claims about AI coding productivity gains in the software development industry.

Personal Investigation and Findings

Inspired by a METR study showing that developers overestimate AI's productivity boost (a self-reported 20% speed-up was actually a 19% slowdown), Judge conducted a six-week self-experiment comparing AI-assisted coding against manual coding tasks. His data revealed that AI tools do not significantly speed up development; if anything, AI-assisted coding slowed him down by a median of 21%, aligning with the METR study. The difference was not statistically significant, suggesting that no real productivity advantage currently exists from AI coding tools.

Industry Claims vs. Reality

Major AI coding tools advertise massive productivity improvements:
- Cursor: "Extraordinarily productive."
- Claude Code: "Build Better Software Faster."
- GitHub Copilot: "Delegate like a boss."
- Google: claims developers are 25% faster.
- OpenAI: bold claims of coding efficiency.
- Surveys: 14% of developers claim a 10× increase in output.

Despite widespread adoption (60% of developers use AI tools daily, 82% weekly), Judge argues these claims don't translate into actual increased output.

The Missing Shovelware and Software Surge

If AI enabled extreme productivity, the market should be flooded with new apps, tools, games, and other low-effort "shovelware." Analysis of software release trends showed flat or no growth in new software titles across:
- Apple App Store releases
- Android app releases
- Domain name registrations
- Indie game releases on Steam
- GitHub project creation data

No spike or boom corresponds with the rise of AI-assisted coding post-2022/2023, contradicting the narrative of an AI-driven software revolution.
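The self-experiment described above amounts to a paired comparison of task completion times. A minimal sketch of that style of analysis in Python, using made-up timings (not Judge's actual data) and a simple two-sided sign test for significance:

```python
import math
import statistics

# Hypothetical paired timings (hours) for comparable tasks; these numbers
# are illustrative only and are NOT from Judge's experiment.
manual = [2.0, 3.5, 1.2, 4.0, 2.8, 1.9, 3.1, 2.4]
ai     = [2.5, 3.9, 1.1, 4.8, 3.3, 2.2, 3.0, 3.1]

# Per-task ratio: > 1.0 means the AI-assisted run took longer.
ratios = [a / m for a, m in zip(ai, manual)]
median_change = (statistics.median(ratios) - 1.0) * 100  # percent

# Two-sided sign test: under the null hypothesis of no effect, each
# task is equally likely to be faster or slower with AI assistance.
slower = sum(r > 1.0 for r in ratios)
n = len(ratios)
k = min(slower, n - slower)
p_value = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** (n - 1)

print(f"median change: {median_change:+.1f}%  (sign test p = {p_value:.3f})")
```

With a handful of tasks, even a sizable median slowdown can fail to reach significance, which matches the point above: a measured 21% median slowdown that is statistically indistinguishable from zero still rules out the advertised order-of-magnitude gains.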
Real-World Consequences and Pressures

Tech leaders rebrand as "AI-first," use AI productivity narratives to justify layoffs, and lower developer compensation. Developers face pressure to adopt AI tools, even though the tools often feel clunky and slow them down. People fear falling behind or losing their jobs if they don't embrace AI tools.

Key Takeaways

- Developers are not shipping more software than before; shipping volume is the ultimate productivity metric.
- Claims of being 10× more productive due to AI are likely false without solid evidence.
- Those pressured to use AI tools should trust their own experience; if productivity feels hindered, they're not broken.
- Skepticism is warranted regarding the AI productivity hype, and executives should be questioned on their claims.

Common Counterarguments Addressed

- "Learn to prompt properly to become a 10× engineer." No data supports a new class of 10× productivity; if 14% of developers were truly 10×, software output would have doubled, and it hasn't.
- "AI coding tools will improve over time." Billions have been invested, but after years these tools still provide no significant boost in shipping.
- "Early adoption is key to not falling behind." Prompt acceptance rates improve only slightly (29% to 34%) even after six months; adoption doesn't correlate with massive productivity gains.
- "Maybe quality is up, even if speed isn't." Code quality industry-wide has declined, not improved, with less testing and continuous improvement.
- "Domain names and websites are less relevant today." Data shows domain registrations are stable; ego domains remain popular.
- ".AI domain registrations are increasing." This reflects startup hype and pivoting, not general growth in software creation.
- "Most development isn't just coding." Solo developers and new projects still involve code; no large drop in coding-centric software creation is evident.

---

Mike Judge calls for an honest appraisal of AI coding capabilities.