# First attempt will be 95% garbage: A staff engineer's 6-week journey with Claude Code

By Vincent Quigley, Staff Software Engineer at Sanity
Published: September 2, 2025

---

## Overview

This article shares an insider's perspective on integrating AI (specifically Claude Code) into software development workflows over six weeks. The author shifts from coding everything personally to delegating 80% of initial implementation to AI, focusing instead on architecture, review, and orchestrating the work of multiple AI agents.

---

## Four Coding Pivots in One Career

- First 5 years: reading books and SDK documentation.
- Next 12 years: googling for crowd-sourced solutions.
- Last 18 months: AI-assisted coding with Cursor.
- Last 6 weeks: full AI delegation with Claude Code.

The shift to Claude Code was rapid; the author became productive within hours.

---

## The Actual AI Development Workflow

AI is used "to think with" during development. Code is rarely right on the first try; successive iterations are essential.

### The Three-Attempt Process

1. **First attempt (95% garbage):** the AI builds initial context. The code is mostly wrong, but learnings are extracted for the next try.
2. **Second attempt (50% garbage):** with approaches now defined, the AI grasps the nuances better, yet still fails to produce usable output about half the time.
3. **Third attempt (workable):** the AI produces starting code that can be iterated on, with the developer continuously reviewing and adjusting.

---

## Overcoming AI Context Limitations

The AI forgets previous sessions (daily "amnesia"), so context must be rebuilt. Two solutions:

- **Claude.md files:** maintain dedicated markdown files capturing architecture decisions, common patterns, gotchas, and documentation links.
- **Tool integrations:** use MCP integrations connecting the AI to tools such as Linear (tickets), Notion/Canvas (docs), databases (read-only), the codebase, and GitHub PR histories, so relevant context is supplied automatically.

Together these move development to starting from "attempt two" instead of "attempt one" every time.
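The layout below is a minimal sketch of what such a Claude.md might contain; the section names and bullet contents are illustrative assumptions, not the author's actual file.

```markdown
# CLAUDE.md — project context for the AI (illustrative sketch)

## Architecture decisions
- API handlers live in `src/api/`; business logic stays in `src/domain/`.

## Common patterns
- Fallible operations return a shared `Result` type; avoid throwing across module boundaries.

## Gotchas
- The test database is reset between suites; don't rely on seed data persisting.

## Documentation links
- Design docs are tracked in Linear tickets; reference ticket IDs, not raw URLs.
```

The point of a file like this is that it is cheap to append to after every session, so the "amnesia" cost is paid once in writing rather than repeatedly in re-explanation.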
---

## Managing Multiple AI Agents

- Run several AI instances in parallel, like a small team.
- Avoid parallelizing work on the same problem area, to prevent confusion.
- Use project management tools (e.g., Linear) for tracking.
- Mark human-edited code explicitly so the AI doesn't confuse machine and human changes.

---

## The AI-Assisted Review Process

The code review pipeline now has three stages:

1. **Claude reviews first:** automates finding missing tests, bugs, and improvement suggestions.
2. **The author reviews critically:** focuses on maintainability, architectural soundness, business logic, and integration.
3. **The team reviews normally:** peers are usually unaware the code is AI-generated; quality standards are unchanged.

This process reduces review rounds and the bias that comes from emotional attachment to one's own code.

---

## Early Experiments with Background Agents

Slack-triggered Cursor agents are being tested on small tasks such as business logic fixes. Results are mixed: some successes, but also limitations such as no private package access, unsigned commits, and a lack of tracking. Automating backlog tickets this way looks promising and is under exploration.

---

## Costs and ROI

AI usage costs roughly $1,000–1,500/month per senior engineer. Benefits include:

- Shipping features 2–3x faster.
- Managing multiple development threads at once.
- Eliminating boilerplate and repetitive tasks.

Expect AI spend per engineer to grow over time as engineers get more efficient at using it.

---

## Challenges and Pitfalls

- **The learning problem:** the AI doesn't learn from past mistakes, which forces extensive documentation and instructions.
- **The confidence problem:** the AI confidently produces incorrect or broken code; human verification is crucial, especially for complex, performance-critical, or security-sensitive areas.
- **The context limit problem:** large codebases overflow AI context windows. Break problems down and provide focused context.
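The "break problems down and provide focused context" advice can be sketched as a small helper: instead of handing the AI an entire repository, select only the files that mention the task's keywords and cap the total size. Everything here (the function name, the characters-as-token-proxy budget, the `.py`-only filter) is an illustrative assumption, not tooling from the article.

```python
import pathlib

def gather_context(root, keywords, max_chars=40_000):
    """Collect only the source files that mention the task's keywords,
    stopping once a rough context budget (in characters, as a cheap
    stand-in for tokens) is exhausted."""
    selected, used = [], 0
    for path in sorted(pathlib.Path(root).rglob("*.py")):
        text = path.read_text(errors="ignore")
        if any(keyword in text for keyword in keywords):
            # Truncate the last file rather than blowing the budget.
            snippet = text[: max_chars - used]
            selected.append((str(path), snippet))
            used += len(snippet)
            if used >= max_chars:
                break
    return selected
```

Pasting only the result of a filter like this into a session keeps each attempt inside the context window and makes the AI's view of the codebase match the problem being solved.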