The blog post "Don't Build Multi-Agents" by Walden Yan, dated June 12, 2025, discusses principles and challenges in building AI agents, warning in particular against multi-agent architectures. Key ideas include:

1. Context Engineering: The author emphasizes the importance of managing context in AI agents. Context engineering means dynamically supplying each model call with the relevant task and conversation context; it goes beyond simple prompt engineering and is crucial for reliability in long-running agents.

2. Problems with Multi-Agent Architectures: Frameworks that run multiple subagents in parallel often fail because of miscommunication and conflicting assumptions between agents. For example, when tasked with building a Flappy Bird clone, two subagents may each misinterpret their subtasks and produce inconsistent, incompatible outputs that a final agent struggles to combine.

3. Principles for Agent Design:
   - Share context fully: Agents should share complete context and action histories rather than isolated messages, ensuring a consistent understanding of the task.
   - Actions carry implicit decisions: Conflicting decisions made by different agents yield fragile results, so dispersed decision-making should be avoided.

4. Preferred Architectures: A single-threaded, linear agent that continuously carries its context forward is simpler and more reliable. For very large tasks, the author proposes introducing a specialized model that compresses long histories into their key details, preserving context without overflowing the window, a method Cognition has explored.

5. Current Real-World Practices: Claude Code, for example, runs subtasks sequentially rather than in parallel to keep context consistent. Earlier coding agents used "edit apply" models, in which a large model wrote edit instructions and a smaller model applied them, an approach prone to errors from ambiguous instructions. Today, a single model typically handles both the decision and the edit in one action.

6. Multi-Agent Communication Challenges: While humans resolve conflicts through dialogue, current AI agents lack the reliability needed for deep multi-agent interaction. Attempts at multi-agent collaboration produce fragile systems with dispersed decisions and poor context sharing. The author expects that future advances in how single-threaded agents communicate with humans will eventually enable safe multi-agent parallelism.

7. Outlook: The author calls for humility and flexibility as principles evolve with the field. Cognition applies these principles internally and invites readers to try its AI software engineer "Devin" at app.devin.ai or to contact the team by email for collaboration.

The post serves as a guide to designing dependable, context-aware AI agents by avoiding multi-agent pitfalls and prioritizing context sharing and coherent decision-making.
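The single-threaded architecture and history-compression idea described above can be sketched roughly as follows. This is a minimal illustration, not the post's actual implementation: `call_model` and `compress_history` are hypothetical stubs standing in for a main agent model and a specialized compressor model, and the character-count budget is an invented placeholder for a real context-window limit.

```python
MAX_CONTEXT_CHARS = 200  # illustrative stand-in for a context-window budget


def call_model(context: str, task: str) -> str:
    """Stub for the main agent model: acts on the full shared context."""
    return f"result({task})"


def compress_history(history: list[str]) -> str:
    """Stub for the specialized compressor model that distills a long
    action history into its key details (here: keep the last few events)."""
    return " | ".join(history[-3:])


def run_agent(tasks: list[str]) -> list[str]:
    """Run subtasks sequentially, carrying one linear history forward."""
    history: list[str] = []
    outputs: list[str] = []
    for task in tasks:
        context = " | ".join(history)
        # When the accumulated history would overflow the context budget,
        # compress it into key details instead of silently dropping it.
        if len(context) > MAX_CONTEXT_CHARS:
            context = compress_history(history)
            history = [context]
        result = call_model(context, task)
        # Every action and its result stays in the single shared history,
        # so later steps see the implicit decisions of earlier ones.
        history.append(f"{task} -> {result}")
        outputs.append(result)
    return outputs
```

Because each subtask runs in sequence against the full accumulated history, this mirrors the sequential-subtask pattern the post attributes to Claude Code, rather than the parallel-subagent pattern it warns against.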