AI-Driven Development
Code Is Cheap, Trust Is Expensive
AI makes code generation fast and cheap. The bottleneck has moved from writing code to reviewing, testing, and trusting changes. When only one stage of the pipeline accelerates, the overall speed gain is smaller than expected.
- Humans carry all consequences — bugs, security issues, regressions.
- More code is not more progress. It is more liability.
- Shadow accidental complexity: AI generates code that looks clean and passes linters but adds surface area humans must still carry.
Two Kinds of Complexity
- Essential complexity: business rules, edge cases, domain boundaries. No tool removes it.
- Accidental complexity: inconsistent patterns, boilerplate, architecture friction. Rules and conventions fix it.
When accidental complexity exceeds essential complexity, the team stops solving business problems. AI makes the accidental side so easy that teams skip the essential side — generating code before understanding the domain creates systems that are well-written but wrong.
Conventions Are the Foundation
Without conventions, AI amplifies existing mess. Each generation drifts. Same problem solved three different ways across files.
- Define architecture conventions in files AI can read — project instructions (CLAUDE.md), rules files, custom skills.
- Framework and tooling choices should favor what AI tools know well. Popular, well-documented frameworks produce better AI output.
- With conventions, AI reduces accidental complexity at scale. The difference is not the AI. The difference is the constraints you give it.
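A conventions file can be a short markdown document checked into the repo. A minimal sketch of a hypothetical CLAUDE.md (the paths and rules here are illustrative, not from a real project):

```markdown
# Project conventions (read by AI tools before generating code)

## Architecture
- Features live in vertical slices under `src/features/<name>/`.
- A slice may import from `src/shared/` but never from another slice.

## Patterns
- All API handlers validate input with the shared schema helpers.
- Errors are returned as typed results, never thrown across module boundaries.

## Style
- Follow the repo lint config; do not add new lint exceptions.
```

The point is not the specific rules but that they are written down where every generation reads them, so the same problem gets solved the same way every time.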
Framework Selection
Default to a framework unless the task is so simple that one adds unnecessary complexity. Popular frameworks have already solved much of the accidental complexity.
- Pick frameworks that are popular, well-documented, and well-known by LLMs. AI tools follow established patterns better and produce higher-quality output.
- Reasoning through every problem from scratch is slow and rarely yields a better answer. Leverage what the ecosystem already solved.
- This reinforces the conventions principle: popular frameworks = better AI output = less drift.
The Engineer as Architect
The engineer's role shifts from code producer to architect-reviewer.
- Humans own essential complexity: business logic, architecture, tradeoffs, boundary design.
- AI handles accidental complexity: boilerplate, scaffolding, repetitive patterns, convention enforcement.
- AI is a thinking partner, not just an execution tool. Use AI to research, propose options, surface tradeoffs, and challenge assumptions. But the decision about which option fits the business, the team, the situation — that stays human.
- AI can skip domain thinking and produce well-written code that is conceptually wrong.
The spec is the interface between architect and builder. Engineers define what the system does (interfaces, schemas, constraints). AI decides how to implement it. The spec covers the what and why — not production code or step-by-step instructions. Over-specifying implementation details produces bugs and removes AI’s creative freedom.
Iterate, do not perfect upfront. The first spec is never complete. Write the spec, let AI implement, discover gaps from running code, revise the spec, rebuild. Implementation is cheap — use that speed to iterate on understanding.
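A spec of this kind can be sketched as types plus a contract. All names below (`Invoice`, `submitInvoice`) are hypothetical, invented for illustration; the engineer writes the types and constraints, and the implementation at the bottom is shown only as a minimal stand-in for the builder's part:

```typescript
// Hypothetical spec fragment: the engineer defines the "what".

interface Invoice {
  id: string;
  customerId: string;
  amountCents: number; // constraint: must be positive
  dueDate: string;     // constraint: ISO date, not in the past at submission
}

type SubmitResult =
  | { ok: true; invoiceId: string }
  | { ok: false; reason: "invalid_amount" | "past_due_date" };

// The "how" belongs to the builder (human or AI). A minimal
// stand-in implementation, only to make the contract concrete:
function submitInvoice(invoice: Invoice, today: Date): SubmitResult {
  if (invoice.amountCents <= 0) {
    return { ok: false, reason: "invalid_amount" };
  }
  if (new Date(invoice.dueDate) < today) {
    return { ok: false, reason: "past_due_date" };
  }
  return { ok: true, invoiceId: invoice.id };
}
```

The spec stops at the types and constraints; storage, validation libraries, and error-handling strategy stay open as implementation decisions.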
Structure Enables Trust
Structure is a safety mechanism, not aesthetics. Vertical slices with strict dependency rules make it quick to answer "what does this change affect?"
- Module ownership, reduced change amplification, and clear public APIs make AI-generated code trustable at scale.
- Tests shift from catching bugs to defining correctness. They become the contract between past and future changes.
- Quality moves earlier in the process. If validation happens late, the system chokes.
Documentation as Context
Keep documentation in the code, where AI tools can read and understand decisions in context. External wikis AI cannot access are wasted context.
- Why a decision was made, what constraints exist, what success looks like — this is what AI and humans both need.
- Product-engineering collaboration happens before AI writes code, not after.
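One lightweight form is a decision note kept next to the code it affects, so both humans and AI tools pick it up as context. The scenario and names below are invented for illustration:

```typescript
// Decision: orders are soft-deleted, never removed.
// Why: finance needs seven years of history (regulatory constraint).
// Constraint: every query over orders must filter out deleted rows.
// Success: no report ever counts a soft-deleted order.
export interface Order {
  id: string;
  deletedAt: Date | null; // null = active; set instead of deleting the row
}

// Hypothetical helper that enforces the decision in one place.
export function activeOrders(orders: Order[]): Order[] {
  return orders.filter((o) => o.deletedAt === null);
}
```

The why, the constraint, and the success criterion all travel with the code, so a future generation pass sees them without leaving the file.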
Decision Criteria
When evaluating how to use AI in a workflow: match autonomy to risk. High-convention, well-tested areas can be automated. Low-convention, high-ambiguity areas need human thinking first, AI execution second. If AI is generating code before the domain is understood, the process is backwards.
When deciding how much spec to write: match process to risk. Bug fixes and small features need plan mode only. Large cross-cutting features need a formal spec before plan mode. The question: can one engineer hold the entire change in their head? If yes, skip the spec.
Anti-patterns
- Generating code before understanding the domain — produces well-written systems that are conceptually wrong.
- No conventions, just prompts — each generation drifts. Same problem solved differently across files.
- Treating AI output as trusted by default — AI-generated code needs the same review as human code. More code surface = more review burden.
- Optimizing only the coding step — the pipeline is write → review → test → deploy. Speeding up one step without improving the others creates bottlenecks.
- Over-specifying implementation details — specs with production code become a source of bugs. The spec prescribes what, not how.
- Treating the first spec as final — specs improve through iteration. Build, discover gaps, revise. Trying to write the perfect spec upfront is waterfall with a new name.