
Lesson 10: Advanced Patterns

You've Covered the Fundamentals. Now for the Expert Moves.


Pattern 1: The Living Spec

Instead of restating requirements in chat each time, maintain a spec file that Claude always reads.

project/
  CLAUDE.md          ← coding rules
  SPEC.md            ← current feature specifications
  DECISIONS.md       ← architectural decisions and why

SPEC.md — The current sprint's requirements in precise, testable form:

## Current Sprint: Subscription Management

### Feature: Pause Subscription
- User can pause for 1, 2, or 3 months
- Paused subscription does not bill during pause period
- Pausing resets billing cycle to resume date
- Email sent on pause, on upcoming resume (3 days before), on resume
- API: `POST /subscriptions/:id/pause { months: 1|2|3 }`
- Returns 409 if subscription is already paused or cancelled

### Acceptance Criteria
- [ ] Unit tests for billing cycle reset logic
- [ ] Integration test for the full pause → resume flow
- [ ] Email templates in /templates/email/subscription-*

Now Claude never misunderstands requirements — it reads them directly.
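
Because the spec is phrased in testable terms, its bullets translate almost directly into tests. A minimal sketch of the first acceptance criterion, assuming a vitest runner and a hypothetical resetBillingCycle helper (neither is prescribed by the spec):

// billing-cycle.test.ts — sketch only; resetBillingCycle is a hypothetical helper
import { describe, it, expect } from "vitest";
import { resetBillingCycle } from "./billing";

describe("pause subscription: billing cycle reset", () => {
  it("moves the next billing date to the resume date", () => {
    const resumeDate = new Date("2025-06-01");
    const nextBillingDate = resetBillingCycle(new Date("2025-03-15"), resumeDate);
    expect(nextBillingDate).toEqual(resumeDate);
  });
});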

DECISIONS.md — Architectural decisions with rationale:

## ADR-001: Use Prisma over raw SQL
Date: 2024-01-15
Status: Accepted
Context: Needed type-safe DB access
Decision: Use Prisma ORM
Rationale: Team TypeScript familiarity, good migration tooling
Consequences: Must use Prisma client everywhere, no raw queries

## ADR-002: Event-driven service communication
Date: 2024-02-01
Status: Accepted
Context: Services were calling each other directly, creating coupling
Decision: All cross-service communication via event bus
Rationale: Decoupling, auditability, testability

When Claude asks "why do you do it this way?" — it's in DECISIONS.md.


Pattern 2: Test-Driven Claude

Write the tests first, let Claude write the implementation.

You: "Here are the tests I've written for the rate limiter.
     Make them pass. Don't modify the tests."
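
For example, the tests you hand over might look like the following. This is a sketch, not a required interface: the RateLimiter class, its constructor options, and the vitest runner are all assumptions you would replace with your own.

// rate-limiter.test.ts — written by you, before any implementation exists
import { describe, it, expect } from "vitest";
import { RateLimiter } from "./rate-limiter";

describe("RateLimiter", () => {
  it("allows requests under the limit", () => {
    const limiter = new RateLimiter({ limit: 2, windowMs: 1000 });
    expect(limiter.allow("user-1")).toBe(true);
    expect(limiter.allow("user-1")).toBe(true);
  });

  it("rejects the request that exceeds the limit", () => {
    const limiter = new RateLimiter({ limit: 2, windowMs: 1000 });
    limiter.allow("user-1");
    limiter.allow("user-1");
    expect(limiter.allow("user-1")).toBe(false);
  });

  it("tracks each key independently", () => {
    const limiter = new RateLimiter({ limit: 1, windowMs: 1000 });
    expect(limiter.allow("user-1")).toBe(true);
    expect(limiter.allow("user-2")).toBe(true);
  });
});

Claude's job is now unambiguous: make these pass without touching them.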

Benefits:

  • You specify the exact interface and behavior
  • Claude can't "solve" it by doing the wrong thing
  • You catch misunderstandings before implementation
  • You end up with tested code

This works especially well for:

  • Pure functions with clear inputs/outputs
  • API endpoints (test the contract, not the implementation)
  • Data transformations
  • Validation logic

Pattern 3: The Pair Review Loop

After Claude implements something non-trivial, do a structured review:

You: "You've implemented the changes. Now put on your code reviewer hat.
     Review your own implementation for:
     1. Correctness (does it actually solve the problem?)
     2. Edge cases (what inputs would break it?)
     3. Security (any injection, auth, or data exposure issues?)
     4. Performance (any obvious bottlenecks?)

     Then suggest the top 2-3 improvements."

Claude will often catch its own mistakes when asked to review its work. This is faster than finding bugs yourself.


Pattern 4: The Scratchpad

For complex reasoning tasks, give Claude explicit permission to think:

You: "I want you to use a scratchpad approach.
     First, think through this problem step by step — show your reasoning.
     Identify the risks and unknowns.
     Consider at least 2 approaches.
     Then commit to one approach and implement it."

The externalized reasoning helps Claude catch logical errors before they become code errors. It also shows you when Claude's understanding diverges from yours.


Pattern 5: The Constraint Game

Artificially constrain Claude to improve output quality:

Simplicity constraint:

"Implement this in the fewest possible lines. No helper functions. No abstraction." (Forces you to see if the simple solution is obvious)

Performance constraint:

"Implement this assuming it will be called 100,000 times per second." (Surfaces performance considerations early)

Security constraint:

"Implement this as if it will be attacked. Walk through every attack vector." (Better than a security review after the fact)

Testability constraint:

"Implement this so that every piece of logic can be unit tested in isolation." (Naturally produces better architecture)


Pattern 6: Continuous Improvement via Post-Mortems

After any session where something went wrong (Claude misunderstood, had to redo work, made a bad architectural decision), do a 2-minute post-mortem:

You: "We had to redo the auth middleware twice.
     Why did that happen, and what should we add to CLAUDE.md
     to prevent it next time?"

Claude will diagnose the root cause and suggest specific, actionable CLAUDE.md additions. Over time, your CLAUDE.md becomes a distilled record of every mistake — never repeated.


Pattern 7: Scaffolding New Features

When starting a new feature from scratch, use a structured scaffolding prompt:

You: "I need to build a [feature name].

Before writing any code:
1. List all the components this feature needs (API, DB, types, UI, tests)
2. Identify which components already exist vs. need to be created
3. Identify dependencies between components (what order to build in)
4. Flag any decisions I need to make before you start
5. Estimate which parts are risky or uncertain

Then get my approval before proceeding."

This surfaces architecture questions before you're 5 hours into implementation.


Pattern 8: The "Explain Like I'm Reviewing the PR" Request

When you want to understand a change Claude made:

You: "Explain the changes you just made as if you're writing a PR description.
     Include: what changed, why, what the risks are, and how to test it."

This gives you a mental model of the change that's much easier to review than reading a diff.


Pattern 9: Ambient Context Files

Beyond CLAUDE.md, maintain context files Claude can reference:

GLOSSARY.md — Project-specific terminology:

- **Merchant** — A business using our platform (not just any user)
- **Settlement** — The transfer of funds from platform to merchant
- **Float** — Money held in platform accounts before settlement
- **ACH** — Automated Clearing House, used for bank transfers

API.md — Your API conventions:

All endpoints return: `{ data: T | null, error: string | null, meta: { page, total } }`
Authentication: Bearer token in Authorization header
Errors: HTTP status codes + error field in body
Pagination: `?page=1&limit=20` (max limit: 100)
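
If you want that envelope enforced rather than just documented, a shared type can mirror it. A sketch under the convention above (the file and type names are illustrative):

// api-envelope.ts — hypothetical shared type mirroring the API.md convention
export interface ApiResponse<T> {
  data: T | null;                          // payload on success, null on error
  error: string | null;                    // message on failure, null on success
  meta: { page: number; total: number };   // pagination info, per the convention above
}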

TESTING.md — Test patterns:

Unit tests: Pure functions, no DB, no HTTP, fast
Integration tests: Real DB (test container), mock HTTP
E2E tests: Playwright, test environment only
Coverage target: 80% unit, critical paths have integration tests

Reference these in CLAUDE.md so Claude knows they exist.
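
A short pointer block is enough. A hedged example of what that might look like in CLAUDE.md:

## Reference docs
- GLOSSARY.md — project terminology; use these terms exactly
- API.md — response envelope, auth, and pagination conventions
- TESTING.md — test tiers and coverage targets
Read the relevant file before implementing or reviewing anything it covers.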


Pattern 10: The Reflection Loop

At the end of a long session, ask Claude to reflect:

You: "Looking at everything we did today:
     1. What did we build?
     2. What's still incomplete or fragile?
     3. What technical debt did we accumulate?
     4. What should I know before picking this up next session?
     5. What should be added to CLAUDE.md or MEMORY.md?"

This generates a handoff note for future-you (and future-Claude). It also surfaces risks you might not have noticed during implementation.


The Expert Mindset

The difference between a beginner and expert Claude Code user isn't about any single technique. It's about:

  1. Investing upfront. Good CLAUDE.md, good memory, good skills. This compounds.
  2. Reviewing before trusting. Plan mode, self-review requests, structured tests.
  3. Learning from every session. Post-mortems, CLAUDE.md updates, memory cultivation.
  4. Being precise, not verbose. Point to exact files and lines. State constraints explicitly.
  5. Treating Claude as a collaborator, not an oracle. Push back. Ask for alternatives. Refine.

The ceiling on what you can build with Claude Code is mostly determined by how well you've set up the collaborative context. The tools are there. The intelligence is there. Your job is to give it direction.