Lesson 14: Debugging Strategies
Debugging is where Claude Code earns its keep. It can read error messages, trace through code, form hypotheses, make fixes, and verify them — all in a loop. But getting the best debugging results requires knowing how to frame the problem. This lesson covers a systematic debugging framework and the prompting patterns that make Claude an effective debugging partner.
The Systematic Debugging Framework
Follow this five-step process for any bug:
1. Reproduce
Before anything else, make the bug happen reliably:
> I'm seeing a 500 error when I submit the contact form with
a long message (over 1000 characters). Run the app and
reproduce this by sending a POST to /api/contact with a
message body longer than 1000 characters.

If you cannot reproduce it, Claude cannot fix it. Give Claude the exact steps, inputs, and environment that trigger the bug.
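A reproduction is most useful when it is rerunnable. Here is a minimal sketch of a reproduction script, assuming the app listens on localhost:3000, the endpoint accepts JSON, and Node 18+ provides the global fetch (all assumptions; adjust to your setup):

```javascript
// repro.js — rerunnable reproduction (Node 18+ for the global fetch).
// The base URL and payload shape are assumptions; match your app.
function makePayload() {
  return {
    email: "test@example.com",   // hypothetical required field
    message: "x".repeat(1500),   // well over the 1000-character limit
  };
}

async function reproduce() {
  const res = await fetch("http://localhost:3000/api/contact", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(makePayload()),
  });
  // A 500 here confirms the bug; a 2xx after the fix confirms the repair.
  console.log("status:", res.status);
}

reproduce().catch((err) => console.error("request failed:", err.message));
```

A script like this doubles as the verification step later: run it once to confirm the bug, again after the fix to confirm the repair.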
2. Isolate
Narrow down where the problem lives:
> The error happens somewhere between the route handler and
the database insert. Add temporary logging at each step
in the contact form submission flow to find exactly where
it fails.

3. Diagnose
Understand why the failure occurs:
> The error is in the database insert. Read the schema for the
contacts table and compare it to the data we're trying to
insert. What's the mismatch?

4. Fix
Make the targeted repair:
> The message column is VARCHAR(1000) but we're sending longer
messages. Change the column to TEXT type and add a migration.
Also add input validation in the route handler to give a clear
error instead of a 500.

5. Verify
Confirm the fix works and nothing else broke:
> Run the reproduction steps again to confirm the bug is fixed.
Then run the full test suite to make sure nothing else broke.

Sharing Error Messages Effectively
The most common debugging mistake is giving Claude too little context. An error message alone is rarely enough — Claude needs the stack trace, the input that caused it, and the surrounding context.
Bad: Partial Information
> I'm getting "TypeError: Cannot read property 'name' of undefined".
Fix it.

Good: Full Context
> I'm getting this error when I call getUserProfile with a
user ID that exists in the database but has no linked account:
TypeError: Cannot read property 'name' of undefined
at getUserProfile (src/services/user.js:45:23)
at handleRequest (src/routes/profile.js:12:18)
at Layer.handle (node_modules/express/lib/router/layer.js:95:5)
The input user ID is "abc-123". This user exists in the users
table but has no row in the accounts table.

The full stack trace tells Claude exactly which file and line to look at. The input context tells it what scenario triggers the bug. The domain knowledge (user exists but has no linked account) tells it what the code should handle.
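With this context, the repair is usually a guard at the line that dereferences the missing relation. A hedged sketch of the kind of fix that resolves this specific trace; the function shape and field names are invented for illustration:

```javascript
// src/services/user.js:45 was reading account.name on an undefined
// account. Optional chaining plus a fallback turns the crash into a
// well-defined "no linked account" state the route can render.
// (This function shape is hypothetical; adapt it to the real code.)
function getUserProfile(user, account) {
  return {
    id: user.id,
    accountName: account?.name ?? null,  // null instead of a TypeError
  };
}

// A user that exists but has no accounts row no longer crashes:
// getUserProfile({ id: "abc-123" }, undefined)
//   → { id: "abc-123", accountName: null }
```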
Log Reading Strategies
For bugs in running systems, logs are your primary evidence:
> Read the last 50 lines of logs/app.log. Look for errors or
warnings around 14:30 today. Summarize what was happening
and what went wrong.

For structured logs:
> Search the application logs for entries with requestId "req-7f3a"
and trace the full request lifecycle. What happened between
the request arriving and the error response?

Claude excels at correlating log entries across time and across services. Give it a timeframe, a request ID, or a user ID to anchor the search.
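You can also pre-filter the evidence yourself before handing it over. A sketch of a request-ID filter, assuming newline-delimited JSON logs with a `requestId` field (an assumption; match your actual log format):

```javascript
// trace-request.js — collect every log entry for one request ID so the
// request lifecycle reads in order. Assumes newline-delimited JSON with
// a `requestId` field; adjust the parsing to your actual log format.
function traceRequest(logText, requestId) {
  return logText
    .split("\n")
    .filter(Boolean)                       // drop blank lines
    .map((line) => JSON.parse(line))
    .filter((entry) => entry.requestId === requestId);
}

// Usage: print the lifecycle for request "req-7f3a" from logs/app.log.
// const { readFileSync } = require("node:fs");
// for (const e of traceRequest(readFileSync("logs/app.log", "utf8"), "req-7f3a")) {
//   console.log(e.timestamp, e.level, e.message);
// }
```

Pasting the filtered output into the prompt keeps Claude anchored on one request instead of the whole log.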
Bisecting Issues
When something worked before and now it does not, use git history to find the breaking change:
> This test was passing on Monday but fails now. Look at the git
log for the past week and identify commits that touched
src/services/payment.js or its dependencies. Which commit
likely introduced the regression?For a more systematic approach:
> Use git bisect to find the commit that broke the
"checkout flow" test. The last known good commit is abc1234
(Monday). The first known bad commit is HEAD.
Run the test at each bisect step.

Claude can automate the bisect process: check out a commit, run the test, report pass/fail, and continue until it finds the culprit.
Rubber Duck Debugging
Sometimes the best debugging strategy is explaining the problem clearly. Claude makes an excellent rubber duck — it asks clarifying questions and its responses often trigger your own insights.
> I'm confused about a race condition. Let me explain the flow
and you tell me where the timing issue might be:
1. User clicks "submit" which calls POST /api/orders
2. The handler creates an order record in the database
3. It then sends a message to the payment queue
4. A worker picks up the message and charges the card
5. The worker updates the order status to "paid"
Sometimes the order shows "pending" even after payment succeeds.
Where's the race condition?

Claude will analyze the flow and likely identify that step 5 (the worker updating status) can race with the user refreshing the page between steps 2 and 5. Explaining the problem to Claude forces you to articulate the exact sequence, which often reveals the answer.
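One hardening Claude might suggest for this family of bugs is to make status transitions monotonic, so a delayed or duplicate write can never move an order backwards and a refresh after payment settles on "paid". This is one option, not the only fix (polling or pushing the status update to the client are others), and the status names below are assumptions:

```javascript
// Make order status transitions one-way: a late or duplicate write can
// never overwrite "paid" with "pending". Status names are assumptions.
const STATUS_RANK = { pending: 0, paid: 1, fulfilled: 2 };

function applyStatus(order, incoming) {
  // Ignore any update that would move the order backwards.
  if (STATUS_RANK[incoming] <= STATUS_RANK[order.status]) {
    return order;
  }
  return { ...order, status: incoming };
}
```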
Common Debugging Prompts That Work
The Trace Prompt
> Trace the execution path when a user hits GET /api/dashboard.
Start from the route handler and follow every function call.
What data does each function expect and what does it return?

The Hypothesis Prompt
> I think the bug is caused by the cache returning stale data
after a user updates their profile. Verify or disprove this
hypothesis by reading the cache invalidation code in
src/cache/user-cache.js.

The Comparison Prompt
> This endpoint works for regular users but fails for admin users.
Compare the code paths for both user types and find where
they diverge.

The "What Changed" Prompt
> This was working yesterday. Show me everything that changed
in src/ since yesterday's last commit. One of those changes
broke the dashboard.

Example: Debugging a Failing API Endpoint
Here is a realistic debugging session:
> The POST /api/orders endpoint returns a 500 error with this
stack trace:
Error: SQLITE_CONSTRAINT: UNIQUE constraint failed: orders.order_number
at Database.run (node_modules/better-sqlite3/lib/methods/run.js:28:13)
at OrderRepository.create (src/repositories/order.js:15:22)
at OrderService.createOrder (src/services/order.js:34:30)
This happens intermittently — maybe 1 in 100 requests.
The order_number is generated by generateOrderNumber() in
src/utils/ids.js. Read that function and explain why it
might produce duplicates.

Claude will read the function, likely find that it uses a timestamp or random number with insufficient entropy, and suggest a fix (UUIDs, a sequence, or a retry mechanism).
> Good analysis. Fix the generateOrderNumber function to use
UUIDs instead of timestamps. Update the tests. Run them
to make sure the fix works.

When to Give More Context vs Narrow the Focus
Give more context when:
- Claude keeps making wrong assumptions about the codebase
- The bug involves multiple interacting systems
- The error message alone does not point to the root cause
- You have domain knowledge that is not in the code
Narrow the focus when:
- Claude is reading too many files and losing track
- The bug is clearly in one specific function or module
- You have already isolated the problem but need help with the fix
- Claude's responses are getting long and unfocused
# Giving more context
> Also read the middleware in src/middleware/auth.js — the
request might be failing because the auth token is missing
a required claim.
# Narrowing focus
> Stop looking at the database layer. The bug is definitely
in the parseInput function on line 42 of src/utils/parser.js.
Focus only on that function.

The art of debugging with Claude is knowing when to zoom in and when to zoom out.
Key Takeaways
- Follow the five-step framework: Reproduce, Isolate, Diagnose, Fix, Verify
- Always share the full stack trace and the input that triggered the error, not just the error message
- Use git bisect through Claude to find regressions in the commit history
- Rubber duck debugging works: explaining the problem to Claude often reveals the answer
- Use targeted prompts (trace, hypothesis, comparison, "what changed") for different debugging scenarios
- Give more context when Claude is making wrong assumptions; narrow the focus when it is reading too broadly
- Let Claude iterate: reproduce the bug, fix it, re-run the test, and confirm the fix in one session