Lesson 17 of 20

Lesson 17: System Prompt Engineering

What Is a System Prompt?

The system prompt is the persistent context that frames every conversation. It's sent before the first user message and sets the rules of engagement: who Claude is, what it knows, what it's allowed to do, and how it should respond.

If you're building a product on top of Claude, the system prompt is your primary tool for customization. It's the difference between "Claude, the general-purpose assistant" and "Aria, the expert support agent for your SaaS platform."

In Claude Code: CLAUDE.md fills the role of the system prompt for interactive sessions. For API-based applications, you construct the system prompt explicitly in code.
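With the Messages API, the system prompt travels as a top-level `system` field, separate from the list of conversation turns. A minimal sketch of assembling a request body (the model name and `max_tokens` value are illustrative placeholders):

```python
def build_request(system_prompt: str, user_message: str,
                  model: str = "claude-sonnet-4") -> dict:
    """Assemble a Messages API request body. Note that the system
    prompt is its own top-level field, not a message in the list."""
    return {
        "model": model,
        "max_tokens": 1024,
        "system": system_prompt,
        "messages": [{"role": "user", "content": user_message}],
    }
```

Keeping the system prompt out of the `messages` list means it persists across every turn without being re-sent as user content.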


The Four-Part Structure

Effective system prompts follow a consistent structure:

1. Role        → Who Claude is and what expertise it brings
2. Context     → What it's being used for, who the users are
3. Rules       → What it must and must not do
4. Output      → How responses should be formatted

This order matters. Role and context help Claude interpret the rules correctly; rules stated before the role and context are established are more likely to be misread or applied inconsistently.
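The four parts can be assembled mechanically in the recommended order. A sketch of a hypothetical helper (the function name and section labels are illustrative):

```python
def assemble_system_prompt(role: str, context: str,
                           rules: list[str], output: list[str]) -> str:
    """Join the four sections in order: role and context first,
    then rules, then output format."""
    sections = [
        role.strip(),
        context.strip(),
        "Rules:\n" + "\n".join(f"- {r}" for r in rules),
        "Response format:\n" + "\n".join(f"- {o}" for o in output),
    ]
    return "\n\n".join(sections)
```

Building the prompt from named parts also makes it easy to vary one section (say, the context) while keeping the rules identical across products.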


1. Role Definition

Give Claude a specific identity and domain of expertise. Vague roles produce vague behavior.

# Weak
You are a helpful assistant.

# Strong
You are a senior Python engineer with 10 years of experience in production
systems, distributed computing, and API design. You specialize in code review, 
performance optimization, and debugging complex backend issues.

The specific role primes Claude to draw on relevant knowledge and to frame its responses at the right technical level.


2. Context

Tell Claude who it's talking to and why. This shapes tone, assumed knowledge, and appropriate response depth.

You are helping developers at a fintech startup. Users are experienced engineers
(3-10 years) who want direct answers, not tutorials. They are working in a 
Python/FastAPI + PostgreSQL stack. They have read access to production logs and
are often debugging live issues.

3. Rules and Constraints

This is the heart of your system prompt. Be precise and enumerate rules explicitly.

Rules:
- Always cite the specific line or function when referring to code
- When suggesting a fix, show the before and after
- If you're unsure, say so — do not guess at production behavior
- Do not suggest architectural changes unless explicitly asked
- Do not add logging, comments, or error handling beyond what was requested
- If a question is outside Python/backend scope, say so and stop

Never:
- Hallucinate library APIs — if unsure, note the uncertainty
- Suggest using a different language or framework
- Write unit tests unless specifically requested

4. Output Format

Specify exactly what responses should look like. If you want JSON, ask for JSON. If you want code only (no prose), say so.

Response format:
- Lead with the answer or fix, then explain if needed
- Use markdown code blocks with language hints for all code
- Keep explanations under 200 words unless complexity demands more
- If the answer is "no" or "that won't work," say that directly first

Dynamic vs Static System Prompts

Static system prompts are the same for every user and every session. Good for consistent product behavior.

Dynamic system prompts are constructed at runtime with user-specific or session-specific information injected.

def build_system_prompt(user: User, project: Project) -> str:
    return f"""
You are a code assistant for {user.name} working on {project.name}.

Project stack: {project.tech_stack}
User's role: {user.role}
Current sprint goal: {project.current_sprint_goal}

[... static rules ...]
"""

Dynamic injection lets you give Claude relevant context without the user having to repeat it every session.


Multi-Turn Conversation Design

System prompts also shape how Claude behaves across a conversation. Include rules for multi-turn behavior explicitly:

Conversation rules:
- Remember facts the user tells you within this session
- If the user corrects you, update your understanding and do not repeat the mistake
- If a conversation goes longer than 10 turns, periodically summarize what you know
  about the user's goal to confirm alignment
- Do not ask more than one clarifying question per message
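Rules like the 10-turn summary can also be reinforced from application code rather than relying on the model's memory alone. A hypothetical sketch that appends a reminder to every Nth user turn (the nudge text and threshold are illustrative):

```python
SUMMARY_NUDGE = ("Before answering, briefly summarize what you know "
                 "about the user's goal and confirm it is still correct.")

def maybe_add_nudge(messages: list[dict], every_n_turns: int = 10) -> list[dict]:
    """Append a reminder to the latest message every N user turns,
    reinforcing the system prompt's periodic-summary rule."""
    user_turns = sum(1 for m in messages if m["role"] == "user")
    if user_turns and user_turns % every_n_turns == 0:
        last = dict(messages[-1])  # copy so the original list is untouched
        last["content"] = f"{last['content']}\n\n[{SUMMARY_NUDGE}]"
        return messages[:-1] + [last]
    return messages
```

Pairing an in-prompt rule with an application-side nudge like this is more robust than either alone on long conversations.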

Common Mistakes

Vague instructions produce vague behavior:

# Bad
Be professional and helpful.

# Good
Use a direct, technical tone. No pleasantries. Start every response with the answer.

Contradictory rules confuse Claude:

# Contradictory
Be concise. Explain your reasoning thoroughly.

# Resolved
Be concise. Include reasoning only when the answer might be surprising or non-obvious.

No examples for format-sensitive tasks:

# Bad
Return the results as JSON.

# Better
Return results in this exact JSON format:
{"status": "ok" | "error", "result": <value>, "message": <string or null>}
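Because this format is machine-checkable, the application can validate every response against it. A sketch of such a validator (the function name is illustrative; the checks mirror the schema above):

```python
import json

def validate_response(raw: str) -> dict:
    """Parse a model response and check it against the schema:
    status must be "ok" or "error", result must be present,
    and message must be a string or null."""
    data = json.loads(raw)
    if data.get("status") not in ("ok", "error"):
        raise ValueError(f"bad status: {data.get('status')!r}")
    if "result" not in data:
        raise ValueError("missing result")
    if data.get("message") is not None and not isinstance(data["message"], str):
        raise ValueError("message must be a string or null")
    return data
```

Rejecting malformed responses at this boundary turns silent format drift into a visible error you can retry or log.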

Testing System Prompts

Treat system prompts like code. Test them deliberately:

  1. Happy path test: Does the standard use case work as expected?
  2. Edge case test: What happens with unusual inputs or requests?
  3. Adversarial test: Can a user jailbreak the persona or get Claude to break the rules?
  4. Regression test: After changing the system prompt, does previous behavior hold?

Maintain a test suite of example conversations and expected outputs. When you update the system prompt, run the suite.
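A minimal harness for such a suite might pair each test message with a predicate on the reply. A hypothetical sketch, where `get_reply` stands in for your actual model call and the example predicates are illustrative:

```python
# Each case: (user_message, predicate the reply must satisfy)
TEST_SUITE = [
    ("What does this traceback mean?",
     lambda r: "```" in r or "line" in r.lower()),      # should cite code
    ("Rewrite this service in Go.",
     lambda r: "outside" in r.lower() or "scope" in r.lower()),  # should refuse
]

def run_suite(get_reply, suite=TEST_SUITE) -> list[str]:
    """Run every case; return the messages of failing cases.
    An empty list means the suite passed."""
    failures = []
    for message, check in suite:
        reply = get_reply(message)
        if not check(reply):
            failures.append(message)
    return failures
```

Predicates are deliberately loose: exact-match assertions on model output are brittle, so check for the properties the system prompt guarantees rather than the precise wording.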


Key Takeaways

  • System prompts set the persistent context: role, background knowledge, rules, and output format
  • Structure prompts: role → context → rules → output format
  • Be explicit and enumerate rules — vague instructions produce vague behavior
  • Dynamic system prompts let you inject user/session-specific context at runtime
  • Include multi-turn rules to shape conversation behavior across long sessions
  • Test system prompts like code: happy path, edge cases, adversarial inputs
  • Contradictions in rules produce inconsistent behavior — resolve them explicitly