Overview · Intermediate · 11 min
Agent Prompt Design
How agent system prompts differ from chatbot prompts: structuring instructions, defining tool use policies, controlling chain-of-thought, and setting behavioral boundaries.
Quick Reference
- Agent system prompts have three sections: role/identity, tool usage rules, and output format constraints
- Explicitly list when to use each tool and when NOT to — ambiguous descriptions cause random tool selection
- Use chain-of-thought instructions ('Think step by step before acting') to improve reasoning quality on complex tasks
- Define behavioral boundaries: what the agent should refuse, when to escalate to a human, and how to handle ambiguity
- Keep the system prompt under 2000 tokens — longer prompts dilute instruction-following and increase cost
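The three-section structure and the token budget above can be sketched as code. This is a minimal illustration, not a definitive template: the tool names (`lookup_account`, `refund`, `escalate`), the company, and the ~4-characters-per-token heuristic are all assumptions for the example.

```python
# Sketch of an agent system prompt assembled from the three sections:
# role/identity, tool usage rules, and output format constraints.
# All tool names and policies here are hypothetical examples.

ROLE = (
    "You are a support agent for Acme Corp. You resolve billing and "
    "account issues by calling the tools provided."
)

TOOL_RULES = """\
Tools:
- lookup_account(email): use FIRST for any account question. Do NOT guess account details.
- refund(order_id, amount): use ONLY after lookup_account confirms the order. \
Never refund more than $100 without escalating.
- escalate(summary): use when the refund exceeds $100 or two tool calls in a row fail.
"""

OUTPUT_FORMAT = (
    "Think step by step before acting. Respond with exactly one JSON tool "
    'call per turn: {"tool": <name>, "args": {...}}. When the task is done, '
    'respond with {"tool": "finish", "args": {"summary": <text>}}.'
)

def build_system_prompt() -> str:
    """Join the three sections with blank lines between them."""
    return "\n\n".join([ROLE, TOOL_RULES, OUTPUT_FORMAT])

prompt = build_system_prompt()
# Rough budget check: ~4 characters per token is a common heuristic.
assert len(prompt) / 4 < 2000, "system prompt exceeds the ~2000-token budget"
```

Keeping each section in its own constant makes it easy to diff and A/B-test tool rules without touching the role or output format.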
Chatbot vs Agent Prompts
| Aspect | Chatbot prompt | Agent prompt |
|---|---|---|
| Primary goal | Generate helpful text responses | Choose and execute the right tools in the right order |
| Tool guidance | None — no tools | Explicit rules for when to use each tool |
| Iteration control | Single turn | Multi-step reasoning with stop conditions |
| Error handling | 'I'm sorry, I can't help with that' | Retry with different tool, escalate, or degrade gracefully |
| Output format | Free-form text | Structured output (JSON, tool calls) + text |
The biggest mistake is reusing a chatbot prompt for an agent. An agent prompt is an operating manual: it must tell the LLM how to use tools, when to stop iterating, and how to handle failures.
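The agent-side behaviors in the table — multi-step iteration with a stop condition, retrying or escalating on tool failure — can be sketched as a loop. This is a schematic under stated assumptions: `call_model` is a stub standing in for a real LLM API call, and the tool names and failure mode are invented for illustration.

```python
# Minimal sketch of an agent loop with a stop condition and graceful
# degradation on tool failure. The model is stubbed; tools are hypothetical.
import json

MAX_STEPS = 5  # stop condition: never iterate forever

def call_model(history: list[str]) -> str:
    """Stub for the LLM. A real implementation would send `history`
    to a chat-completions API and return the model's tool call."""
    if any(h.startswith("error:") for h in history):
        # The stub 'model' escalates after seeing a tool failure.
        return json.dumps({"tool": "escalate", "args": {"summary": "tool failed"}})
    return json.dumps({"tool": "lookup_account", "args": {"email": "a@example.com"}})

def run_tool(name: str, args: dict) -> str:
    if name == "lookup_account":
        raise RuntimeError("account service unavailable")  # simulated outage
    return f"{name} ok"

def agent_loop(task: str) -> str:
    history = [task]
    for _ in range(MAX_STEPS):
        call = json.loads(call_model(history))
        if call["tool"] == "escalate":
            return "escalated: " + call["args"]["summary"]
        try:
            history.append(run_tool(call["tool"], call["args"]))
        except RuntimeError as exc:
            # Degrade gracefully: record the failure so the model can pick
            # a different tool or escalate on the next step instead of crashing.
            history.append(f"error: {exc}")
    return "gave up after max steps"

print(agent_loop("refund order 123"))  # → escalated: tool failed
```

Note the two boundaries the system prompt must describe for this loop to work: the JSON tool-call format (so `json.loads` succeeds) and the escalation policy (so failures end in a handoff rather than an infinite retry).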