Agent Architecture / Prompt Engineering for Agents
Overview · Intermediate · 11 min

Agent Prompt Design

How agent system prompts differ from chatbot prompts: structuring instructions, defining tool use policies, controlling chain-of-thought, and setting behavioral boundaries.

Quick Reference

  • Agent system prompts have three sections: role/identity, tool usage rules, and output format constraints
  • Explicitly list when to use each tool and when NOT to — ambiguous descriptions cause random tool selection
  • Use chain-of-thought instructions ('Think step by step before acting') to improve reasoning quality on complex tasks
  • Define behavioral boundaries: what the agent should refuse, when to escalate to a human, and how to handle ambiguity
  • Keep the system prompt under 2000 tokens — longer prompts dilute instruction-following and increase cost
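The structure above can be sketched in code. This is a hypothetical example (the section names, tool names, and rules are illustrative, and the token count is a crude character-based heuristic, not a real tokenizer):

```python
def build_agent_prompt(role, tool_rules, output_rules):
    """Assemble the three sections: identity first, then tool policy, then format."""
    sections = [
        "# Role\n" + role,
        "# Tool usage rules\n" + "\n".join(f"- {r}" for r in tool_rules),
        "# Output format\n" + output_rules,
    ]
    return "\n\n".join(sections)

def rough_token_count(text):
    # Crude heuristic: ~4 characters per token for English prose.
    # Use a real tokenizer (e.g. tiktoken) to enforce the budget in production.
    return len(text) // 4

prompt = build_agent_prompt(
    role="You are a billing support agent. You resolve invoice questions.",
    tool_rules=[
        "Use search_invoices ONLY when the user references a specific invoice.",
        "Do NOT call issue_refund without an invoice ID confirmed by the user.",
        "Think step by step before choosing a tool.",
        "If the request is ambiguous or involves more than $500, escalate to a human.",
    ],
    output_rules="Reply with a JSON tool call, or plain text for the final answer.",
)

# Enforce the token budget from the guidance above.
assert rough_token_count(prompt) < 2000
```

Note that the tool rules pair each "use when" with an explicit "do NOT use when", per the second bullet above.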

Chatbot vs Agent Prompts

| Aspect | Chatbot prompt | Agent prompt |
| --- | --- | --- |
| Primary goal | Generate helpful text responses | Choose and execute the right tools in the right order |
| Tool guidance | None (no tools) | Explicit rules for when to use each tool |
| Iteration control | Single turn | Multi-step reasoning with stop conditions |
| Error handling | "I'm sorry, I can't help with that" | Retry with a different tool, escalate, or degrade gracefully |
| Output format | Free-form text | Structured output (JSON, tool calls) plus text |
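The last row of the table is worth making concrete. A minimal sketch of the structured output an agent prompt might mandate, where every model turn is either a JSON tool call or a final answer (the schema and tool name here are illustrative assumptions, not a standard):

```python
import json

# Hypothetical schema: each turn is a tool call or a final answer.
tool_call = {
    "type": "tool_call",
    "tool": "search_invoices",            # illustrative tool name
    "args": {"customer_id": "c_123"},     # illustrative arguments
}
final_answer = {"type": "final", "text": "Invoice 42 has been paid."}

# The agent runtime parses the JSON and dispatches on "type";
# free-form text could not be routed this way.
parsed = json.loads(json.dumps(tool_call))
print(parsed["tool"])  # → search_invoices
```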

The biggest mistake is reusing a chatbot prompt for an agent. Agent prompts are operating manuals: they must tell the LLM how to use tools, when to stop iterating, and how to handle failures.
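The agent-side behaviors that the prompt documents (multi-step iteration with stop conditions, retry on tool failure, escalation, graceful degradation) can be sketched as a minimal loop. Everything here is a toy assumption: `call_model`, the action schema, and the tools are hypothetical stand-ins for a real model API and toolset:

```python
def run_agent(task, call_model, tools, max_steps=5, max_retries=2):
    """Iterate model -> tool until a final answer or a stop condition fires."""
    history = [("user", task)]
    for _ in range(max_steps):
        action = call_model(history)       # dict matching the prompt's output format
        if action["type"] == "final":
            return action["text"]          # stop condition: model declares it is done
        if action["type"] == "escalate":
            return "Escalated to human: " + action["reason"]
        tool = tools[action["tool"]]
        for attempt in range(max_retries + 1):
            try:
                history.append(("tool", tool(**action["args"])))
                break
            except Exception as err:
                if attempt == max_retries: # degrade gracefully: surface the error
                    history.append(("tool_error", str(err)))
    return "Stopped: step budget exhausted without a final answer."

# Toy scripted model: call the lookup tool once, then answer.
def scripted_model(history):
    if any(role == "tool" for role, _ in history):
        return {"type": "final", "text": "Invoice 42 is paid."}
    return {"type": "tool", "tool": "lookup", "args": {"invoice_id": 42}}

tools = {"lookup": lambda invoice_id: f"invoice {invoice_id}: paid"}
print(run_agent("Is invoice 42 paid?", scripted_model, tools))  # → Invoice 42 is paid.
```

The `max_steps` cap and the explicit `final`/`escalate` action types are what the "iteration control" and "error handling" rows of the table look like at runtime; without them, a failing tool would loop the agent indefinitely.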