Agent Architecture/Single-Agent Patterns
Intermediate · 10 min

Reflection & Self-Critique

How to add a self-critique step so the agent evaluates its own output, catches mistakes, and iterates before responding.

Quick Reference

  • Reflection = a second LLM call that critiques the first output and suggests improvements
  • Implement as a conditional edge: after the agent responds, route to a 'reflect' node that scores the output
  • Use structured output for the reflection (score: 1-5, feedback: string, should_retry: boolean)
  • Cap reflection loops at 2-3 iterations — diminishing returns after that, and latency adds up
  • Reflection is most valuable for complex generation tasks (code, analysis) and least valuable for simple lookups
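The routing logic from the bullets above can be sketched in a few lines. This is a minimal, hedged example, not a specific framework API: `Reflection` mirrors the structured output suggested above, and `route_after_reflection` is a hypothetical conditional-edge function that decides between retrying and ending.

```python
from dataclasses import dataclass

MAX_REFLECTIONS = 2  # cap from the notes above; diminishing returns beyond this


@dataclass
class Reflection:
    score: int          # 1-5 quality rating
    feedback: str       # what to fix on the next attempt
    should_retry: bool  # critic's accept/retry decision


def route_after_reflection(reflection: Reflection, iteration: int) -> str:
    """Conditional-edge logic: retry only if the critic asks for it,
    the score is low, and the iteration cap has not been hit."""
    if reflection.should_retry and reflection.score < 4 and iteration < MAX_REFLECTIONS:
        return "retry"
    return "end"
```

Gating on both `should_retry` and the iteration counter matters: without the cap, a persistently unsatisfied critic can loop the agent indefinitely.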

When to Use Reflection

Definition

Reflection adds a self-critique step after the agent generates output. A second LLM call (or the same LLM with a different prompt) evaluates the output against explicit criteria and decides whether to accept it or retry.
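A minimal sketch of that generate-critique-retry loop, with both model calls stubbed out: `generate` and `critique` here are hypothetical placeholders for your actual LLM calls, and the acceptance logic is illustrative only.

```python
from dataclasses import dataclass


@dataclass
class Reflection:
    score: int
    feedback: str
    should_retry: bool


def generate(prompt: str, feedback: str = "") -> str:
    # Placeholder for the first LLM call; a real version would fold
    # the critic's feedback into the prompt on retries.
    suffix = f" (revised per: {feedback})" if feedback else ""
    return f"draft for: {prompt}{suffix}"


def critique(output: str) -> Reflection:
    # Placeholder for the second LLM call, prompted to score the output
    # against explicit criteria and return structured output.
    accepted = "revised" in output
    return Reflection(
        score=5 if accepted else 2,
        feedback="" if accepted else "add error handling",
        should_retry=not accepted,
    )


def reflect_and_retry(prompt: str, max_loops: int = 2) -> str:
    """Generate, then critique-and-regenerate up to max_loops times."""
    output = generate(prompt)
    for _ in range(max_loops):
        reflection = critique(output)
        if not reflection.should_retry:
            break
        output = generate(prompt, feedback=reflection.feedback)
    return output
```

Note that the same loop works whether `critique` is a second model or the same model with a different prompt; the important part is that the critic sees explicit criteria and returns a machine-readable accept/retry decision.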

| Use case | Reflection value | Why |
| --- | --- | --- |
| Code generation | High | The critic can check for syntax errors, missing edge cases, and security issues |
| Research summaries | High | The critic can verify claims against source material |
| Simple Q&A | Low | The answer is either right or wrong; reflection rarely fixes factual errors |
| Tool-based lookups | Low | The tool result is the answer; reflecting on it adds latency without value |
| Creative writing | Medium | Useful for tone and structure; diminishing returns on style refinement |