
Prompt Chaining: Sequential LLM Pipelines

Sequential LLM calls with gate functions for conditional flows: each step transforms data for the next, and optional quality gates between steps decide whether to continue.

Quick Reference

  • Prompt chaining = Step A → Gate → Step B → Gate → Step C → output
  • Each step is a focused LLM call with a specific role (extract → validate → format)
  • Gate functions between steps decide whether to continue, retry, or abort
  • Simpler than a full agent loop — no tool calling, no iterative reasoning
  • Implement with LangGraph Functional API (@entrypoint + @task) or Graph API
  • Best for data transformation pipelines where each step has clear input/output
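The shape described above can be sketched in a few lines of plain Python. This is a hedged illustration, not LangGraph code: `call_llm` is a stub standing in for a real model call, and the step and gate names are hypothetical.

```python
# Minimal sketch of a two-step prompt chain with one gate:
# Extract (Step A) -> Gate -> Format (Step B) -> output.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (swap in your provider's client)."""
    return f"<response to: {prompt}>"

def extract(text: str) -> str:
    # Step A: a focused call with one job -- pull out the key facts.
    return call_llm(f"Extract the key facts from:\n{text}")

def validate_gate(facts: str) -> bool:
    # Gate: cheap deterministic check before spending another LLM call.
    return len(facts.strip()) > 0

def format_step(facts: str) -> str:
    # Step B: transform the validated facts into the final shape.
    return call_llm(f"Rewrite these facts as a bulleted summary:\n{facts}")

def run_chain(text: str) -> str:
    facts = extract(text)             # Step A
    if not validate_gate(facts):      # Gate: continue or abort
        raise ValueError("extraction failed quality gate")
    return format_step(facts)         # Step B
```

With LangGraph's Functional API, each step would become a `@task` and `run_chain` the `@entrypoint`, but the control flow stays this simple.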

What Is Prompt Chaining?

[Diagram] Extract (LLM #1) → Validate (Gate) → Format (LLM #2) → Result (Output)

Step A → Gate → Step B → Gate → Step C → Output

Definition

Prompt Chaining decomposes a task into a sequence of LLM calls, where each step's output becomes the next step's input. Gate functions between steps can validate quality, filter results, or conditionally branch the flow. It's the simplest multi-LLM pattern — no agents, no tools, just focused sequential processing.
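The "continue, retry, or abort" behavior of a gate can be factored into a small helper. This is a generic sketch under stated assumptions: `step` and `quality_check` are hypothetical callables you supply, not part of any library API.

```python
# A reusable gate wrapper: run a step, retry it while the quality
# check fails, and abort once the retry budget is exhausted.

def gated_step(step, quality_check, data, max_retries=2):
    """Run `step(data)` until `quality_check` passes or retries run out."""
    for attempt in range(max_retries + 1):
        result = step(data)
        if quality_check(result):
            return result          # continue: pass output to the next step
    raise RuntimeError(            # abort: defect caught before propagating
        f"step failed quality gate after {max_retries + 1} attempts"
    )

# Usage sketch: gated_step(extract_fn, lambda r: len(r) > 10, raw_text)
```

Because gates are ordinary functions, they can be deterministic checks (length, schema validation, regex) rather than extra LLM calls, which keeps the pipeline cheap and predictable.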

Think of it as an assembly line: raw material enters, each station performs one transformation, and a finished product comes out the other end. Quality inspectors (gate functions) between stations catch defects early before they propagate downstream.