
Pre/Post Model Hooks

Run custom logic before and after model calls for context management, guardrails, and token tracking.

Quick Reference

  • Pre-hooks run before the LLM call — use them to inject system messages, trim context, or apply guardrails
  • Post-hooks run after the LLM response — use them for token counting, output validation, or logging
  • Hooks can be wired in as dedicated nodes that run around the model node, or via LangChain's callback mechanism (callbacks on RunnableConfig)
  • Token tracking: post-hooks receive the full LLM response including usage metadata for cost monitoring
  • Guardrails: pre-hooks can modify or reject inputs before they reach the model, helping mitigate prompt injection
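To make the context-management bullet concrete, here is a minimal sketch of a pre-hook that trims the oldest turns to fit a token budget. The hook signature and the rough 4-characters-per-token estimate are illustrative assumptions, not LangGraph's API:

```python
# Sketch of a context-trimming pre-hook. The plain-dict message shape and
# the crude 4-chars-per-token heuristic are assumptions for illustration.

def estimate_tokens(message: dict) -> int:
    # Rough heuristic: ~4 characters per token.
    return max(1, len(message["content"]) // 4)

def trim_to_budget(messages: list[dict], budget: int = 3000) -> list[dict]:
    """Pre-hook: keep system messages, drop the oldest other turns
    until the estimated total fits within the token budget."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(map(estimate_tokens, system + rest)) > budget:
        rest.pop(0)  # drop the oldest non-system message first
    return system + rest
```

Because the hook returns a plain message list, it composes with any other pre-hook that uses the same shape.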

What Are Pre/Post Hooks?

Hooks = cross-cutting middleware for LLM calls

Hooks are functions that run before and/or after every LLM call in your graph. Use them for context management, guardrails, token tracking, and logging. They are the right place for concerns that apply to all model invocations, not just one specific node.
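The middleware idea can be sketched as a wrapper that threads the input through every pre-hook and the response through every post-hook. The names and signatures here are illustrative, not a real LangGraph API:

```python
# Sketch of the hook-as-middleware pattern: hypothetical names, not LangGraph's API.
from typing import Callable, Iterable

Messages = list[dict]

def call_with_hooks(
    messages: Messages,
    model: Callable[[Messages], dict],
    pre_hooks: Iterable[Callable[[Messages], Messages]] = (),
    post_hooks: Iterable[Callable[[dict], dict]] = (),
) -> dict:
    """Run every pre-hook on the input, call the model, then run every
    post-hook on the response. Each hook may transform its payload, or
    raise an exception to reject the call entirely."""
    for hook in pre_hooks:
        messages = hook(messages)
    response = model(messages)
    for hook in post_hooks:
        response = hook(response)
    return response

# Example: a pre-hook that injects a system message.
def add_system_prompt(messages: Messages) -> Messages:
    return [{"role": "system", "content": "Be concise."}] + messages
```

Because each hook has the same shape as the payload it transforms, hooks chain naturally: the output of one becomes the input of the next.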

In a production agent, you rarely want to call the model raw. You need to trim messages to fit the context window, inject dynamic system prompts, track token usage for cost monitoring, validate outputs before they reach downstream nodes, and log everything for observability. Hooks give you a clean, reusable way to do all of this without polluting your node logic. Pre-hooks fire before the model sees the input; post-hooks fire after the model returns a response. Both receive the full context (messages, config, metadata) and can modify, reject, or annotate the payload accordingly.
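For the token-tracking use case, a post-hook might accumulate the usage metadata that most chat-completion APIs attach to each response. The `usage` field names and the per-1k-token rates below are assumptions for illustration, not any specific provider's schema or pricing:

```python
# Sketch of a token-tracking post-hook. The `usage` field shape and the
# cost rates are illustrative assumptions; check your provider's schema.

class TokenTracker:
    """Post-hook: accumulate token usage across model calls for cost monitoring."""

    def __init__(self, usd_per_1k_prompt: float = 0.0,
                 usd_per_1k_completion: float = 0.0):
        self.prompt_tokens = 0
        self.completion_tokens = 0
        self.usd_per_1k_prompt = usd_per_1k_prompt
        self.usd_per_1k_completion = usd_per_1k_completion

    def __call__(self, response: dict) -> dict:
        # Read usage metadata if present, then pass the response through unchanged.
        usage = response.get("usage", {})
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)
        return response

    @property
    def cost_usd(self) -> float:
        return (self.prompt_tokens / 1000 * self.usd_per_1k_prompt
                + self.completion_tokens / 1000 * self.usd_per_1k_completion)
```

Because the tracker returns the response unchanged, it can sit anywhere in a post-hook chain without affecting downstream hooks.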