LangChain/Core Concepts
Intermediate · 8 min

Callbacks & Event Hooks

The callback system, BaseCallbackHandler, on_llm_start/end, on_tool_start/end, tracing integration, custom instrumentation.

Quick Reference

  • BaseCallbackHandler is the base class — override on_llm_start, on_llm_end, on_tool_start, on_tool_end
  • Pass handlers at runtime via config={"callbacks": [handler]} in .invoke(), or set them on the model at construction time so they fire on every call
  • AsyncCallbackHandler for non-blocking event processing in async chains
  • LangSmith tracing is built on the callback system under the hood
  • Use callbacks for logging, metrics, cost tracking, and custom telemetry

The Callback System

Lifecycle hooks

Callbacks are hooks that fire at every stage of a LangChain execution: when an LLM starts, when it finishes, when a tool is called, when an error occurs. LangSmith tracing is built on this system.

Here's the order in which hooks fire during a full agent execution, starting from chain.invoke() all the way to the final response:

  1. chain.invoke() is called and execution starts
  2. on_chain_start fires
  3. on_llm_start fires
  4. The LLM API call runs (waiting for response…)
  5. on_llm_end fires: token usage and the response are available
  6. on_tool_start fires: the model decided to call a tool
  7. The tool runs, e.g. get_weather(city='Tokyo')
  8. on_tool_end fires: the tool result is available
  9. on_llm_start → on_llm_end fire again as the model processes the tool result
  10. on_chain_end fires
  11. The response is returned and chain.invoke() completes

Hooks fire around execution, never inside it.

  • on_llm_start — fires when a model call begins, receives the serialized model config and input prompts/messages
  • on_llm_end — fires when a model call completes, receives the full LLMResult with generations and token usage
  • on_llm_error — fires when a model call throws an exception, receives the error object for logging or recovery
  • on_tool_start — fires when a tool execution begins, receives the serialized tool config and input string
  • on_tool_end — fires when a tool execution completes, receives the tool output for logging or post-processing
  • on_chain_start / on_chain_end — fire at the beginning and end of any chain or Runnable execution, enabling full pipeline tracing