LangSmith: See Everything
Full observability for your agent: automatic tracing, run visualization, latency breakdowns, and debugging failed runs step by step.
Quick Reference
- Set LANGCHAIN_TRACING_V2=true and LANGCHAIN_API_KEY to enable automatic tracing; no code changes required
- Every LLM call, tool invocation, and chain step is captured as a span in a hierarchical trace
- Use the LangSmith UI to inspect the exact prompts sent to the LLM, token counts, and latency per step
- Tag runs with metadata (user_id, environment, version) for filtering and aggregation in dashboards
- LangSmith auto-captures input and output for every node in a LangGraph, including intermediate state
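The metadata-tagging point above can be sketched as a per-run config dict. The `tags` and `metadata` keys are the standard LangChain RunnableConfig fields; the chain being invoked is assumed, and all values are illustrative:

```python
# Per-run config passed to .invoke(); tags and metadata flow through to
# LangSmith, where they can be used to filter runs and build dashboards.
# The specific tag and metadata values here are made up for illustration.
run_config = {
    "tags": ["prod", "checkout-agent"],
    "metadata": {
        "user_id": "u_123",       # who triggered the run
        "environment": "prod",    # deploy environment
        "version": "1.4.2",       # app release, for regression comparisons
    },
}

# With a real LangChain runnable (assumed to exist elsewhere):
# chain.invoke({"question": "..."}, config=run_config)
print(sorted(run_config["metadata"]))
```

Keeping this dict in one place (rather than inlining it at each call site) makes it easy to stamp every run in a request with the same identifying metadata.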
Zero-Config Tracing
Two env vars, full observability
LangSmith auto-traces every LLM call, tool invocation, and chain step with just two environment variables. No code changes, no SDK initialization, no decorators. Every LangChain and LangGraph execution is captured automatically.
Enable tracing — this is all you need
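A minimal sketch of that setup, assuming a shell session; the optional LANGCHAIN_PROJECT variable groups traces under a named project rather than the default:

```shell
# The two variables that enable automatic tracing for every
# LangChain / LangGraph execution in this process.
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="<your-langsmith-api-key>"

# Optional: send traces to a named project instead of "default"
export LANGCHAIN_PROJECT="my-agent-dev"
```

Set these before the process starts (or before the first LangChain import runs); from then on, every execution is traced with no further changes.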
Every node execution becomes a span in a hierarchical trace. LangGraph nodes are auto-instrumented: you see the exact prompt sent to the LLM, the tool arguments, the state at each step, token counts, and latency per span — all without touching your application code.
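To make the span idea concrete, here is the kind of information one span carries for a single node execution. This is an illustrative shape only, with descriptive field names and made-up values, not the LangSmith wire schema:

```python
# Illustrative only: what a single LLM-node span in a hierarchical trace
# records. Field names are descriptive, not the actual LangSmith schema.
llm_span = {
    "name": "call_model",                         # the LangGraph node that ran
    "run_type": "llm",
    "inputs": {"prompt": "Summarize the ticket ..."},   # exact prompt sent
    "outputs": {"completion": "The customer reports ..."},
    "token_usage": {"prompt_tokens": 412, "completion_tokens": 96},
    "latency_ms": 830,
    "children": [],                               # nested spans, e.g. tool calls
}

# Per-span token totals like this are what the UI aggregates per step.
total_tokens = sum(llm_span["token_usage"].values())
print(total_tokens)
```

Nested `children` spans are how the hierarchy forms: a chain span contains its LLM and tool spans, and a LangGraph run contains one span per node execution.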