
LangChain

v1.2

The developer interface for building with LLMs. One API for every model, composable chains, tools, memory, and structured output.

What is LangChain?

The ecosystem, the architecture, and why it exists. One API for every LLM, composable chains via LCEL, and the stability guarantees of v1.0+.

beginner · 9 min
LangChain v1 Migration Guide

Everything that changed in LangChain v1: create_agent replaces create_react_agent, ToolRuntime replaces InjectedState, middleware replaces hooks, and TypedDict is the only state type.

intermediate · 10 min
LangChain vs. LangGraph vs. Deep Agents

Three tools, one ecosystem. LangChain is the framework, LangGraph is the runtime, Deep Agents is the batteries-included harness. Here is when to use each.

beginner · 8 min
Chat Models & Providers

How ChatModel works under the hood. Provider packages, model initialization, streaming, and the invoke/ainvoke interface.

beginner · 8 min
Message Types

HumanMessage, AIMessage, SystemMessage, ToolMessage — the building blocks of every LangChain conversation.

beginner · 7 min
Messages

Messages are the fundamental unit of context in LangChain. HumanMessage, AIMessage, SystemMessage, and ToolMessage carry content, tool calls, and metadata through every model interaction.

beginner · 8 min
Standard Content Blocks

Provider-agnostic access to reasoning traces, citations, images, and text via the new content_blocks property on messages — no more per-provider parsing.

intermediate · 8 min
LCEL: The Pipe Operator

The pipe operator (|) composes Runnables into chains. Lazy evaluation, type safety, and the full Runnable interface.

beginner · 10 min
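The composition idea behind the pipe operator can be sketched in plain Python. This toy Step class is not the real Runnable from langchain_core — it only shows how implementing `__or__` lets `a | b | c` build a chain whose `invoke` feeds each step's output into the next:

```python
# Toy sketch of LCEL-style piping: each step implements __or__ so that
# `a | b` returns a new composed step. Illustration only; the real
# Runnable interface also covers batch, stream, and async variants.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # Chaining: this step's output becomes the next step's input.
        return Step(lambda x: other.invoke(self.invoke(x)))

prompt = Step(lambda topic: f"Tell me a joke about {topic}")
fake_model = Step(lambda text: text.upper())  # stand-in for an LLM call
parser = Step(lambda text: text.strip())

chain = prompt | fake_model | parser
print(chain.invoke("cats"))  # TELL ME A JOKE ABOUT CATS
```

Nothing runs until `invoke` is called — building the chain only composes functions, which is the lazy-evaluation property the lesson describes.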
LCEL: Advanced Runnables

RunnableParallel, RunnableBranch, RunnableLambda, fallbacks, retry logic, and dynamic routing within LCEL chains.

intermediate · 11 min
Callbacks & Event Hooks

The callback system, BaseCallbackHandler, on_llm_start/end, on_tool_start/end, tracing integration, custom instrumentation.

intermediate · 8 min
Model Configuration

Configure temperature, max_tokens, retries, timeouts, and rate limiting when initializing a model. Track token usage across multiple models with UsageMetadataCallbackHandler.

beginner · 8 min
Batch Processing

Process multiple independent inputs in parallel with .batch(). Use batch_as_completed() to stream results as they finish and max_concurrency to control parallelism.

intermediate · 6 min
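The batch semantics above can be mimicked with the standard library. This sketch is a stand-in for the real `.batch()` / `batch_as_completed()` API — `slow_model` is a hypothetical placeholder for an LLM call — but it shows the two result-delivery modes and how a concurrency cap fits in:

```python
# Toy sketch of .batch() vs batch_as_completed(): run independent
# inputs in parallel with a max_concurrency cap. slow_model is a
# stand-in for a model call.
from concurrent.futures import ThreadPoolExecutor, as_completed

def slow_model(prompt: str) -> str:
    return prompt[::-1]  # pretend this is an LLM round trip

def batch(inputs, max_concurrency=4):
    # Returns results in input order, like .batch().
    with ThreadPoolExecutor(max_workers=max_concurrency) as pool:
        return list(pool.map(slow_model, inputs))

def batch_as_completed(inputs, max_concurrency=4):
    # Yields (index, result) as each call finishes, in completion order.
    with ThreadPoolExecutor(max_workers=max_concurrency) as pool:
        futures = {pool.submit(slow_model, p): i for i, p in enumerate(inputs)}
        for fut in as_completed(futures):
            yield futures[fut], fut.result()

print(batch(["abc", "xyz"]))  # ['cba', 'zyx']
```

The index carried by `batch_as_completed` is what lets callers match out-of-order results back to their inputs.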
Configurable Models

Create a single model instance that can be swapped at runtime via config. Use configurable_fields to expose temperature, model name, and provider as runtime parameters — no code changes needed.

intermediate · 7 min
Multimodal Input

Pass images, audio, and files to multimodal models using content blocks. Build mixed text-and-image messages with HumanMessage content arrays — no special model class required.

intermediate · 7 min
Reasoning Models

Reasoning models (o3, Claude with extended thinking) emit internal thought steps before the final answer. Access reasoning via content_blocks, control effort with budget_tokens, and stream thinking tokens in real time.

intermediate · 6 min
Server-Side Tools

Some providers (Anthropic, OpenAI) offer built-in tools like web search that execute server-side — the provider runs them, not your code. Bind them the same way as local tools; results come back as server_tool_result content blocks.

intermediate · 5 min
Local Models

Run models locally with Ollama — no API keys, no network calls, no data leaving your machine. The same init_chat_model() interface works; swap the provider prefix and everything else stays identical.

beginner · 5 min
Tool Execution Loop

When a model returns tool calls, execute them and pass results back as ToolMessages. This three-step cycle — invoke → execute → invoke again — is the foundation of every tool-using agent.

intermediate · 7 min
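The invoke → execute → invoke cycle can be shown with a fake model and plain dicts standing in for ChatModel and ToolMessage objects. Everything here (`fake_model`, the dict shapes) is a hypothetical stand-in, but the loop structure — call the model, execute any requested tools, append results, call again until no tools are requested — is the cycle the lesson describes:

```python
# Minimal sketch of the tool execution loop. Dicts stand in for
# message objects; fake_model stands in for a tool-calling LLM.

TOOLS = {"add": lambda a, b: a + b}

def fake_model(messages):
    # First turn: ask for a tool call. Second turn: use the result.
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "ai", "tool_calls": [
            {"id": "call_1", "name": "add", "args": {"a": 2, "b": 3}}]}
    result = next(m for m in messages if m["role"] == "tool")["content"]
    return {"role": "ai", "content": f"The answer is {result}"}

def run(messages):
    while True:
        ai = fake_model(messages)
        messages.append(ai)
        if not ai.get("tool_calls"):      # done: model produced a final answer
            return ai["content"]
        for call in ai["tool_calls"]:     # execute each requested tool
            result = TOOLS[call["name"]](**call["args"])
            messages.append({"role": "tool",
                             "tool_call_id": call["id"],
                             "content": result})

print(run([{"role": "human", "content": "What is 2 + 3?"}]))
# The answer is 5
```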
Prompts That Work

ChatPromptTemplate, MessagesPlaceholder, few-shot prompting, and variable injection — everything you need to write prompts that produce consistent results.

beginner · 8 min
Structured Output

with_structured_output() turns any model into a typed data extractor. Pydantic schemas, JSON mode, and provider-specific strategies.

intermediate · 9 min
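Conceptually, structured output is "get JSON text from the model, then parse and validate it against a schema." This stdlib-only sketch (a dataclass instead of Pydantic, `fake_model` instead of a real LLM) shows that shape — the real `with_structured_output()` additionally uses provider-native schema enforcement where available:

```python
# Sketch of the structured-output idea: parse model JSON into a typed
# object and validate the fields. Dataclass + json stand in for
# Pydantic; fake_model stands in for an LLM.
import json
from dataclasses import dataclass, fields

@dataclass
class Person:
    name: str
    age: int

def fake_model(prompt: str) -> str:
    return '{"name": "Ada", "age": 36}'  # a real model returns text

def with_structured_output(model, schema):
    def invoke(prompt):
        data = json.loads(model(prompt))
        # Check that every declared field is present with the right type.
        for f in fields(schema):
            if not isinstance(data.get(f.name), f.type):
                raise ValueError(f"bad field: {f.name}")
        return schema(**data)
    return invoke

extractor = with_structured_output(fake_model, Person)
person = extractor("Extract the person from: Ada, 36 years old")
print(person)  # Person(name='Ada', age=36)
```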
Output Parsers

StrOutputParser is the everyday default. JsonOutputParser is the fallback for streaming JSON or models without tool calling. PydanticOutputParser is legacy — use with_structured_output() instead.

intermediate · 7 min
Tools: Give Your LLM Arms

The @tool decorator, BaseTool, tool schemas from docstrings, bind_tools(), and the tool-call message cycle.

intermediate · 10 min
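The "schemas from docstrings" idea can be shown with a toy decorator: read the function's signature and docstring with `inspect` and attach a schema the model could be shown. This is an illustration of the mechanism only, not the real `@tool` from langchain_core.tools:

```python
# Toy @tool decorator: derive a schema from the function's signature
# and docstring. The real decorator produces a full JSON Schema and a
# BaseTool instance; this only shows where the pieces come from.
import inspect

def tool(fn):
    sig = inspect.signature(fn)
    fn.schema = {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {name: p.annotation.__name__
                       for name, p in sig.parameters.items()},
    }
    return fn

@tool
def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    return f"Sunny in {city}"

print(get_weather.schema["parameters"])  # {'city': 'str'}
```

The name, description, and parameter types are exactly what the model sees when deciding whether and how to call the tool — which is why docstrings matter so much for selection accuracy.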
Provider Extras & Advanced Tools

The extras attribute (v1.2), Anthropic programmatic tool calling, OpenAI strict schemas, and advanced tool patterns.

intermediate · 8 min
Dynamic Tools

Not every tool should be available in every situation. Filter tools at runtime based on auth state, permissions, feature flags, or conversation stage using @wrap_model_call middleware.

intermediate · 8 min
Tool Error Handling

By default, tool errors crash the agent. Use @wrap_tool_call to intercept failures, return actionable error messages, and implement retry logic — all without touching the tool itself.

intermediate · 7 min
Tool Design Patterns

How tool names, descriptions, and error surfaces affect model selection accuracy. Designing composable, token-efficient tools that help the model choose correctly and chain reliably.

advanced · 10 min
Parallel Tool Calling

How parallel tool calling works across providers, executing concurrent tool calls with asyncio, handling dependencies and failures, and optimizing the tradeoff between round trips and token cost.

advanced · 10 min
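The concurrent-execution side of this can be sketched with `asyncio.gather`. The tool bodies here are hypothetical stand-ins for I/O-bound calls; `return_exceptions=True` is the detail that keeps one failing call from taking down its siblings:

```python
# Sketch of executing several model-requested tool calls concurrently.
# Tool bodies are stand-ins; asyncio.sleep(0) marks where real network
# I/O would let other calls make progress.
import asyncio

async def get_weather(city: str) -> str:
    await asyncio.sleep(0)
    return f"Sunny in {city}"

async def get_time(city: str) -> str:
    await asyncio.sleep(0)
    return f"12:00 in {city}"

async def run_tool_calls(calls):
    tools = {"get_weather": get_weather, "get_time": get_time}
    coros = [tools[c["name"]](**c["args"]) for c in calls]
    # return_exceptions=True: a failure becomes a result, not a crash.
    return await asyncio.gather(*coros, return_exceptions=True)

calls = [{"name": "get_weather", "args": {"city": "Oslo"}},
         {"name": "get_time", "args": {"city": "Oslo"}}]
results = asyncio.run(run_tool_calls(calls))
print(results)  # ['Sunny in Oslo', '12:00 in Oslo']
```

Results come back in the same order as the requested calls, so each one can be paired with its tool_call_id when building the follow-up messages.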
ToolNode & ToolRuntime

ToolNode is the prebuilt LangGraph node that executes tools in a graph. ToolRuntime gives tools access to conversation state, immutable context, long-term store, and streaming — without those values appearing in the tool's schema.

intermediate · 10 min
Conversation Memory

How LangChain handles conversation memory. RunnableWithMessageHistory wraps any chain to automatically persist and inject session history — no magic classes, just explicit messages.

intermediate · 9 min
Managing Message History

Long conversations exceed context windows. Use @before_model middleware to trim or rebuild history, RemoveMessage to delete specific messages, and SummarizationMiddleware to compress old turns into a summary.

intermediate · 10 min
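A minimal trimming strategy of the kind a before-model hook might apply looks like this — keep the system message, drop everything but the most recent N messages. Plain dicts stand in for message objects; the real middleware works on LangChain message types and can also summarize instead of dropping:

```python
# Toy history trimmer: always keep the system message, then only the
# last `keep_last` conversation messages.

def trim_history(messages, keep_last=4):
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-keep_last:]

history = [{"role": "system", "content": "You are terse."}]
for i in range(10):
    history.append({"role": "human", "content": f"question {i}"})
    history.append({"role": "ai", "content": f"answer {i}"})

trimmed = trim_history(history)
print(len(trimmed))            # 5: system + last 4 messages
print(trimmed[-1]["content"])  # answer 9
```

Trimming by message count is the crudest policy; token-budget trimming and summarization keep more signal for the same context cost.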
Memory Storage Backends

BaseChatMessageHistory is the interface every storage backend implements. Swap from in-memory to Redis or SQL with a one-line change in your factory function.

intermediate · 7 min
Middleware (v1.0+)

LangChain v1.0 introduced middleware — hooks that run before/after model calls. Message trimming, summarization, human-in-the-loop, and custom middleware via AgentMiddleware.

intermediate · 8 min
Prebuilt Middleware Catalog

LangChain and Deep Agents ship 15+ production-ready middleware for reliability, cost control, security, and agentic capabilities. Use them individually or stacked to cover cross-cutting concerns without touching your agent's core logic.

intermediate · 12 min
Custom Middleware

Build custom middleware with node-style hooks (before/after) for state updates and wrap-style hooks (wrap_model_call, wrap_tool_call) for retry, caching, and request mutation. Use request.override() to change the model or tools per call.

advanced · 12 min
create_agent

create_agent builds a graph-based agent runtime on top of LangGraph. Give it a model and tools — it handles the reasoning loop, tool dispatch, and stopping conditions.

intermediate · 9 min
System Prompt

Shape how your agent approaches tasks with a system prompt. Static strings for fixed personas, SystemMessage for provider features like prompt caching, and @dynamic_prompt for runtime-generated prompts.

beginner · 7 min
Custom State

Agents track more than messages. Extend AgentState with custom fields to carry user preferences, task progress, or any data your tools and middleware need across the conversation.

intermediate · 7 min
Dynamic Model Selection

Route to cheaper models for simple turns and powerful models for complex ones. @wrap_model_call intercepts every LLM request and lets you swap the model based on state, context, or cost targets.

intermediate · 7 min
Streaming

Surface real-time agent progress to users. Choose stream_mode='updates' for step-by-step progress, 'messages' for LLM tokens, or 'custom' for arbitrary signals from inside tools. Pass version='v2' for a unified chunk format.

intermediate · 12 min
Agent Structured Output

Make agents return typed Pydantic objects, dataclasses, or dicts instead of free text. Use ProviderStrategy for native schema enforcement or ToolStrategy for any tool-calling model — with automatic validation retries built in.

intermediate · 10 min
Document Loaders

Loading data from PDFs, CSVs, Notion, Slack, Google Drive, and web pages. The DocumentLoader interface, lazy_load(), and aload().

beginner · 9 min
Text Splitters

RecursiveCharacterTextSplitter, chunk_size, chunk_overlap, splitting strategies for different content types.

beginner · 8 min
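What chunk_size and chunk_overlap actually mean is easiest to see in a bare-bones splitter: each chunk is at most chunk_size characters and repeats the tail of the previous chunk. This sketch splits at fixed positions only — the real RecursiveCharacterTextSplitter additionally prefers paragraph, sentence, and word boundaries:

```python
# Bare-bones character splitter illustrating chunk_size / chunk_overlap.

def split_text(text, chunk_size=10, chunk_overlap=3):
    chunks, start = [], 0
    step = chunk_size - chunk_overlap  # advance less than a full chunk
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

chunks = split_text("abcdefghijklmnop", chunk_size=8, chunk_overlap=2)
print(chunks)  # ['abcdefgh', 'ghijklmn', 'mnop']
```

The overlap ('gh', 'mn' above) is what keeps a sentence that straddles a boundary retrievable from at least one chunk.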
Embedding Models

The Embeddings interface, embed_documents(), embed_query(), choosing a model (text-embedding-3-small vs large), dimensionality.

intermediate · 9 min
Vector Stores

Storing and querying embeddings, similarity_search(), Pinecone, Chroma, pgvector, FAISS. When to use which.

intermediate · 10 min
Retrievers

Retriever vs VectorStore, as_retriever(), custom retrievers, contextual compression, multi-query retriever, ensemble retriever.

intermediate · 9 min
Context Engineering in Agents

Context engineering is the #1 job of AI engineers. LangChain's agent abstractions are built around three context types (Model, Tool, Life-cycle) and three data sources (State, Store, Runtime Context).

advanced · 13 min
Guardrails

Validate and filter agent inputs and outputs using middleware hooks. Use before_agent for session-level input checks, after_agent for final output safety, and layer deterministic (regex) + model-based (LLM) guardrails for defense in depth.

intermediate · 9 min
Runtime & Context Injection

The Runtime object provides dependency injection for tools and middleware. Pass context_schema to create_agent, inject per-invocation data (user ID, connections) via context=, and access it anywhere via runtime.context.

intermediate · 7 min
Model Context Protocol (MCP)

MCP is an open protocol for exposing tools, resources, and prompts to LLMs. Use langchain-mcp-adapters to connect any MCP server to a LangChain agent.

advanced · 12 min
ToolRuntime: Unified Tool Context

ToolRuntime replaces InjectedState, InjectedStore, and InjectedConfig with a single typed parameter — giving tools access to state, context, store, stream_writer, and tool_call_id.

intermediate · 9 min