# Deep Agents SDK: The Agent Harness
Deep Agents is the third layer of the LangChain stack: an agent harness with built-in planning, a virtual filesystem, subagent spawning, and context engineering. It is the bridge from create_agent to production-grade autonomous agents.
## Quick Reference
- Three-tier model: LangChain (framework) → LangGraph (runtime) → Deep Agents (harness)
- create_deep_agent() adds planning (write_todos), a filesystem (read/write/edit files), and subagent spawning
- Pluggable filesystem backends: in-memory, local disk, LangGraph store, sandboxes (Modal, Daytona, Deno)
- Built-in context engineering: automatic offloading of large tool results (>20K tokens) and summarization at 85% of the context window
- Model-agnostic: 100+ providers via LangChain's init_chat_model
- Deep Agents CLI: a terminal coding agent built on the SDK
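To make the harness concrete, here is a minimal sketch using the deepagents package. The search tool is a stub, and the exact keyword names (e.g. `system_prompt`, called `instructions` in earlier releases) may vary by version; agent construction and invocation are gated on an API key so the sketch stays runnable offline.

```python
# Minimal deep-agent sketch. Assumes `pip install deepagents` and an
# Anthropic API key; parameter names follow the package README and may
# differ across releases.
import os

def internet_search(query: str) -> str:
    """Stub search tool -- swap in a real search API in practice."""
    return f"(stub) top results for: {query}"

if os.environ.get("ANTHROPIC_API_KEY"):  # only build/call when a key is present
    from deepagents import create_deep_agent

    # create_deep_agent layers planning (write_todos), a virtual filesystem,
    # and subagent spawning on top of the plain tool-calling loop.
    agent = create_deep_agent(
        tools=[internet_search],
        system_prompt="You are a researcher. Plan with todos; save notes to files.",
    )
    result = agent.invoke(
        {"messages": [{"role": "user", "content": "Summarize recent agent papers"}]}
    )
    print(result["messages"][-1].content)
```

The agent is invoked like any LangGraph graph, so the usual streaming and persistence options apply unchanged.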
## Where Deep Agents Fits
LangChain (abstractions) → LangGraph (runtime) → Deep Agents (batteries-included harness)
| Layer | Package | What It Provides | Entry Point |
|---|---|---|---|
| Framework | langchain | Model abstraction, tools, structured output, middleware | create_agent() |
| Runtime | langgraph | Durable execution, persistence, streaming, human-in-the-loop (HITL) | StateGraph / @entrypoint |
| Harness | deepagents | Planning, filesystem, subagents, context management | create_deep_agent() |
create_agent gives you a tool-calling agent. create_deep_agent gives you an agent that can plan multi-step tasks, manage files as working memory, spawn subagents for context isolation, and automatically manage its context window. It's the difference between a chatbot and an autonomous worker.
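The contrast can be sketched side by side. This is an illustrative sketch, not the libraries' verbatim examples: signatures follow current docs and may vary by release, the tool is a toy, the model string is an assumption, and construction is gated on an API key.

```python
# Sketch: the same toy tool handed to both entry points. Signatures are
# assumptions based on current docs and may differ across versions.
import os

def get_weather(city: str) -> str:
    """Toy tool shared by both agents."""
    return f"It is sunny in {city}."

if os.environ.get("ANTHROPIC_API_KEY"):
    from langchain.agents import create_agent   # framework layer
    from deepagents import create_deep_agent    # harness layer

    # Plain tool-calling loop: model + your tools, nothing else.
    chat_agent = create_agent(
        model="anthropic:claude-sonnet-4-5",
        tools=[get_weather],
        system_prompt="Answer weather questions.",
    )

    # Same tools, plus built-in planning, filesystem, subagents, and
    # automatic context management from the harness.
    deep_agent = create_deep_agent(
        tools=[get_weather],
        system_prompt="Research weather patterns; keep notes in files.",
    )
```

Both return graphs that are invoked the same way; the difference is the extra built-in tools and context machinery the harness injects around the loop.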
Use create_agent for conversational agents, Q&A bots, and simple tool-calling tasks. Use create_deep_agent when the agent needs to plan, research, write multi-file outputs, or run autonomously for more than a few turns.
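Subagent spawning, the key feature for long-running work, is configured declaratively. A sketch under assumptions: the dict-based spec (name / description / prompt / tools) follows the deepagents README, but field names may vary by version, and the search tool is a stub.

```python
# Sketch: delegating research to a subagent for context isolation.
# Spec field names follow the deepagents README and may vary by version.
import os

def internet_search(query: str) -> str:
    """Stub search tool for illustration."""
    return f"(stub) results for: {query}"

# A subagent runs with its own context window; only its final answer flows
# back to the parent, keeping the parent's context small.
research_subagent = {
    "name": "researcher",
    "description": "Runs focused web research and reports back a summary.",
    "prompt": "You are a focused researcher. Search, then summarize concisely.",
    "tools": [internet_search],
}

if os.environ.get("ANTHROPIC_API_KEY"):
    from deepagents import create_deep_agent

    agent = create_deep_agent(
        tools=[internet_search],
        system_prompt="Delegate focused research to the researcher subagent.",
        subagents=[research_subagent],
    )
```

The parent decides when to delegate; each delegation costs one tool call in the parent's context regardless of how many turns the subagent takes internally.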