# Context Engineering in Agents
Context engineering is the #1 job of AI engineers. LangChain's agent abstractions are built around three context types (Model, Tool, Life-cycle) and three data sources (State, Store, Runtime Context).
## Quick Reference
- Context engineering = providing the right information and tools in the right format so the LLM can succeed
- Three context types: Model Context (transient), Tool Context (persistent), Life-cycle Context (persistent)
- Three data sources: State (short-term, conversation-scoped), Store (long-term, cross-session), Runtime Context (static config)
- Model Context: system prompt, messages, tools, model selection, response format — all controllable via middleware
- `createMiddleware` + `wrapModelCall` = intercept and modify what goes into every model call
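The wrap-model-call idea in the last bullet can be sketched as a plain function-composition pattern. This is a minimal, self-contained sketch of the *pattern*, not LangChain's actual `createMiddleware`/`wrapModelCall` API: all type and function names here (`ModelRequest`, `Middleware`, `composeMiddleware`, etc.) are hypothetical stand-ins.

```typescript
// Hypothetical request shape the middleware chain rewrites.
type ModelRequest = { systemPrompt: string; messages: string[] };
type ModelHandler = (req: ModelRequest) => string;

// A middleware wraps a handler and returns a new handler that can
// modify the request before (or the result after) delegating.
type Middleware = (next: ModelHandler) => ModelHandler;

// Compose middlewares so the first one listed runs outermost.
function composeMiddleware(middlewares: Middleware[], core: ModelHandler): ModelHandler {
  return middlewares.reduceRight((handler, mw) => mw(handler), core);
}

// Middleware 1: append extra instructions to the system prompt on every call.
const addInstructions: Middleware = (next) => (req) =>
  next({ ...req, systemPrompt: req.systemPrompt + " Always cite sources." });

// Middleware 2: trim conversation history to the last 2 messages.
const trimHistory: Middleware = (next) => (req) =>
  next({ ...req, messages: req.messages.slice(-2) });

// Stub "model" that just reports what it received.
const fakeModel: ModelHandler = (req) =>
  `[${req.systemPrompt}] saw ${req.messages.length} messages`;

const handler = composeMiddleware([addInstructions, trimHistory], fakeModel);
const output = handler({
  systemPrompt: "You are helpful.",
  messages: ["a", "b", "c", "d"],
});
```

Each middleware sees and can rewrite the full pending model request, which is what makes every item in the Model Context list (prompt, history, tools, model choice) controllable from one place.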
## Why Agents Fail
When agents fail, it's almost never because the model is too weak. It's because the model didn't have the right context. Context engineering is the discipline of providing the right information and tools in the right format so the LLM can accomplish a task. This is the #1 job of an AI engineer.
A typical agent loop has two steps: (1) Model call — the LLM receives a prompt and available tools, returns a response or tool call request. (2) Tool execution — the requested tools run and return results. This loop repeats until the LLM decides to stop. Context engineering is about controlling what happens at each step and between steps.
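The two-step loop above can be sketched with stub model and tool implementations. Everything here is illustrative (the stub `callModel`, the `tools` registry, and `runAgent` are invented names, not a real API); the point is only the control flow: call the model, execute any requested tool, feed the result back, repeat until the model stops requesting tools.

```typescript
type ToolCall = { tool: string; arg: string };
type ModelResponse = { toolCall?: ToolCall; answer?: string };

// Stub model: requests the `search` tool once, then gives a final answer.
function callModel(history: string[]): ModelResponse {
  const alreadySearched = history.some((m) => m.startsWith("tool:"));
  return alreadySearched
    ? { answer: "done" }
    : { toolCall: { tool: "search", arg: "weather" } };
}

// Stub tool registry.
const tools: Record<string, (arg: string) => string> = {
  search: (arg) => `results for ${arg}`,
};

// The agent loop: (1) model call; (2) if a tool was requested, run it,
// append its result to history, and go around again. Stop when the
// model returns an answer instead of a tool call.
function runAgent(userMessage: string): string {
  const history = [`user: ${userMessage}`];
  for (;;) {
    const response = callModel(history);
    if (response.answer !== undefined) return response.answer;
    const result = tools[response.toolCall!.tool](response.toolCall!.arg);
    history.push(`tool: ${result}`);
  }
}

const answer = runAgent("what's the weather?");
```

Context engineering hooks into exactly two places in this loop: what `callModel` receives (Model Context) and what the tool bodies can read and write (Tool Context), plus anything you run between the two steps (Life-cycle Context).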
| Context Type | What You Control | Transient or Persistent |
|---|---|---|
| Model Context | What goes into model calls (instructions, history, tools, response format) | Transient |
| Tool Context | What tools can read and write (state, store, runtime config) | Persistent |
| Life-cycle Context | What happens between model and tool calls (summarization, guardrails, logging) | Persistent |
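The table's Tool Context row says tools can read and write state, store, and runtime config. A rough sketch of how those three data sources differ, using simplified in-memory stand-ins rather than LangChain's actual State/Store/Runtime Context objects (the `rememberPreference` tool and all type names are hypothetical):

```typescript
// State: short-term, scoped to one conversation.
type State = { messages: string[]; scratch: Record<string, string> };

// Store: long-term, shared across sessions (here keyed by user id).
type Store = Map<string, Record<string, string>>;

// Runtime context: static configuration fixed when the agent is invoked.
type RuntimeContext = { userId: string; model: string };

// A tool that touches all three sources: it persists a preference to the
// long-term store under the current user, and notes it in conversation
// state so later turns in this session can see it.
function rememberPreference(
  state: State,
  store: Store,
  runtime: RuntimeContext,
  key: string,
  value: string,
): string {
  const userData = store.get(runtime.userId) ?? {};
  userData[key] = value;
  store.set(runtime.userId, userData); // survives across sessions
  state.scratch[key] = value;          // survives only this conversation
  return `saved ${key} for ${runtime.userId}`;
}

const state: State = { messages: [], scratch: {} };
const store: Store = new Map();
const runtime: RuntimeContext = { userId: "u1", model: "example-model" };
const msg = rememberPreference(state, store, runtime, "tone", "formal");
```

The split matters for persistence: when the conversation ends, `state` is gone but `store` remains, and `runtime` was never writable in the first place.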