Intermediate · 10 min
MCP: Infinite Tools
The Model Context Protocol: how it works, how to connect MCP servers as LangChain tools, and building your own MCP server for custom integrations.
Quick Reference
- MCP is an open protocol that lets any application expose tools, resources, and prompts to LLM agents via a standard interface
- MCP servers provide tools (functions the agent can call), resources (data the agent can read), and prompts (templates)
- Use langchain-mcp-adapters to convert MCP tools into LangChain BaseTool instances with one line of code
- MCP transports: stdio (local process), SSE (HTTP streaming), and streamable HTTP for remote servers
- Build custom MCP servers with the @modelcontextprotocol/sdk to expose any internal API as agent tools
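The langchain-mcp-adapters conversion mentioned above can be sketched as follows. This is a minimal client-side example, assuming `pip install langchain-mcp-adapters` and a local `math_server.py` MCP server script (a placeholder name for your own server):

```python
# Sketch: loading MCP tools as LangChain tools via langchain-mcp-adapters.
# The server command/path below is a hypothetical placeholder.
import asyncio

async def load_tools():
    # Imported inside the function so the sketch reads standalone;
    # requires the langchain-mcp-adapters package at runtime.
    from langchain_mcp_adapters.client import MultiServerMCPClient

    client = MultiServerMCPClient(
        {
            "math": {
                "command": "python",
                "args": ["math_server.py"],  # hypothetical local MCP server
                "transport": "stdio",        # spawn as a local subprocess
            },
        }
    )
    # One call discovers the server's tools and wraps each one
    # as a LangChain BaseTool, ready to pass to an agent.
    return await client.get_tools()
```

To use the tools, run `tools = asyncio.run(load_tools())` and hand the list to your agent constructor as you would any other LangChain tools.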
What Is MCP
USB-C for AI agents
The Model Context Protocol is an open standard for connecting LLM applications to external data sources and tools. Define the integration once on the server side, and any MCP-compatible client (Claude Desktop, LangChain, Cursor, etc.) can use it immediately. No custom integration code per client.
- Tools — functions the agent can call (search database, send email, create ticket). The agent decides when to call them based on the conversation.
- Resources — data the agent can read (files, database records, API responses). Loaded into context on demand, not executed.
- Prompts — reusable templates exposed by the server ("summarize this document", "review this PR"). Users or agents can invoke them.
- Servers expose these primitives via a standard JSON-RPC protocol. Clients discover and consume them automatically.
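Concretely, those primitives travel as JSON-RPC 2.0 messages. The payloads below are hand-written to show the shape of the exchange, not captured from a real server: a client discovers tools with `tools/list`, then invokes one with `tools/call`:

```python
import json

# Discovery: the client asks the server what tools it offers.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# The server replies with tool names plus a JSON Schema for each
# tool's arguments (illustrative payload, hypothetical tool name).
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "create_ticket",
                "description": "Create a support ticket",
                "inputSchema": {
                    "type": "object",
                    "properties": {"title": {"type": "string"}},
                    "required": ["title"],
                },
            }
        ]
    },
}

# Invocation: calling a discovered tool is another JSON-RPC request.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "create_ticket", "arguments": {"title": "printer down"}},
}

wire = json.dumps(call_request)  # what actually crosses stdio or HTTP
```

Because this request/response shape is fixed by the protocol, a client that can speak it once can consume every MCP server.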