Integrations
LangSmith for observability, OpenTelemetry for tracing, MCP for an open-ended tool ecosystem, voice and multimodal agents, and real-time streaming patterns.
Full observability for your agent: automatic tracing, run visualization, latency breakdowns, and debugging failed runs step by step.
Trigger automated actions on production traces — route failing runs to annotation queues, auto-build evaluation datasets, and fire webhooks when anomalies appear.
Systematic evaluation with versioned datasets — create from traces, CSV, or manual entry, run experiments, compare results across prompt versions, and build annotation workflows.
Cross-service trace correlation — propagate trace context from your API gateway through microservices to LLM calls, and visualize the full request lifecycle in one trace.
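Cross-service correlation hinges on forwarding the same trace id across every hop. A minimal stdlib sketch of the W3C `traceparent` header mechanics (the helper names here are illustrative, not from any library): each service keeps the inherited trace id and mints a fresh span id for its own work.

```python
import secrets

def make_traceparent():
    """Mint a root W3C traceparent header: version-traceid-spanid-flags."""
    trace_id = secrets.token_hex(16)  # 32 hex chars, shared by the whole request
    span_id = secrets.token_hex(8)    # 16 hex chars, unique per hop
    return f"00-{trace_id}-{span_id}-01"

def child_traceparent(parent):
    """Keep the parent's trace id, mint a new span id for the next service."""
    _version, trace_id, _span_id, flags = parent.split("-")
    return f"00-{trace_id}-{secrets.token_hex(8)}-{flags}"

# The gateway mints the root; each downstream service forwards a child header.
root = make_traceparent()
hop = child_traceparent(root)
```

Because every hop shares the trace id, the backend can stitch gateway, microservice, and LLM spans into one trace.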
Access your LangSmith workspace via Model Context Protocol — query traces, manage prompts, run experiments, and monitor billing from any MCP-compatible client.
Vendor-agnostic observability with OpenTelemetry: instrumenting LangGraph agents with spans, tracing across multi-step workflows, and exporting to any backend.
Building production dashboards for AI agents: structured logging, custom metrics (latency, cost, completion rate), and alerting on anomalies.
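The custom metrics mentioned above reduce to a few rolling counters. A minimal sketch, with an assumed `AgentMetrics` class (not from any monitoring library), showing how latency, cost, and completion rate can be aggregated per run:

```python
class AgentMetrics:
    """Rolling counters for agent latency, cost, and completion rate."""

    def __init__(self):
        self.runs = 0
        self.completed = 0
        self.total_latency_ms = 0.0
        self.total_cost_usd = 0.0

    def record(self, latency_ms, cost_usd, ok):
        """Record one agent run; ok=True means it completed successfully."""
        self.runs += 1
        self.completed += int(ok)
        self.total_latency_ms += latency_ms
        self.total_cost_usd += cost_usd

    @property
    def avg_latency_ms(self):
        return self.total_latency_ms / self.runs if self.runs else 0.0

    @property
    def completion_rate(self):
        return self.completed / self.runs if self.runs else 0.0

m = AgentMetrics()
m.record(820.0, 0.004, ok=True)
m.record(1460.0, 0.009, ok=False)
```

Feed these aggregates into your dashboard or alerting rules (e.g. alert when `completion_rate` drops below a threshold).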
Retrieval-Augmented Generation from scratch: embedding documents, vector stores, retrieval strategies, and integrating retrieval into LangGraph agents.
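The retrieval half of RAG is just nearest-neighbour search over embeddings. A from-scratch sketch (pure Python, hypothetical `TinyVectorStore` class, real embeddings would come from an embedding model rather than hand-written vectors):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class TinyVectorStore:
    """In-memory store of (embedding, text) pairs ranked by cosine similarity."""

    def __init__(self):
        self.rows = []

    def add(self, embedding, text):
        self.rows.append((embedding, text))

    def search(self, query_embedding, k=2):
        ranked = sorted(self.rows,
                        key=lambda row: cosine(row[0], query_embedding),
                        reverse=True)
        return [text for _, text in ranked[:k]]

store = TinyVectorStore()
store.add([1.0, 0.0], "doc about tracing")
store.add([0.0, 1.0], "doc about voice agents")
hits = store.search([0.9, 0.1], k=1)
```

In an agent, the retrieved texts are stuffed into the prompt (or returned from a retrieval tool) before the model generates its answer.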
The Model Context Protocol: how it works, how to connect MCP servers as LangChain tools, and building your own MCP server for custom integrations.
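Under the hood, MCP messages are JSON-RPC 2.0. A sketch of serializing a `tools/call` request (the `get_weather` tool and its arguments are made up for illustration):

```python
import json

def mcp_tool_call(request_id, tool_name, arguments):
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

msg = json.loads(mcp_tool_call(1, "get_weather", {"city": "Oslo"}))
```

Client libraries hide this framing, but seeing the wire shape makes it clear why any MCP server's tools can be surfaced to any agent framework.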
Interceptors wrap MCP tool calls in an onion pattern — inject context, add auth headers, implement retries, gate access, and return Commands for state updates. The middleware layer for MCP.
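The onion pattern itself is framework-agnostic: each interceptor receives the next layer and returns a wrapped callable. A generic sketch (not the actual interceptor API; the interceptor and tool names are illustrative):

```python
from functools import reduce

def logging_interceptor(next_call):
    """Outer layer: log entry and exit around the wrapped call."""
    def wrapper(tool_name, args):
        print(f"-> {tool_name}")
        result = next_call(tool_name, args)
        print(f"<- {tool_name}")
        return result
    return wrapper

def auth_interceptor(next_call):
    """Inner layer: inject an auth header before the call goes out."""
    def wrapper(tool_name, args):
        args = {**args, "headers": {"Authorization": "Bearer <token>"}}
        return next_call(tool_name, args)
    return wrapper

def base_call(tool_name, args):
    """Innermost layer: the real tool invocation (stubbed here)."""
    return {"tool": tool_name, "args": args}

def compose(interceptors, call):
    """First interceptor in the list becomes the outermost layer."""
    return reduce(lambda acc, icpt: icpt(acc), reversed(interceptors), call)

call = compose([logging_interceptor, auth_interceptor], base_call)
result = call("search", {"query": "traces"})
```

Retries, access gating, and state-updating returns slot in the same way: each is just another layer around `next_call`.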
Beyond tools: MCP servers can expose resources (data files, DB records, API responses) and prompts (reusable templates) that agents can load on demand.
Secure MCP connections with OAuth 2.1, API keys, and custom auth — including delegated authentication where agents access services on behalf of users.
Build, deploy, and version production MCP servers with FastMCP — Streamable HTTP transport, health checks, rate limiting, and deployment patterns.
Building voice-based agents. Speech-to-text + LangGraph + text-to-speech pipeline, streaming audio, and interruption handling.
Building agents that process images, PDFs, and files: vision model integration, document parsing tools, image generation as a tool, and multimodal state management.
Deep dive into production voice agent pipelines. STT/TTS provider tradeoffs, latency budgets, interruption handling, telephony integration, and building a complete voice pipeline with Deepgram + ElevenLabs.
Server-Sent Events vs WebSocket for AI agent communication. When to use each, reconnection strategies, scaling patterns, and production code for both SSE streaming and WebSocket interactive agents.
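On the SSE side, each event is just a text frame of `id:`/`event:`/`data:` lines terminated by a blank line. A minimal sketch of the framing (the `sse_event` helper is illustrative, not from a library):

```python
import json

def sse_event(data, event=None, event_id=None):
    """Serialize one Server-Sent Events frame, terminated by a blank line."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")
    if event is not None:
        lines.append(f"event: {event}")
    lines.append(f"data: {json.dumps(data)}")
    return "\n".join(lines) + "\n\n"

# One streamed token from an agent run:
frame = sse_event({"token": "Hel"}, event="token", event_id="1")
```

The `id:` field is what makes reconnection cheap: the browser's `EventSource` resends the last seen id in the `Last-Event-ID` header, so the server can resume the stream.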
Building agents that process video, images, and audio in real time. Frame extraction, vision models, speaker diarization, pipeline orchestration, and cost optimization for multimodal AI systems.