LangChain/Core Concepts
Beginner · 8 min

Chat Models & Providers

How ChatModel works under the hood: provider packages, model initialization, streaming, and the invoke/ainvoke interface.

Quick Reference

  • ChatModel is the base class — every provider extends it
  • One provider package per vendor: langchain-anthropic, langchain-openai, etc.
  • All models share: .invoke(), .stream(), .batch(), .ainvoke(), .astream()
  • Model profiles (v1.1) expose capabilities at runtime via .profile
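The "one base class, many provider packages" idea can be sketched without any real provider. The snippet below is a minimal stdlib-only sketch, not LangChain's actual implementation: `ChatModelSketch`, `EchoProvider`, and `_generate` are hypothetical names. Each "vendor" only supplies `_generate`; `.invoke()`, `.stream()`, and `.batch()` come from the shared base.

```python
from abc import ABC, abstractmethod
from typing import Iterator

class ChatModelSketch(ABC):
    """Toy stand-in for the ChatModel base class (hypothetical name)."""

    @abstractmethod
    def _generate(self, prompt: str) -> str:
        """Provider-specific completion logic — the only per-vendor part."""

    def invoke(self, prompt: str) -> str:
        # Single synchronous call.
        return self._generate(prompt)

    def stream(self, prompt: str) -> Iterator[str]:
        # Yield the reply one token (here: word) at a time.
        for token in self._generate(prompt).split():
            yield token + " "

    def batch(self, prompts: list[str]) -> list[str]:
        # Naive sequential batch; real implementations run inputs in parallel.
        return [self.invoke(p) for p in prompts]

class EchoProvider(ChatModelSketch):
    """A fake 'provider package': only _generate differs between vendors."""

    def _generate(self, prompt: str) -> str:
        return f"echo: {prompt}"

model = EchoProvider()
print(model.invoke("hi"))               # → echo: hi
print(model.batch(["a", "b"]))          # → ['echo: a', 'echo: b']
print("".join(model.stream("hi")))
```

Because the interface lives in the base class, swapping `langchain-anthropic` for `langchain-openai` changes the constructor you call, not the calling code around it.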

The ChatModel Interface



ChatModel is the base class for all LLM interactions. Every provider implements the same interface: .invoke() for single calls, .stream() for token-by-token output, .batch() for parallel processing, plus the async counterparts .ainvoke() and .astream().

  • .invoke(input) — single synchronous call, returns an AIMessage with content and metadata
  • .stream(input) — yields AIMessageChunks as they arrive, token by token, for real-time UX
  • .batch([inputs]) — processes multiple inputs in parallel, returns a list of AIMessages
  • .ainvoke(input) — async version of .invoke(), non-blocking for async applications
  • .astream(input) — async streaming, yields chunks without blocking the event loop
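The async half of the interface can be illustrated with another stdlib-only sketch (again hypothetical names, not LangChain internals): `ainvoke` awaits a single result, while `astream` yields chunks that the caller accumulates into the full reply, the way a streaming UI would, without blocking the event loop between tokens.

```python
import asyncio
from typing import AsyncIterator

class AsyncEchoModel:
    """Toy model sketching the async methods (hypothetical name)."""

    def _generate(self, prompt: str) -> str:
        return f"echo: {prompt}"

    async def ainvoke(self, prompt: str) -> str:
        # Non-blocking single call; a real model would await an HTTP request.
        await asyncio.sleep(0)
        return self._generate(prompt)

    async def astream(self, prompt: str) -> AsyncIterator[str]:
        # Yield chunks, ceding control to the event loop between tokens.
        for token in self._generate(prompt).split():
            await asyncio.sleep(0)
            yield token + " "

async def main() -> tuple[str, str]:
    model = AsyncEchoModel()
    reply = await model.ainvoke("hi")
    # Accumulate streamed chunks into the complete reply.
    chunks = [c async for c in model.astream("hi")]
    return reply, "".join(chunks).strip()

print(asyncio.run(main()))              # → ('echo: hi', 'echo: hi')
```

Note the invariant this sketch preserves from the real interface: concatenating every streamed chunk reproduces the same content a single `.ainvoke()` call returns.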