Overview · Intermediate · 11 min

OpenAI Function Calling & Assistants API

Using OpenAI's native function calling and Assistants API directly — no framework required. Function definitions, parallel tool calls, structured outputs, and when to skip the LangChain wrapper.

Quick Reference

  • Define tools as JSON Schema objects in the `tools` array — OpenAI validates the schema at call time
  • Use `tool_choice: 'auto'` for most cases; use `tool_choice: { type: 'function', function: { name: '...' } }` to force a specific tool
  • Parallel tool calls: OpenAI can return multiple tool_calls in a single response — handle all of them before sending results back
  • Structured outputs with `response_format: { type: 'json_schema' }` (with `strict: true` in the schema definition) guarantee valid JSON matching your schema
  • Assistants API manages threads and state server-side — useful for stateful conversations without your own persistence layer
  • Use native API when you need full control; use LangChain when you need provider-switching or complex chains
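
As a concrete reference for the first bullet, here is a sketch of what one entry in the `tools` array looks like. The `get_weather` name and its parameters are illustrative; the outer `type`/`function`/`name`/`description`/`parameters` shape is what the Chat Completions API expects:

```typescript
// One illustrative tool definition for the `tools` array.
// The function name and parameters here are made up for the example;
// `parameters` is a standard JSON Schema object.
const tools = [
  {
    type: "function" as const,
    function: {
      name: "get_weather",
      description: "Get the current weather for a city",
      parameters: {
        type: "object",
        properties: {
          city: { type: "string", description: "City name, e.g. 'Paris'" },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["city"],
      },
    },
  },
];
```

Because `parameters` is plain JSON Schema, you can generate it from a validation library (e.g. Zod via a schema-to-JSON-Schema converter) rather than writing it by hand.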

Function Calling Fundamentals

OpenAI function calling lets the model decide when to call tools and what arguments to pass. Instead of parsing free-text responses, you get structured JSON that maps directly to your function signatures. This is the foundation of every OpenAI-based agent.

Complete function calling example with tool execution loop
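
A minimal sketch of such a loop, assuming the Chat Completions message shapes from the official `openai` Node SDK. The client is injected behind a small interface so the loop itself is testable; with the real SDK, `complete` would wrap `client.chat.completions.create({ model, messages, tools })`. The `get_weather` implementation is illustrative:

```typescript
// Shapes matching the Chat Completions API (simplified for the sketch).
type ToolCall = {
  id: string;
  function: { name: string; arguments: string };
};

type AssistantMsg = {
  role: "assistant";
  content: string | null;
  tool_calls?: ToolCall[];
};

// Injected so the loop can be exercised without a network call; a real
// implementation would wrap client.chat.completions.create(...).
type ChatClient = {
  complete(messages: any[]): Promise<AssistantMsg>;
};

// Local implementations keyed by tool name. `get_weather` is illustrative.
const toolImpls: Record<string, (args: any) => Promise<string>> = {
  get_weather: async ({ city }) => JSON.stringify({ city, tempC: 21 }),
};

async function runAgent(client: ChatClient, userInput: string): Promise<string> {
  const messages: any[] = [{ role: "user", content: userInput }];
  for (;;) {
    const msg = await client.complete(messages);
    messages.push(msg);
    // No tool calls means the model produced its final answer.
    if (!msg.tool_calls?.length) return msg.content ?? "";
    // Execute every requested tool and append one `tool` message per call,
    // echoing back the tool_call_id so the model can match results to calls.
    for (const call of msg.tool_calls) {
      const impl = toolImpls[call.function.name];
      if (!impl) throw new Error(`unknown tool: ${call.function.name}`);
      const result = await impl(JSON.parse(call.function.arguments));
      messages.push({ role: "tool", tool_call_id: call.id, content: result });
    }
  }
}
```

The key invariant: every `tool_call` in the assistant message must get a matching `tool` message with the same `tool_call_id` before the next completion request, or the API rejects the conversation.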
Parallel tool calls save latency

When the model needs multiple pieces of data, it can return several entries in `tool_calls` in a single response. Execute all of them and send back one `tool` message per call, each carrying the matching `tool_call_id`, before requesting the next completion. This is why the loop iterates over `msg.tool_calls` rather than assuming there is just one.
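
Since the calls in one response are independent, you can also execute them concurrently rather than one at a time. A sketch, where `executeTool` is a hypothetical dispatch helper (not part of the OpenAI SDK):

```typescript
type ToolCall = {
  id: string;
  function: { name: string; arguments: string };
};

// Hypothetical helper: dispatches a single tool call. A real agent would
// look up an implementation by call.function.name; here we just echo.
async function executeTool(call: ToolCall): Promise<string> {
  const args = JSON.parse(call.function.arguments);
  return JSON.stringify({ tool: call.function.name, args });
}

// Run all tool calls from one assistant message concurrently, then build
// the `tool` result messages in the order the model requested them, each
// tagged with its tool_call_id so the model can match results to calls.
async function executeAll(toolCalls: ToolCall[]) {
  const results = await Promise.all(toolCalls.map(executeTool));
  return toolCalls.map((call, i) => ({
    role: "tool" as const,
    tool_call_id: call.id,
    content: results[i],
  }));
}
```

With slow tools (multiple API lookups, database queries), `Promise.all` turns the total latency of a round into roughly the slowest call instead of the sum of all of them.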