Double Texting & Concurrency
Handling concurrent user messages: reject, rollback, interrupt, or enqueue strategies. Built-in LangGraph Platform support.
Quick Reference
- Double texting occurs when a user sends a new message while the agent is still processing the previous one
- Four strategies: reject (return error), rollback (cancel current and restart), interrupt (inject new message), enqueue (queue and process sequentially)
- LangGraph Platform provides built-in multitask_strategy configuration: 'reject', 'rollback', 'interrupt', or 'enqueue'
- Rollback cancels the in-flight run, reverts state to the last checkpoint, and starts a new run with the latest message
- Interrupt injects the new message into the running graph's state without canceling -- best for conversational agents that can adapt mid-stream
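The strategy is chosen per run. A minimal sketch, assuming the `langgraph_sdk` async client against a deployed graph; the `run_kwargs` helper and server URL are illustrative, not part of the SDK:

```python
# Sketch: selecting a double-texting strategy per run on LangGraph Platform.
# The client call at the bottom is illustrative; exact signatures may vary
# by SDK version.

VALID_STRATEGIES = {"reject", "rollback", "interrupt", "enqueue"}

def run_kwargs(thread_id: str, assistant_id: str, text: str, strategy: str) -> dict:
    """Build keyword arguments for a run, validating the strategy name."""
    if strategy not in VALID_STRATEGIES:
        raise ValueError(f"unknown multitask_strategy: {strategy!r}")
    return {
        "thread_id": thread_id,
        "assistant_id": assistant_id,
        "input": {"messages": [{"role": "user", "content": text}]},
        "multitask_strategy": strategy,
    }

# Usage against a running LangGraph Platform deployment (URL is hypothetical):
#
#   from langgraph_sdk import get_client
#   client = get_client(url="http://localhost:8123")
#   await client.runs.create(**run_kwargs(thread["thread_id"], "agent",
#                                         "Book a flight to NYC", "enqueue"))
```

Validating the strategy name client-side fails fast instead of surfacing a server-side error mid-conversation.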
The Problem
Users do not wait for your agent to finish
Double texting occurs when a user sends a new message while the agent is still processing the previous one. Without a concurrency strategy, both runs race on the same state, causing corruption, duplicate responses, or lost messages.
In traditional APIs, concurrent requests are stateless -- each request is independent. Agent conversations are stateful: the agent reads from and writes to a shared thread. Two concurrent runs reading the same checkpoint and writing different updates create a last-write-wins race condition that silently corrupts conversation state.
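The race is easy to reproduce. A minimal sketch (the checkpoint is modeled as a plain dict; real checkpointers persist richer state, but the read-modify-write hazard is the same):

```python
# Last-write-wins race: two "runs" each load the same checkpoint, append
# their own update, and save. Whichever saves last discards the other's work.

checkpoint = {"messages": ["user: Book a flight to NYC"]}

def run(snapshot: dict, agent_reply: str) -> dict:
    """Simulate a run: read a checkpoint snapshot, append a reply, return new state."""
    state = {"messages": list(snapshot["messages"])}
    state["messages"].append(agent_reply)
    return state

# Both runs read the SAME checkpoint before either writes back.
run_a = run(checkpoint, "agent: Searching flights to NYC...")
run_b = run(checkpoint, "agent: Switching destination to Boston.")

checkpoint = run_a  # run A finishes and writes
checkpoint = run_b  # run B finishes and overwrites -- A's update is gone

print(checkpoint["messages"])
# Run A's reply is missing: last write wins, the first write is silently lost.
```

Neither run errored, which is what makes this failure mode dangerous: the corruption only shows up later as a missing or contradictory message.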
- User sends 'Book a flight to NYC' then immediately sends 'Actually, make that Boston'
- Agent A is mid-execution searching flights to NYC when Agent B starts on the Boston request
- Both read the same checkpoint; both write back -- one update is lost
- Result: user gets a NYC booking confirmation despite correcting to Boston
- This is not an edge case -- analytics show 15-25% of chat sessions include double-texts
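The simplest fix for the scenario above is serialization, which is what the enqueue strategy does. A minimal sketch using an `asyncio.Queue` and a single worker; the names and state shape are illustrative, not the platform's internals:

```python
import asyncio

# Enqueue strategy sketch: incoming messages are queued and one worker
# processes them sequentially, so each run sees the previous run's writes.

async def worker(queue: asyncio.Queue, state: dict) -> None:
    while True:
        message = await queue.get()
        if message is None:        # sentinel: shut down the worker
            queue.task_done()
            break
        state["messages"].append(f"user: {message}")
        await asyncio.sleep(0)     # stand-in for model/tool latency
        state["messages"].append(f"agent: handled {message!r}")
        queue.task_done()

async def main() -> dict:
    state = {"messages": []}
    queue: asyncio.Queue = asyncio.Queue()
    task = asyncio.create_task(worker(queue, state))
    # The double-text: the correction arrives before the first run finishes.
    await queue.put("Book a flight to NYC")
    await queue.put("Actually, make that Boston")
    await queue.put(None)
    await task
    return state

state = asyncio.run(main())
print(state["messages"])
# Both messages are processed in arrival order; neither update is lost.
```

The trade-off: enqueue never drops work, but the NYC search still runs to completion before the Boston correction is seen -- which is why interrupt or rollback can be better fits when the newest message supersedes the old one.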