Auto-resuming challenges in LangGraph

hi @rasoul111000

I prepared something in response to your post.

Auto-resuming in LangGraph: handling multiple user interrupts

TL;DR

  • Use dynamic interrupt(...) with a descriptive payload per node. The returned interrupt objects tell you exactly what paused and where; show interrupt.value to the user and use its metadata (id, and on Platform also ns) to route UI.
  • When resuming, pass the user’s answer via Command(resume=...). Inside the node, interrupt(...) returns that value, so you can update the correct slice of state (e.g., address, budget).
  • For multiple simultaneous interrupts, pass a dict mapping {interrupt_id: value} to Command(resume=...) to resume all in one shot.
  • There’s no “auto-resume without calling Command” (human input is required), but you can implement a small client loop that runs, detects interrupts, collects user input, and resumes, making it feel automatic.

References:

  • Enable human intervention (dynamic interrupts) - LangGraph OSS guide
  • Resume with Command and resume many at once - see “Resume using the Command primitive” and “Resume multiple interrupts” in the guide above
  • Persistence and threads (why interrupts can pause indefinitely) - Persistence
  • Platform API (server-side loop with SDK) - Human-in-the-loop using server API

1) Show the correct interrupt message depending on where the pause occurred

  • In each node that needs human input, call interrupt(payload) with a descriptive payload that your UI can render, e.g. {"kind": "approval", "question": "Do you approve it?"} or {"kind": "address", "question": "What's your address?"}.
  • When the graph pauses, you’ll get interrupt objects. In OSS you can read them via result["__interrupt__"] or graph.get_state(config).interrupts. Each interrupt has an id and a value (your payload). On Platform you also get a ns (namespace path) you can use to label where it came from.

Minimal Python (OSS) example to surface the right prompt:

from langgraph.types import Interrupt

interrupts: tuple[Interrupt, ...] = graph.get_state(config).interrupts
for intr in interrupts:
    # intr.value is what you passed to interrupt(...)
    # Show this to the user verbatim in your UI
    render_prompt_to_user(intr.value)  # e.g. {'kind': 'address', 'question': "What's your address?"}

Tip: include a kind or field key in the payload so your UI can render different widgets (approval toggle vs. text box) automatically.
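
For instance, a minimal dispatcher along these lines could map the kind key to a widget (the widget names here are hypothetical placeholders for whatever your UI framework provides):

```python
def widget_for(payload: dict) -> str:
    """Pick a UI widget based on the interrupt payload's 'kind' key.

    The returned names are placeholders - substitute your own components.
    """
    kind = payload.get("kind")
    if kind == "approval":
        return "approval_toggle"   # yes/no control
    if kind in ("address", "budget"):
        return "text_box"          # free-text input
    return "generic_prompt"        # fallback for unknown payloads
```

Because the payload is whatever you passed to interrupt(...), this routing stays entirely under your control.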

2) Map the user’s reply to the right part of the state

  • After you collect the user’s input, call Command(resume=...).
  • Inside the node, interrupt(...) returns the resume value. Use it to update the right key in your state and return that update.

Sequential nodes example (approval → address → budget):

from typing import Optional, TypedDict
import uuid
from langgraph.graph import StateGraph, START, END
from langgraph.types import interrupt, Command
from langgraph.checkpoint.memory import InMemorySaver

class State(TypedDict):
    approval: Optional[bool]
    address: Optional[str]
    budget: Optional[str]

def ask_approval(state: State):
    is_approved = interrupt({"kind": "approval", "question": "Do you approve it?"})
    return {"approval": bool(is_approved)}

def ask_address(state: State):
    value = interrupt({"kind": "address", "question": "What's your address?"})
    return {"address": value}

def ask_budget(state: State):
    value = interrupt({"kind": "budget", "question": "What's your budget?"})
    return {"budget": value}

builder = StateGraph(State)
builder.add_node("ask_approval", ask_approval)
builder.add_node("ask_address", ask_address)
builder.add_node("ask_budget", ask_budget)
builder.set_entry_point("ask_approval")
builder.add_edge("ask_approval", "ask_address")
builder.add_edge("ask_address", "ask_budget")
builder.add_edge("ask_budget", END)

graph = builder.compile(checkpointer=InMemorySaver())
config = {"configurable": {"thread_id": str(uuid.uuid4())}}

# Run until first interrupt
result = graph.invoke({"approval": None, "address": None, "budget": None}, config=config)
print(result["__interrupt__"])  # shows approval question

# Resume with approval
graph.invoke(Command(resume=True), config=config)
print(graph.get_state(config).values)  # state['approval'] is True

# Next interrupt (address)
ints = graph.get_state(config).interrupts
resume_map = {ints[0].id: "221B Baker Street"}
graph.invoke(Command(resume=resume_map), config=config)

# Next interrupt (budget)
graph.invoke(Command(resume="2000"), config=config)
print(graph.get_state(config).values)  # address and budget set

Parallel questions (address and budget at the same time):

# Build edges so both nodes start from START to run concurrently
builder = StateGraph(State)
builder.add_node("ask_address", ask_address)
builder.add_node("ask_budget", ask_budget)
builder.add_edge(START, "ask_address")
builder.add_edge(START, "ask_budget")
graph = builder.compile(checkpointer=InMemorySaver())
config = {"configurable": {"thread_id": str(uuid.uuid4())}}  # fresh thread for this graph

result = graph.invoke({"address": None, "budget": None}, config=config)
interrupts = graph.get_state(config).interrupts
resume_map = {}
for intr in interrupts:
    if intr.value.get("kind") == "address":
        resume_map[intr.id] = "742 Evergreen Terrace"
    elif intr.value.get("kind") == "budget":
        resume_map[intr.id] = "1500"

graph.invoke(Command(resume=resume_map), config=config)

Notes:

  • When multiple interrupts come from the same node, matching is index-based within that node, so keep the order of interrupt(...) calls stable across runs - don't reorder or conditionally skip them (see docs under “Using multiple interrupts in a single node”).
  • Side effects: place them after the interrupt(...) (or in a separate node), because the node re-runs from the top on resume and anything before the interrupt executes again.
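
To see why side-effect placement matters, here is a small stand-alone simulation (no LangGraph involved) of the re-run-from-the-top behavior, with an exception standing in for the real pause:

```python
class FakeInterrupt(Exception):
    """Stands in for LangGraph pausing execution at interrupt(...)."""

resume_value = None

def interrupt(payload):
    # First pass: no resume value yet, so "pause" by raising.
    if resume_value is None:
        raise FakeInterrupt(payload)
    return resume_value

side_effects = []

def node(state):
    side_effects.append("before")   # runs on EVERY pass, including the re-run
    answer = interrupt({"kind": "address"})
    side_effects.append("after")    # runs once, on the resumed pass only
    return {"address": answer}

# First pass: pauses at interrupt(...)
try:
    node({})
except FakeInterrupt:
    pass

# Resume: the node is re-run from the top with the answer now available
resume_value = "221B Baker Street"
result = node({})

print(side_effects)  # ['before', 'before', 'after'] - "before" ran twice
```

Anything before the interrupt (an API call, a DB write) would execute twice here, which is why the docs recommend putting side effects after the interrupt or in a separate node.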

3) Make resuming feel automatic (without manually calling resume each time)

You still call Command(resume=...) under the hood, but you can wrap it in a tiny loop so your app handles it automatically:

Python (OSS) client loop:

from langgraph.types import Command

def run_until_done(graph, config, inputs, get_user_responses):
    result = graph.invoke(inputs, config=config)
    while result.get("__interrupt__"):
        interrupts = graph.get_state(config).interrupts
        # Show prompts to user and collect answers. Return either a single value
        # or a dict {interrupt_id: value} when there are multiple.
        resume_value = get_user_responses(interrupts)
        result = graph.invoke(Command(resume=resume_value), config=config)
    return graph.get_state(config).values

Platform SDK (server) has the same pattern using runs.wait(...) and then passing command={"resume": ...} to continue - see Human-in-the-loop using server API.

Key takeaways

  • Identify the pause: read interrupts and render interrupt.value.
  • Route correctly: include a kind/field in your interrupt payload; use id (and ns on Platform) to bind replies.
  • Map to state: inside each node, use the returned value from interrupt(...) to update the right key.
  • Batch resumes: if multiple interrupts are pending, pass {id: value} to Command(resume=...).
  • Automate UX: implement a small “resume loop” so the app continuously runs → prompts → resumes until done.