Auto-resuming challenges in LangGraph

One thing I’d like to clarify is how to implement automatic resuming in practice, especially when there are multiple different interrupts in the same graph. For example:

  • In one node I may ask the user “Do you approve it?”

  • In another place, I may ask “What’s your address?” and update state["address"]

  • Later, maybe I ask “What’s your budget?” and update state["budget"]

I understand the pattern for tool execution (adding ToolMessage and resuming), but in this case all of these points involve user intervention.

So my questions are:

  1. How can I show the correct interrupt message to the user, depending on where the pause occurred?

  2. When the user responds, how do I map that answer to the right part of the state (e.g. address, budget, etc.)?

  3. Finally, is there a way to handle this in a more automatic way — so that resuming doesn’t require me to manually call Command(resume) each time, but instead flows naturally depending on which interrupt was triggered?

hi @rasoul111000

I prepared something in response to your post.

Auto-resuming in LangGraph: handling multiple user interrupts

TL;DR

  • Use dynamic interrupt(...) with a descriptive payload per node. The returned interrupt objects tell you exactly what paused and where; show interrupt.value to the user and use its metadata (id, and on Platform also ns) to route UI.
  • When resuming, pass the user’s answer via Command(resume=...). Inside the node, interrupt(...) returns that value, so you can update the correct slice of state (e.g., address, budget).
  • For multiple simultaneous interrupts, pass a dict mapping {interrupt_id: value} to Command(resume=...) to resume all in one shot.
  • There’s no “auto-resume without calling Command” (human input is required), but you can implement a small client loop that runs, detects interrupts, collects user input, and resumes — making it feel automatic.

References:

  • Enable human intervention (dynamic interrupts) - LangGraph OSS guide
  • Resume with Command and resume many at once - see “Resume using the Command primitive” and “Resume multiple interrupts” in the guide above
  • Persistence and threads (why interrupts can pause indefinitely) - Persistence
  • Platform API (server-side loop with SDK) - Human-in-the-loop using server API

1) Show the correct interrupt message depending on where the pause occurred

  • In each node that needs human input, call interrupt(payload) with a descriptive payload that your UI can render, e.g. {kind: 'approval', question: 'Do you approve it?'} or {kind: 'address', question: "What's your address?"}.
  • When the graph pauses, you’ll get interrupt objects. In OSS you can read them via result["__interrupt__"] or graph.get_state(config).interrupts. Each interrupt has an id and a value (your payload). On Platform you also get a ns (namespace path) you can use to label where it came from.

Minimal Python (OSS) example to surface the right prompt:

from langgraph.types import Interrupt

interrupts: tuple[Interrupt, ...] = graph.get_state(config).interrupts
for intr in interrupts:
    # intr.value is what you passed to interrupt(...)
    # Show this to the user verbatim in your UI
    render_prompt_to_user(intr.value)  # e.g. {'kind': 'address', 'question': "What's your address?"}

Tip: include a kind or field key in the payload so your UI can render different widgets (approval toggle vs. text box) automatically.
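Building on that tip, routing payloads to UI widgets can be a one-line lookup. A minimal sketch — the widget names ("toggle", "text_input", "number_input") are hypothetical placeholders for whatever your frontend actually renders:

```python
# Hypothetical mapping from the payload's "kind" to a UI widget name.
WIDGETS = {
    "approval": "toggle",       # yes/no switch
    "address": "text_input",    # free-form text box
    "budget": "number_input",   # numeric field
}

def widget_for(payload: dict) -> str:
    # Unknown kinds fall back to a plain text box.
    return WIDGETS.get(payload.get("kind"), "text_input")
```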

2) Map the user’s reply to the right part of the state

  • After you collect the user’s input, call Command(resume=...).
  • Inside the node, interrupt(...) returns the resume value. Use it to update the right key in your state and return that update.

Sequential nodes example (approval → address → budget):

from typing import Optional, TypedDict
import uuid
from langgraph.graph import StateGraph, START, END
from langgraph.types import interrupt, Command
from langgraph.checkpoint.memory import InMemorySaver

class State(TypedDict):
    approval: Optional[bool]
    address: Optional[str]
    budget: Optional[str]

def ask_approval(state: State):
    is_approved = interrupt({"kind": "approval", "question": "Do you approve it?"})
    return {"approval": bool(is_approved)}

def ask_address(state: State):
    value = interrupt({"kind": "address", "question": "What's your address?"})
    return {"address": value}

def ask_budget(state: State):
    value = interrupt({"kind": "budget", "question": "What's your budget?"})
    return {"budget": value}

builder = StateGraph(State)
builder.add_node("ask_approval", ask_approval)
builder.add_node("ask_address", ask_address)
builder.add_node("ask_budget", ask_budget)
builder.set_entry_point("ask_approval")
builder.add_edge("ask_approval", "ask_address")
builder.add_edge("ask_address", "ask_budget")
builder.add_edge("ask_budget", END)

graph = builder.compile(checkpointer=InMemorySaver())
config = {"configurable": {"thread_id": str(uuid.uuid4())}}

# Run until first interrupt
result = graph.invoke({"approval": None, "address": None, "budget": None}, config=config)
print(result["__interrupt__"])  # shows approval question

# Resume with approval
graph.invoke(Command(resume=True), config=config)
print(graph.get_state(config).values)  # state['approval'] is True

# Next interrupt (address)
ints = graph.get_state(config).interrupts
resume_map = {ints[0].id: "221B Baker Street"}
graph.invoke(Command(resume=resume_map), config=config)

# Next interrupt (budget)
graph.invoke(Command(resume="2000"), config=config)
print(graph.get_state(config).values)  # address and budget set
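Since ask_approval/ask_address/ask_budget are near-identical, a small factory can generate such nodes. This is a sketch; the injectable ask parameter is my addition so the factory can be exercised without a running graph — in a real graph you would pass langgraph.types.interrupt:

```python
from typing import Any, Callable

def make_ask_node(key: str, question: str,
                  ask: Callable[[dict], Any]) -> Callable[[dict], dict]:
    # Returns a node that pauses with a {kind, question} payload and writes
    # the resume value under `key`. Pass ask=langgraph.types.interrupt in a
    # real graph; it is injected here only to keep the factory testable.
    def node(state: dict) -> dict:
        value = ask({"kind": key, "question": question})
        return {key: value}
    node.__name__ = f"ask_{key}"
    return node
```

Then builder.add_node("ask_address", make_ask_node("address", "What's your address?", interrupt)) replaces the hand-written node.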

Parallel questions (address and budget at the same time):

# Build edges so both nodes start from START to run concurrently
builder = StateGraph(State)
builder.add_node("ask_address", ask_address)
builder.add_node("ask_budget", ask_budget)
builder.add_edge(START, "ask_address")
builder.add_edge(START, "ask_budget")
builder.add_edge("ask_address", END)
builder.add_edge("ask_budget", END)
graph = builder.compile(checkpointer=InMemorySaver())
config = {"configurable": {"thread_id": str(uuid.uuid4())}}  # use a fresh thread

result = graph.invoke({"address": None, "budget": None}, config=config)
interrupts = graph.get_state(config).interrupts
resume_map = {}
for intr in interrupts:
    if intr.value.get("kind") == "address":
        resume_map[intr.id] = "742 Evergreen Terrace"
    elif intr.value.get("kind") == "budget":
        resume_map[intr.id] = "1500"

graph.invoke(Command(resume=resume_map), config=config)

Notes:

  • When multiple interrupts come from the same node, matching is index-based within that node; keep the order stable (see docs under “Using multiple interrupts in a single node”).
  • Side-effects: place them after the interrupt(...) (or in another node) since the node is re-run from the top on resume.
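When several interrupts are pending, you can bind collected answers to interrupt ids by matching on the payload's kind instead of hard-coding order. A sketch — the PendingInterrupt dataclass is a minimal stand-in for langgraph.types.Interrupt, exposing only the id and value fields used here:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class PendingInterrupt:
    # Stand-in for langgraph.types.Interrupt; only id/value are used here.
    id: str
    value: dict

def build_resume_map(interrupts, answers_by_kind: dict) -> dict:
    # Produce the {interrupt_id: value} dict expected by Command(resume=...).
    resume_map = {}
    for intr in interrupts:
        kind = intr.value.get("kind")
        if kind in answers_by_kind:
            resume_map[intr.id] = answers_by_kind[kind]
    return resume_map
```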

3) Make resuming feel automatic (without manually calling resume each time)

You still call Command(resume=...) under the hood, but you can wrap it in a tiny loop so your app handles it automatically:

Python (OSS) client loop:

from langgraph.types import Command

def run_until_done(graph, config, inputs, get_user_responses):
    result = graph.invoke(inputs, config=config)
    while result.get("__interrupt__"):
        interrupts = graph.get_state(config).interrupts
        # Show prompts to user and collect answers. Return either a single value
        # or a dict {interrupt_id: value} when there are multiple.
        resume_value = get_user_responses(interrupts)
        result = graph.invoke(Command(resume=resume_value), config=config)
    return graph.get_state(config).values

Platform SDK (server) has the same pattern using runs.wait(...) and then passing command={"resume": ...} to continue - see Human-in-the-loop using server API.

Key takeaways

  • Identify the pause: read interrupts and render interrupt.value.
  • Route correctly: include a kind/field in your interrupt payload; use id (and ns on Platform) to bind replies.
  • Map to state: inside each node, use the returned value from interrupt(...) to update the right key.
  • Batch resumes: if multiple interrupts are pending, pass {id: value} to Command(resume=...).
  • Automate UX: implement a small “resume loop” so the app continuously runs → prompts → resumes until done.

@pawel-twardziak Thanks so much for the detailed explanation and examples — this clears up my confusion.

Two quick follow-ups:

  1. How would you recommend implementing the same interrupt/resume pattern inside tools instead of nodes?

  2. In the examples, could you clarify a bit how functions like render_prompt_to_user and get_user_response might look in practice?

    If you have any additional hints or best practices for working with Command and interrupt, I’d love to hear them.

Hi @rasoul111000

In while result.get("__interrupt__"): I should have used the INTERRUPT constant instead of the raw string "__interrupt__" - it is always better to use the API directly.

Regarding your question:

1) Using interrupts inside tools

  • Pattern is the same as in nodes: call interrupt(...) early in the tool (before side-effects). On resume, interrupt(...) returns the provided value so you can accept/edit/reject and then perform the side-effect.
  • If many tools require review, wrap them with a decorator/wrapper that calls interrupt(...) before execution so your UI gets a consistent payload shape. See “Review tool calls” and “Use the functional API” in the LangChain docs.
from langgraph.types import interrupt

def book_hotel(hotel_name: str) -> str:
    review = interrupt({
        "kind": "tool-approval",
        "tool": "book_hotel",
        "args": {"hotel_name": hotel_name},
        "question": f"Approve booking for {hotel_name}?",
    })

    if isinstance(review, dict) and review.get("type") == "reject":
        return "Booking skipped."
    if isinstance(review, dict) and review.get("type") == "edit":
        hotel_name = review["args"].get("hotel_name", hotel_name)

    # Safe to execute the side-effect after resume
    # external_client.book(hotel_name)
    return f"Booked {hotel_name}"
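The decorator/wrapper idea for tools can be sketched like this. The ask parameter is my addition for testability — in a real graph you would pass langgraph.types.interrupt; the accept/edit/reject review shapes mirror the book_hotel example above:

```python
from typing import Any, Callable

def with_approval(tool_fn: Callable[..., Any],
                  ask: Callable[[dict], Any]) -> Callable[..., Any]:
    # Wrap a tool so every call pauses for review before the side-effect runs.
    # `ask` stands in for langgraph.types.interrupt (injected for testability).
    def wrapped(**kwargs: Any) -> Any:
        review = ask({
            "kind": "tool-approval",
            "tool": tool_fn.__name__,
            "args": kwargs,
            "question": f"Approve call to {tool_fn.__name__}?",
        })
        if isinstance(review, dict) and review.get("type") == "reject":
            return f"{tool_fn.__name__} skipped."
        if isinstance(review, dict) and review.get("type") == "edit":
            kwargs.update(review.get("args", {}))  # apply the user's edits
        return tool_fn(**kwargs)  # side-effect only after review
    return wrapped
```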

2) Practical CLI helpers (render_prompt_to_user, get_user_responses)

def render_prompt_to_user(interrupt):
    payload = interrupt.value
    message = payload.get("question", str(payload))
    print(f"\n[{interrupt.id}] {message}")

def get_user_responses(interrupts):
    if len(interrupts) == 1:
        render_prompt_to_user(interrupts[0])
        return input("Your answer: ")

    resume_map = {}
    for intr in interrupts:
        render_prompt_to_user(intr)
        resume_map[intr.id] = input("Your answer: ")
    return resume_map
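For automated tests (or replaying a recorded session), a non-interactive counterpart of get_user_responses can answer from a script instead of stdin — a sketch, assuming each interrupt payload carries the kind key used throughout this thread:

```python
def make_scripted_responder(script: dict):
    # Returns a drop-in replacement for get_user_responses that answers
    # from `script` ({kind: answer}) instead of prompting on stdin.
    def respond(interrupts):
        if len(interrupts) == 1:
            return script[interrupts[0].value["kind"]]
        return {intr.id: script[intr.value["kind"]] for intr in interrupts}
    return respond
```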

Some good practices (quick recap - some of this repeats the above :slight_smile: ):

  • Put side-effects after interrupt(...) (or in a downstream node) since the node/tool re-runs from the top on resume:
def collect_email(state):
    email = interrupt({"kind": "email", "question": "Your email?"})
    return {"email": email}

def send_welcome_email(state):
    # Runs only after resume; safe place for side-effects
    # mailer.send(to=state["email"], template="welcome")
    return {}
  • Include a kind (and/or field) in interrupt payloads so your UI can render appropriate inputs automatically:
def ask_shipping(state):
    return {"address": interrupt({
        "kind": "address",
        "question": "Shipping address",
        "fields": ["street", "city", "zip"],
    })}
  • For concurrent questions/tools, resume with {interrupt_id: value}:

# Two nodes start from START
builder.add_edge(START, "ask_address")
builder.add_edge(START, "ask_budget")

# Client collects both answers, then:
resume_map = {addr_intr.id: "742 Evergreen Terrace", budg_intr.id: "1500"}
graph.invoke(Command(resume=resume_map), config=config)

  • If you use multiple interrupts inside a single node, keep their order stable (matching is index-based within that node):
def ask_profile(state):
    # Keep order fixed between runs
    name = interrupt({"kind": "name", "question": "Your name?"})
    age = interrupt({"kind": "age", "question": "Your age?"})
    return {"name": name, "age": age}

@pawel-twardziak Thanks a lot for your clear explanation