Request for Guidance: How should a LangGraph graph use ainvoke with the Interrupts mechanism, and how can Interrupts be integrated with other frameworks (e.g., FastAPI) when multiple interrupts exist?

  1. When using the Interrupts mechanism in LangGraph, how should the graph use ainvoke?
  2. When multiple interrupts exist, how can the Interrupts mechanism be integrated with other frameworks (e.g., FastAPI)? How can the backend obtain the result of the user's click?

Below is the error I encountered after converting the official example to asynchronous calls. I don't understand why it raises RuntimeError: Called get_config outside of a runnable context, even though I clearly passed the config.

import asyncio
from typing import Literal, Optional, TypedDict

from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.types import Command, interrupt


class ApprovalState(TypedDict):
    action_details: str
    status: Optional[Literal["pending", "approved", "rejected"]]


def approval_node(state: ApprovalState) -> Command[Literal["proceed", "cancel"]]:
    # Expose details so the caller can render them in a UI
    decision = interrupt({
        "question": "Approve this action?",
        "details": state["action_details"],
    })

    # Route to the appropriate node after resume
    return Command(goto="proceed" if decision else "cancel")


def proceed_node(state: ApprovalState):
    return {"status": "approved"}


def cancel_node(state: ApprovalState):
    return {"status": "rejected"}


async def main():
    builder = StateGraph(ApprovalState)
    builder.add_node("approval", approval_node)
    builder.add_node("proceed", proceed_node)
    builder.add_node("cancel", cancel_node)
    builder.add_edge(START, "approval")
    builder.add_edge("proceed", END)
    builder.add_edge("cancel", END)

    # Use a more durable checkpointer in production
    checkpointer = InMemorySaver()
    graph = builder.compile(checkpointer=checkpointer)

    config = {"configurable": {"thread_id": "approval-123"}}

    initial = await graph.ainvoke(  # changed to ainvoke
        {"action_details": "Transfer $500", "status": "pending"},
        config=config,
    )
    print(initial["__interrupt__"])  # -> [Interrupt(value={'question': ..., 'details': ...})]


    resumed = await graph.ainvoke(Command(resume=True), config=config)  # changed to ainvoke
    print(resumed["status"])  # -> "approved"


if __name__ == '__main__':
    asyncio.run(main())

File "D:\anaconda3\envs\first-pro\lib\site-packages\langgraph\types.py", line 500, in interrupt
    conf = get_config()["configurable"]

File "D:\anaconda3\envs\first-pro\lib\site-packages\langgraph\config.py", line 29, in get_config
    raise RuntimeError("Called get_config outside of a runnable context")

RuntimeError: Called get_config outside of a runnable context

hi @LLLzxx

what’s your Python version?

hi @pawel-twardziak

Python 3.10.19

langchain 1.1.3
langchain-core 1.2.6
langchain-mcp-adapters 0.2.1
langchain-openai 1.1.6
langgraph 1.0.5
langgraph-checkpoint 3.0.1
langgraph-checkpoint-postgres 3.0.2
langgraph-prebuilt 1.0.5
langgraph-sdk 0.3.1
langsmith 0.4.59

hi @LLLzxx

Python should be at least 3.11 (I recommend 3.12). Upgrade it and try again, and tell me if the issue is still there. Background: LangGraph propagates the runnable config through contextvars, and on Python 3.10 and below asyncio cannot be handed a specific context when creating tasks (asyncio.Task only gained its context parameter in 3.11), so the config can get lost inside async code and get_config() fails.

@pawel-twardziak Thanks, the problem is solved! This is really helpful!

However, I'm still confused about how to integrate the Interrupts mechanism with other frameworks (e.g., FastAPI). How can the backend obtain the result of the user's click or the user's input?

Where can I get them? (Maybe through a new interface?)

    interrupt_result = await agent.ainvoke(initial_state, config=config)
    resumed1 = await agent.ainvoke(Command(resume=True), config=config)
    resumed2 = await agent.ainvoke(Command(resume={
        'success_dict': {},
        'failure_dict': {},
    }), config=config)

    return resumed2

hi @LLLzxx

do you want the example with FastAPI? What’s your stack for the frontend?

hi @pawel-twardziak

I would greatly appreciate any thoughts you could offer on this.

You don't need to go to the trouble of providing code demonstrations, though I would be extremely grateful if you included them.

I'm wondering whether a WebSocket is a feasible approach for obtaining user input after the graph is interrupted. (If there is a more optimal solution, I would also be very grateful if you could share it.)

Alternatively, does LangGraph offer a standard solution for handling and retrieving user input after an interruption? (But I didn't find anything about it…)

The frontend has not yet been developed, and we will probably be using Vue.js for it.


hi @LLLzxx

LangGraph Interrupts are intentionally a protocol:

  • The graph stops and returns an interrupt payload to the caller (exposed in __interrupt__), so your app/UI can render a prompt.
  • Later, your app calls the graph again with Command(resume=<user_input>), and that <user_input> becomes the return value of interrupt() inside the node.

Source: LangGraph "Interrupts" docs (interrupt payload in __interrupt__, resume via Command(resume=...)): Interrupts - Docs by LangChain
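The two phases of that protocol can be modeled in plain Python, with no LangGraph involved, just to make the contract concrete (a toy sketch; both function names are illustrative):

```python
# Toy model of the interrupt contract (no LangGraph involved):
# phase 1 surfaces a payload for the UI, phase 2 feeds the answer back.

def run_until_interrupt(state: dict) -> dict:
    # The node "interrupts": return a payload the caller can render.
    return {"__interrupt__": [{
        "question": "Approve this action?",
        "details": state["action_details"],
    }]}

def resume_with(decision: bool) -> dict:
    # The resume value plays the role of interrupt()'s return value.
    return {"status": "approved" if decision else "rejected"}

paused = run_until_interrupt({"action_details": "Transfer $500"})
final = resume_with(True)
```

The point is only that there are two separate calls, and the second one carries the human's answer.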

So the backend “obtains the click” exactly the same way any backend obtains a click: the frontend sends it to the backend (HTTP POST / WebSocket message), and the backend uses it as the resume value.

There is no extra “LangGraph interface” that magically pulls user input from the browser.

Regarding this part:

interrupt_result = await agent.ainvoke(initial_state, config=config)
resumed1 = await agent.ainvoke(Command(resume=True), config=config)

In a real app, you normally do not resume immediately in the same request / same function, because you don’t have the human’s answer yet.

Instead you do:

  1. Run graph until it interrupts → return thread_id + __interrupt__ to the client.
  2. Wait for user action in UI.
  3. Client calls a /resume endpoint with that value.
  4. Server resumes the graph by calling ainvoke(Command(resume=...)) with the same thread_id.

The docs emphasize that thread_id is the “persistent cursor” that lets you resume the same execution.

What exactly do you “return to the frontend” when interrupted?

Return:

  • thread_id (your conversation/workflow ID)
  • the interrupt payload(s) from result["__interrupt__"]
    • each element is an Interrupt containing .value (your payload) and .id (an identifier)

The Interrupt type and its id field are in the reference.
Source: Types | LangChain Reference
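Since Interrupt objects are not plain JSON, it helps to extract only .id and .value before returning them to a frontend. A minimal sketch, using a stand-in dataclass in place of langgraph.types.Interrupt (the real class carries the same two attributes):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Interrupt:
    # Stand-in for langgraph.types.Interrupt (which also has .value and .id)
    value: Any
    id: str

def serialize_interrupts(interrupts: list[Interrupt]) -> list[dict]:
    # JSON-friendly shape for the HTTP response to the frontend
    return [{"id": i.id, "value": i.value} for i in interrupts]

payload = serialize_interrupts([Interrupt(value={"question": "Approve?"}, id="int-1")])
```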

HTTP (FastAPI) is usually the simplest “standard solution”

Pattern A: two endpoints (works great for most UIs):

  • POST /start → runs until done or interrupted
  • POST /resume → resumes with user input

Pseudo-code sketch:
from typing import Any

from fastapi import FastAPI
from pydantic import BaseModel
from langgraph.types import Command

# `graph` is the compiled LangGraph from the earlier example
app = FastAPI()

class StartReq(BaseModel):
    thread_id: str
    action_details: str

class ResumeReq(BaseModel):
    thread_id: str
    value: Any
    interrupt_id: str | None = None

def _serialize_interrupts(interrupts):
    # Interrupt objects are not plain JSON; expose only .id and .value
    return [{"id": i.id, "value": i.value} for i in interrupts]

@app.post("/start")
async def start(req: StartReq):
    config = {"configurable": {"thread_id": req.thread_id}}
    result = await graph.ainvoke(
        {"action_details": req.action_details, "status": "pending"},
        config=config,
    )
    if "__interrupt__" in result:
        return {"status": "interrupted", "thread_id": req.thread_id,
                "interrupts": _serialize_interrupts(result["__interrupt__"])}
    return {"status": "done", "thread_id": req.thread_id, "result": result}

@app.post("/resume")
async def resume(req: ResumeReq):
    config = {"configurable": {"thread_id": req.thread_id}}

    # Resume the "next" interrupt, or target a specific interrupt by ID
    # (useful with parallel interrupts)
    command = (
        Command(resume={req.interrupt_id: req.value})
        if req.interrupt_id
        else Command(resume=req.value)
    )
    result = await graph.ainvoke(command, config=config)
    if "__interrupt__" in result:
        return {"status": "interrupted", "thread_id": req.thread_id,
                "interrupts": _serialize_interrupts(result["__interrupt__"])}
    return {"status": "done", "thread_id": req.thread_id, "result": result}

This matches the contract described in the Interrupts docs: interrupt returns a payload and the resume value is provided using Command(resume=...).

Is WebSocket feasible? Yes - use it if you want “push” UX

WebSocket (or Server-Sent Events) becomes useful when you want:

  • streaming tokens/messages

  • immediate server “push” when the graph hits an interrupt

But it’s not required. You can still do good UX with plain HTTP:

  • client calls /start
  • gets status=interrupted
  • user clicks
  • client calls /resume
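The client-side logic for that flow is just a branch on the response status. A minimal sketch, assuming the {"status": ..., "interrupts"/"result": ...} response shape from the /start and /resume endpoints above:

```python
def next_client_action(response: dict) -> tuple:
    # Decide what the frontend should do with a /start or /resume response.
    # Assumed response shape: {"status": ..., "interrupts": ...} or
    # {"status": "done", "result": ...} as returned by the endpoints above.
    if response["status"] == "interrupted":
        return ("render_prompt", response["interrupts"])
    return ("show_result", response.get("result"))

action, data = next_client_action({"status": "interrupted", "interrupts": [{"id": "i1"}]})
```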

“Multiple interrupts” and Command(resume={...}) (important correction)

Your snippet:

Command(resume={
  'success_dict': {},
  'failure_dict': {}
})

This dict is not a special LangGraph schema.

Command.resume supports two different meanings (per reference docs):

  1. A single resume value (any JSON-serializable object): it resumes “the next” pending interrupt.
  2. A mapping of interrupt IDs to resume values: { "<interrupt_id>": <value>, ... }, which lets you resume multiple interrupts (or resume out-of-order).
Source: Command.resume reference: Types | LangChain Reference
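That distinction can be sketched as a pure function (illustrative only; the real dispatch happens inside the LangGraph runtime):

```python
def resolve_resume(pending_ids: list[str], resume) -> dict:
    # Mimic the two meanings of Command(resume=...):
    # - a dict keyed entirely by known interrupt IDs targets those interrupts;
    # - anything else is a single value for the next pending interrupt.
    if isinstance(resume, dict) and resume and all(k in pending_ids for k in resume):
        return dict(resume)
    return {pending_ids[0]: resume}

# A dict with non-ID keys is just "the resume value" for the next interrupt:
mapped = resolve_resume(["int-1"], {"success_dict": {}, "failure_dict": {}})
```

Here mapped is {"int-1": {"success_dict": {}, "failure_dict": {}}}: the whole dict goes to the one pending interrupt, which is exactly the pitfall described above.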

So if you pass a dict with keys like "success_dict", LangGraph treats that whole dict as a single resume value (case 1), because those keys are not interrupt IDs. The robust approach is:

  • Read interrupts = result["__interrupt__"]
  • if you need to target a specific one, use interrupts[i].id as the key:
interrupts = result["__interrupt__"]
interrupt_id = interrupts[0].id
result2 = await graph.ainvoke(Command(resume={interrupt_id: {"approved": True}}), config=config)

Final checklist for a production FastAPI integration

  • Use a durable checkpointer (SQLite/Postgres/etc.) so interrupts survive process restarts.
    Source: interrupts require checkpointing. Interrupts - Docs by LangChain
  • Always include a stable thread_id in config={"configurable": {"thread_id": ...}}.
  • Treat interrupts as “return a prompt now; resume later”, not “block the request forever”.
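For the first checklist item, here is a wiring sketch using the async Postgres checkpointer (langgraph-checkpoint-postgres is already in your pip list; the DSN and the build_graph helper are placeholders, and the saver's connection must stay open for as long as you serve requests):

```python
# Configuration sketch: swap InMemorySaver for a durable Postgres checkpointer.
# Requires the langgraph-checkpoint-postgres package and a running Postgres;
# the DSN below is a placeholder.
from langgraph.checkpoint.postgres.aio import AsyncPostgresSaver

DB_URI = "postgresql://user:pass@localhost:5432/langgraph"

async def build_graph(builder):
    async with AsyncPostgresSaver.from_conn_string(DB_URI) as checkpointer:
        await checkpointer.setup()  # create checkpoint tables on first run
        graph = builder.compile(checkpointer=checkpointer)
        ...  # serve requests while the saver's connection is open
```

With a durable checkpointer, a thread interrupted before a restart can still be resumed afterwards with the same thread_id.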

@pawel-twardziak

Thanks! Your answer has been extremely helpful and has perfectly resolved my doubts.

I appreciate your time and assistance once again.


You’re welcome @LLLzxx

Huge favor: if you find this post solved your problem, please mark it as Solved so others can benefit from it 🙂 And feel free to drop a new post any time you need.