[LangSmith Studio Issue] Resuming from an interrupt inside a subgraph doesn't properly resume; it restarts instead

We have a simple customer support agent composed of a supervisor agent and subagents (technical, general, billing…)

We’re testing out using LangSmith with a test workflow.

  1. customer has a billing issue
  2. supervisor graph routes it to the billing subgraph
  3. billing subgraph asks for the invoice ID
  4. billing subgraph handles the case
  5. billing subgraph passes back control to the supervisor.

The issue we’re seeing is specific to LangSmith Studio.

Resuming from an interrupt in the root graph works fine, but resuming from an interrupt inside the billing subgraph (i.e. when it asks for the order ID) does not work. It seems like LangSmith Studio might be dispatching the resume differently when inside a subgraph, which prevents the run from moving forward.

The resume behavior works fine locally when we test with a plain script.

Let me show what happens in LangSmith Studio first.

LangSmith Studio screenshot

(I wanted to share more screenshots showing the earlier steps before we hit the subgraph interrupt point, but I cannot upload more than one image :frowning: )

This is when we get stuck.

After the supervisor graph calls the billing subgraph, it asks for the order ID, and here is where things break: we keep looping on the same step even though I type in the order ID and hit “resume”.

Langgraph dev server logs

Here is what I see in the terminal running langgraph dev, around the billing_decision node (where we loop over and over):

(venv) hyunkyojung@Hyuns-MacBook-Air customer_support % langgraph dev
INFO:langgraph_api.cli:

        Welcome to

╦  ┌─┐┌┐┌┌─┐╔═╗┬─┐┌─┐┌─┐┬ ┬
║  ├─┤││││ ┬║ ╦├┬┘├─┤├─┘├─┤
╩═╝┴ ┴┘└┘└─┘╚═╝┴└─┴ ┴┴  ┴ ┴

- 🚀 API: http://127.0.0.1:2024
- 🎨 Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
- 📚 API Docs: http://127.0.0.1:2024/docs

This in-memory server is designed for development and testing.
For production use, please use LangSmith Deployment.


2026-03-13T14:03:04.997364Z [info     ] Starting dev persistence flush loop [langgraph_runtime_inmem._persistence] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:03:05.010259Z [info     ] Using langgraph_runtime_inmem  [langgraph_runtime] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:03:05.016428Z [info     ] Using auth of type=noop        [langgraph_api.auth.middleware] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:03:05.017965Z [info     ] Starting In-Memory runtime with langgraph-api=0.7.70 and in-memory runtime=0.26.0 [langgraph_runtime_inmem.lifespan] api_variant=local_dev langgraph_api_version=0.7.70 langgraph_runtime_inmem_version=0.26.0 thread_name=asyncio_0 version=0.7.70
2026-03-13T14:03:05.032133Z [info     ] No license key or control plane API key set, skipping metadata loop [langgraph_api.metadata] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:03:05.042098Z [info     ] Importing graph with id customer_support [langgraph_api.timing.timer] api_variant=local_dev elapsed_seconds=0.009984208038076758 graph_id=customer_support langgraph_api_version=0.7.70 module=None name=_graph_from_spec path=./main.py thread_name=asyncio_0
2026-03-13T14:03:05.043652Z [info     ] Application started up in 0.150s [langgraph_api.timing.timer] api_variant=local_dev elapsed=0.14988683303818107 langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:03:05.054103Z [info     ] Starting cron scheduler        [langgraph_api.cron_scheduler] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:03:05.054242Z [info     ] Starting queue with shared loop [langgraph_runtime_inmem.queue] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=asyncio_0
2026-03-13T14:03:05.054596Z [info     ] Starting 1 background workers  [langgraph_runtime_inmem.queue] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=asyncio_0
2026-03-13T14:03:05.054682Z [info     ] Worker stats                   [langgraph_runtime_inmem.queue] active=0 api_variant=local_dev available=1 langgraph_api_version=0.7.70 max=1 thread_name=asyncio_0
Server started in 0.74s
2026-03-13T14:03:05.248434Z [info     ] Server started in 0.74s        [browser_opener] api_variant=local_dev langgraph_api_version=0.7.70 message='Server started in 0.74s' thread_name='Thread-2 (_open_browser)'
🎨 Opening Studio in your browser...
2026-03-13T14:03:05.248677Z [info     ] 🎨 Opening Studio in your browser... [browser_opener] api_variant=local_dev langgraph_api_version=0.7.70 message='🎨 Opening Studio in your browser...' thread_name='Thread-2 (_open_browser)'
URL: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
2026-03-13T14:03:05.248791Z [info     ] URL: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024 [browser_opener] api_variant=local_dev langgraph_api_version=0.7.70 message='URL: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024' thread_name='Thread-2 (_open_browser)'
2026-03-13T14:03:05.556523Z [info     ] Queue stats                    [langgraph_runtime_inmem.queue] api_variant=local_dev langgraph_api_version=0.7.70 n_pending=0 n_running=0 pending_runs_wait_time_max_secs=None pending_runs_wait_time_med_secs=None pending_unblocked_runs_wait_time_max_secs=None thread_name=asyncio_0
2026-03-13T14:03:05.989168Z [info     ] Getting auth instance: None    [langgraph_api.auth.custom] api_variant=local_dev langgraph_api_version=0.7.70 langgraph_auth=None method=GET path=/assistants/{assistant_id} thread_name=MainThread
2026-03-13T14:03:15.131598Z [info     ] 18 changes detected            [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:03:25.146287Z [info     ] 18 changes detected            [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:03:35.165205Z [info     ] 18 changes detected            [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:03:45.176504Z [info     ] 18 changes detected            [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:03:55.163931Z [info     ] 18 changes detected            [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:04:05.206612Z [info     ] Worker stats                   [langgraph_runtime_inmem.queue] active=0 api_variant=local_dev available=1 langgraph_api_version=0.7.70 max=1 thread_name=asyncio_0
2026-03-13T14:04:05.219667Z [info     ] 18 changes detected            [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:04:05.709068Z [info     ] Queue stats                    [langgraph_runtime_inmem.queue] api_variant=local_dev langgraph_api_version=0.7.70 n_pending=0 n_running=0 pending_runs_wait_time_max_secs=None pending_runs_wait_time_med_secs=None pending_unblocked_runs_wait_time_max_secs=None thread_name=asyncio_0
2026-03-13T14:04:15.228169Z [info     ] 18 changes detected            [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:04:25.235768Z [info     ] 18 changes detected            [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:04:35.266162Z [info     ] 18 changes detected            [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:04:45.263966Z [info     ] 18 changes detected            [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:04:55.319641Z [info     ] 18 changes detected            [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:04:56.832060Z [info     ] Created run                    [langgraph_api.models.run] after_seconds=0 api_variant=local_dev assistant_id=159a96dd-089f-5e76-a3e7-a9203ccd8548 checkpoint_id=None if_not_exists=reject langgraph_api_version=0.7.70 method=POST multitask_strategy=rollback path=/threads/{thread_id}/runs/stream request_id=889b40d3-613d-4692-b40c-5460e392b72e run_create_ms=2 run_id=019ce783-be7f-7453-b96c-fde8e38f3986 run_put_ms=0 stream_mode=['debug', 'messages'] stream_resumable=False temporary=False thread_id=dfa1129d-2a4b-4d9a-b210-151992bed954 thread_name=MainThread
2026-03-13T14:04:57.335464Z [info     ] Starting background run        [langgraph_api.worker] api_variant=local_dev assistant_id=159a96dd-089f-5e76-a3e7-a9203ccd8548 graph_id=customer_support langgraph_api_version=0.7.70 request_id=889b40d3-613d-4692-b40c-5460e392b72e resumable=False run_attempt=1 run_creation_ms=1 run_id=019ce783-be7f-7453-b96c-fde8e38f3986 run_queue_ms=503 run_started_at=2026-03-13T14:04:57.335205+00:00 run_stream_start_ms=0 temporary=False thread_id=dfa1129d-2a4b-4d9a-b210-151992bed954 thread_name=asyncio_0
2026-03-13T14:04:57.365076Z [info     ] Background run succeeded       [langgraph_api.worker] api_variant=local_dev assistant_id=159a96dd-089f-5e76-a3e7-a9203ccd8548 graph_id=customer_support langgraph_api_version=0.7.70 request_id=889b40d3-613d-4692-b40c-5460e392b72e run_attempt=1 run_completed_in_ms=534 run_created_at=2026-03-13T14:04:56.831977+00:00 run_ended_at=2026-03-13T14:04:57.364926+00:00 run_exec_ms=29 run_id=019ce783-be7f-7453-b96c-fde8e38f3986 run_started_at=2026-03-13T14:04:57.335205+00:00 run_wait_time_ms=503 thread_id=dfa1129d-2a4b-4d9a-b210-151992bed954 thread_name=asyncio_0
2026-03-13T14:05:05.337537Z [info     ] 9 changes detected             [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:05:05.385526Z [info     ] Worker stats                   [langgraph_runtime_inmem.queue] active=0 api_variant=local_dev available=1 langgraph_api_version=0.7.70 max=1 thread_name=asyncio_0
2026-03-13T14:05:05.886806Z [info     ] Queue stats                    [langgraph_runtime_inmem.queue] api_variant=local_dev langgraph_api_version=0.7.70 n_pending=0 n_running=0 pending_runs_wait_time_max_secs=None pending_runs_wait_time_med_secs=None pending_unblocked_runs_wait_time_max_secs=None thread_name=asyncio_1
2026-03-13T14:05:15.357487Z [info     ] 9 changes detected             [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:05:25.334715Z [info     ] 9 changes detected             [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:05:35.371068Z [info     ] 9 changes detected             [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:05:38.966489Z [info     ] Created run                    [langgraph_api.models.run] after_seconds=0 api_variant=local_dev assistant_id=159a96dd-089f-5e76-a3e7-a9203ccd8548 checkpoint_id=None if_not_exists=reject langgraph_api_version=0.7.70 method=POST multitask_strategy=rollback path=/threads/{thread_id}/runs/stream request_id=5ee3af68-5389-4195-bdd6-e11fa5a4b69a run_create_ms=2 run_id=019ce784-6316-7701-8b9e-6b18ae20c786 run_put_ms=0 stream_mode=['debug', 'messages'] stream_resumable=False temporary=False thread_id=dfa1129d-2a4b-4d9a-b210-151992bed954 thread_name=MainThread
2026-03-13T14:05:39.968739Z [info     ] Starting background run        [langgraph_api.worker] api_variant=local_dev assistant_id=159a96dd-089f-5e76-a3e7-a9203ccd8548 graph_id=customer_support langgraph_api_version=0.7.70 request_id=5ee3af68-5389-4195-bdd6-e11fa5a4b69a resumable=False run_attempt=1 run_creation_ms=2 run_id=019ce784-6316-7701-8b9e-6b18ae20c786 run_queue_ms=1001 run_started_at=2026-03-13T14:05:39.968426+00:00 run_stream_start_ms=0 temporary=False thread_id=dfa1129d-2a4b-4d9a-b210-151992bed954 thread_name=asyncio_0
2026-03-13T10:05:40.715 [BAML INFO] Function ClassifyTicket:
    Client: ClaudeHaiku (claude-haiku-4-5-20251001) - 721ms. StopReason: end_turn. Tokens(in/out): 123/30
    ---PROMPT---
    user: You are a support ticket classifier.
    Classify the issue into exactly one of: billing, technical, general.
    Provide a one-sentence summary of the ticket.
    user: Reply with the structured format below. No markdown, no code fences.
    user: hi I'm having some billing issue.

    Answer in JSON using this schema:
    {
      // Exactly one of: billing, technical, general
      category: "billing" or "technical" or "general",
      // One sentence summary of the support ticket
      summary: string,
    }

    ---LLM REPLY---
    {
      "category": "billing",
      "summary": "Customer is experiencing a billing issue and needs assistance."
    }
    ---Parsed Response (class TicketClassification)---
    {
      "category": "billing",
      "summary": "Customer is experiencing a billing issue and needs assistance."
    }

[Supervisor] Classified as: 'billing' — routing to billing_agent

[BillingAgent] Starting — need to clarify the order number.
2026-03-13T14:05:40.728152Z [info     ] Background run succeeded       [langgraph_api.worker] api_variant=local_dev assistant_id=159a96dd-089f-5e76-a3e7-a9203ccd8548 graph_id=customer_support langgraph_api_version=0.7.70 request_id=5ee3af68-5389-4195-bdd6-e11fa5a4b69a run_attempt=1 run_completed_in_ms=1764 run_created_at=2026-03-13T14:05:38.966431+00:00 run_ended_at=2026-03-13T14:05:40.728032+00:00 run_exec_ms=759 run_id=019ce784-6316-7701-8b9e-6b18ae20c786 run_started_at=2026-03-13T14:05:39.968426+00:00 run_wait_time_ms=1001 thread_id=dfa1129d-2a4b-4d9a-b210-151992bed954 thread_name=asyncio_1
2026-03-13T14:05:45.395726Z [info     ] 9 changes detected             [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:05:55.391764Z [info     ] 9 changes detected             [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:06:05.412023Z [info     ] 9 changes detected             [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:06:05.791472Z [info     ] Worker stats                   [langgraph_runtime_inmem.queue] active=0 api_variant=local_dev available=1 langgraph_api_version=0.7.70 max=1 thread_name=asyncio_1
2026-03-13T14:06:06.293840Z [info     ] Queue stats                    [langgraph_runtime_inmem.queue] api_variant=local_dev langgraph_api_version=0.7.70 n_pending=0 n_running=0 pending_runs_wait_time_max_secs=None pending_runs_wait_time_med_secs=None pending_unblocked_runs_wait_time_max_secs=None thread_name=asyncio_0
2026-03-13T14:06:15.232566Z [info     ] Created run                    [langgraph_api.models.run] after_seconds=0 api_variant=local_dev assistant_id=159a96dd-089f-5e76-a3e7-a9203ccd8548 checkpoint_id=None if_not_exists=reject langgraph_api_version=0.7.70 method=POST multitask_strategy=rollback path=/threads/{thread_id}/runs/stream request_id=0b1547bb-7a20-492c-a5da-13fd579c13cb run_create_ms=2 run_id=019ce784-f0c0-70e3-b40b-49354783799c run_put_ms=0 stream_mode=['debug', 'messages'] stream_resumable=False temporary=False thread_id=dfa1129d-2a4b-4d9a-b210-151992bed954 thread_name=MainThread
2026-03-13T14:06:15.424435Z [info     ] 9 changes detected             [watchfiles.main] api_variant=local_dev langgraph_api_version=0.7.70 thread_name=MainThread
2026-03-13T14:06:15.819677Z [info     ] Starting background run        [langgraph_api.worker] api_variant=local_dev assistant_id=159a96dd-089f-5e76-a3e7-a9203ccd8548 graph_id=customer_support langgraph_api_version=0.7.70 request_id=0b1547bb-7a20-492c-a5da-13fd579c13cb resumable=False run_attempt=1 run_creation_ms=2 run_id=019ce784-f0c0-70e3-b40b-49354783799c run_queue_ms=586 run_started_at=2026-03-13T14:06:15.819331+00:00 run_stream_start_ms=0 temporary=False thread_id=dfa1129d-2a4b-4d9a-b210-151992bed954 thread_name=asyncio_0

[BillingAgent] Starting — need to clarify the order number.
2026-03-13T14:06:15.839911Z [info     ] Background run succeeded       [langgraph_api.worker] api_variant=local_dev assistant_id=159a96dd-089f-5e76-a3e7-a9203ccd8548 graph_id=customer_support langgraph_api_version=0.7.70 request_id=0b1547bb-7a20-492c-a5da-13fd579c13cb run_attempt=1 run_completed_in_ms=609 run_created_at=2026-03-13T14:06:15.232350+00:00 run_ended_at=2026-03-13T14:06:15.839784+00:00 run_exec_ms=20 run_id=019ce784-f0c0-70e3-b40b-49354783799c run_started_at=2026-03-13T14:06:15.819331+00:00 run_wait_time_ms=586 thread_id=dfa1129d-2a4b-4d9a-b210-151992bed954 thread_name=asyncio_1

What we noticed is that thread_id and thread_name stay the same, but request_id is different across the repeated invocations of the billing_decision node (maybe that’s expected?). run_id changes as well. Also, notice the flags resumable=False and checkpoint_id=None in the log lines above.

Lastly, here is how we define our graph in code.

Subgraph

def build_billing_graph() -> StateGraph:
    """Build billing graph as a top-level agent for Studio."""
    builder = StateGraph(SupportState)

    builder.add_node("billing_decision", billing_decision)
    builder.add_node("reimbursement", reimbursement)
    builder.add_node("payment", payment)
    builder.add_node("other", other)

    builder.add_edge(START, "billing_decision")
    builder.add_edge("reimbursement", END)
    builder.add_edge("payment", END)
    builder.add_edge("other", END)

    # return builder.compile()
    # return builder.compile(checkpointer=None)
    return builder.compile(checkpointer=True)
    # return builder.compile(checkpointer=MemorySaver())


def build_billing_subgraph():
    """Compatibility helper: used by main supervisor graph as a subgraph."""
    return build_billing_graph()

Supervisor graph

def build_graph() -> StateGraph:
    billing_subgraph = build_billing_subgraph()

    builder = StateGraph(SupportState)

    builder.add_node("supervisor", supervisor)
    builder.add_node("billing_agent", billing_subgraph)
    builder.add_node("technical_agent", technical_agent)
    builder.add_node("general_agent", general_agent)
    builder.add_node("supervisor_after_billing", supervisor_after_billing)

    builder.add_edge(START, "supervisor")
    builder.add_edge("billing_agent", "supervisor_after_billing")
    builder.add_edge("supervisor_after_billing", END)

    return builder.compile(checkpointer=MemorySaver())

How interrupts are called

1/ from the root graph (from supervisor node)

def supervisor(
    state: SupportState,
) -> Command[Literal["billing_agent", "technical_agent", "general_agent"]]:
    raw = next(
        (m.content for m in reversed(state["messages"]) if isinstance(m, HumanMessage)),
        "No message provided",
    )
    # Agent Chat UI sends content as a list of blocks; extract plain text.
    if isinstance(raw, list):
        user_msg = " ".join(block["text"] for block in raw if block.get("type") == "text")
    else:
        user_msg = raw

    name = interrupt("What's your name?")

    category, summary = _classify(user_msg)
    print(f"\n[Supervisor] Classified as: '{category}' — routing to {category}_agent")

    return Command(
        update={
            "issue_category": category,
            "issue_summary": summary,
            "messages": [
                {
                    "role": "assistant",
                    "content": f"[Supervisor] Routing to {category} support team.",
                }
            ],
        },
        goto=f"{category}_agent",
    )

2/ from the billing subgraph (from the billing_decision node)

def billing_decision(
    state: SupportState,
) -> Command[Literal["reimbursement", "payment", "other"]]:
    """Interrupt to gather clarification, then decide which billing action to take."""
    print("\n[BillingAgent] Starting — need to clarify the order number.")

    clarification = interrupt(
        "Could you please provide your order number or invoice ID so we can look up your account?"
    )
    print(f"[BillingAgent] Received clarification: '{clarification}'")

    action = random.choice(("reimbursement", "payment", "other"))
    print(f"[BillingAgent] Decision: '{action}'")

    return Command(
        update={
            "billing_action": action,
            "messages": [
                {"role": "user", "content": f"[User clarification] {clarification}"}
            ],
        },
        goto=action,
    )

Any guidance would be really appreciated! Thank you.

Hello @hyunkyo,

The Issue

You’re using checkpointer=True for your billing subgraph:

def build_billing_graph() -> StateGraph:
    builder = StateGraph(SupportState)
    # ... nodes ...
    return builder.compile(checkpointer=True)

While checkpointer=True technically supports interrupts, it enables per-thread persistence, which means the subgraph’s state accumulates across multiple invocations on the same thread. According to the Subgraph Persistence docs:

Per-thread mode (checkpointer=True): State accumulates across calls on the same thread. Each call picks up where the last one left off.

This mode is designed for subagents that need multi-turn conversation memory (like a research assistant building context over several exchanges), but it’s not appropriate for your use case where each billing request should be independent.
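To make the difference concrete, here is a tiny toy sketch of the two modes (plain Python, not LangGraph internals; ToySubgraph is a made-up name for illustration):

```python
# Toy illustration of the two persistence modes: per-thread state
# accumulates across calls on the same thread; per-invocation state
# starts fresh on every call.

class ToySubgraph:
    def __init__(self, per_thread: bool):
        self.per_thread = per_thread
        self._store: dict[str, dict] = {}  # thread_id -> accumulated state

    def invoke(self, thread_id: str, update: dict) -> dict:
        if self.per_thread:
            state = self._store.setdefault(thread_id, {})  # resume prior state
        else:
            state = {}  # fresh state on every invocation
        state.update(update)
        return dict(state)


per_thread = ToySubgraph(per_thread=True)
per_thread.invoke("t1", {"invoice": "INV-1"})
print(per_thread.invoke("t1", {}))  # previous invoice is remembered

per_invocation = ToySubgraph(per_thread=False)
per_invocation.invoke("t1", {"invoice": "INV-1"})
print(per_invocation.invoke("t1", {}))  # each call is independent
```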

Why It’s Causing Problems

The docs explicitly state:

“Per-invocation is the right choice for most applications, including multi-agent systems where subagents handle independent requests.”

Your billing subagent handles independent customer requests; you do not want it to remember the previous customer’s invoice ID when handling a new issue. More importantly, per-thread persistence can cause checkpoint namespace conflicts, especially with Studio’s streaming API and resume mechanism. The resumable=False and checkpoint_id=None in your logs are symptoms of this conflict.

The Solution

Change your billing subgraph to use per-invocation persistence (the default):

def build_billing_graph() -> StateGraph:
    """Build billing graph as a top-level agent for Studio."""
    builder = StateGraph(SupportState)
    
    builder.add_node("billing_decision", billing_decision)
    builder.add_node("reimbursement", reimbursement)
    builder.add_node("payment", payment)
    builder.add_node("other", other)
    
    builder.add_edge(START, "billing_decision")
    builder.add_edge("reimbursement", END)
    builder.add_edge("payment", END)
    builder.add_edge("other", END)
    
    # Use per-invocation persistence (default)
    return builder.compile()  # or explicitly: checkpointer=None

With checkpointer=None:

  • Each billing request starts fresh (appropriate for independent customer issues)
  • Interrupts still work (the subgraph inherits the parent’s MemorySaver())
  • No checkpoint namespace conflicts
  • Follows the recommended pattern for multi-agent systems

Why It Works Locally But Not in Studio

Your local script likely works because it’s a clean, single-execution test. Studio’s streaming API manages checkpoint state across multiple resume attempts, which can expose edge cases with per-thread persistence that did not appear in simple local tests.

Let me know if this resolves the issue!

@keenborder786 Thanks for the response! Using per-invocation persistence on the subagent with return builder.compile() does not resolve the issue. The symptom is exactly the same: in LangSmith Studio, the billing subgraph loops on the billing_decision step, asking the same interrupt question. :sad_but_relieved_face:

Okay @hyunkyo I would then recommend waiting for a response from an official team member, as they’ll be better positioned to help you out. @mdrxy

@mdrxy Sorry to spam, but since you were tagged, do you mind taking a look at this issue? It is significantly limiting our ability to use LangSmith Studio.

Hi @hyunkyo

Have you tried the solution from this issue: Subgraph (using interrupt) restarts instead of resuming from internal breakpoint · Issue #4796 · langchain-ai/langgraph · GitHub ?

I have changed the subgraph compilation by changing checkpointer=checkpointer to checkpointer=True

@pawel-twardziak Yup, using checkpointer=True when compiling the subgraph does not resolve this UI issue in LangSmith Studio.

Alright, will try to reproduce it tomorrow.
Any hints I should take into account @hyunkyo ?

Thanks for the help! @pawel-twardziak

Other than what I shared in the original question, I don’t think so.

hi @hyunkyo

how do you run your graph? Via langgraph dev or something else?

@pawel-twardziak Via langgraph dev