Tool function returning a Command with goto causes parallel execution

Description

I’m building a supervisor multi-agent graph myself, and I’m trying to return a Command from a tool to hand off the task to another node (SOP_executor).

I expect the process to go directly to SOP_executor and NOT execute the supervisor again.

But SOP_executor and supervisor are running in parallel.
Why?

Example Code

from typing import Annotated, List
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage, ToolMessage, AIMessageChunk, BaseMessage
from langchain.tools import tool, ToolRuntime
from langgraph.graph import MessagesState, StateGraph, START, END
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI
from langgraph.types import Command

class AgentState(MessagesState):
    pass

@tool
def handoff_job_to_SOP_executor(
    handoffback_msg: Annotated[str, "call SOP executor to execute the job"],
    runtime: ToolRuntime,
) -> Command:
    """Call the SOP executor to execute the job."""
    tool_message = ToolMessage(
        content="success to handoff job to SOP executor",
        tool_call_id=runtime.tool_call_id,
    )
    return Command(
        goto="SOP_executor",
        update={**runtime.state, "messages": runtime.state["messages"] + [tool_message]},
        # graph=Command.PARENT,
    )

def get_qwen_model():
    return ChatOpenAI(
        model="qwen3-max",
        temperature=0.5,
        max_tokens=5000,
        api_key=mykey,  # API key defined elsewhere
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
        streaming=False,
    )

tools = [handoff_job_to_SOP_executor]

def main_super(state: AgentState):
    llm = get_qwen_model()
    print("\n\n[[arrive supervisor]]\n\n")
    llm_with_tools = llm.bind_tools(tools, parallel_tool_calls=True)

    llm_msg = [SystemMessage(content="hand off any job to SOP executor using handoff_job_to_SOP_executor tool")] + state["messages"]
    response = llm_with_tools.invoke(llm_msg)
    state["messages"].append(response)
    print("\n\ndo something\n\n")
    print("\n\n[[bye supervisor]]\n\n")
    return state

def SOP_executor(state: AgentState):

    print("\n\n[[arrive SOP_executor]]\n\n")
    print("\n\ndo something\n\n")
    print("\n\n[[bye SOP_executor]]\n\n")
    return state

def should_continue(state: AgentState):
    messages = state["messages"]
    last_message = messages[-1]

    if hasattr(last_message, 'tool_calls') and last_message.tool_calls:
        return "tools"

    return END


workflow = StateGraph(AgentState)
workflow.add_node("supervisor", main_super)
workflow.add_node("tools", ToolNode(tools))
workflow.add_node("SOP_executor", SOP_executor, destinations=["supervisor"])

workflow.add_edge(START, "supervisor")
workflow.add_edge("SOP_executor", END)
workflow.add_conditional_edges(
    "supervisor",
    should_continue,
    {
        "tools": "tools",
        END: END,
    }
)


workflow.add_edge("tools", "supervisor")

supervisor = workflow.compile()

for _, chunk in supervisor.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": "find US and New York state GDP in 2024. what % of US GDP was New York state?",
            }
        ]
    },
    subgraphs=True
):
    for k, v in chunk.items():
        print()
        print(type(v['messages'][-1]))
        if isinstance(v['messages'][-1], dict):
            print(v['messages'][-1])
        else:
            v['messages'][-1].pretty_print()
        
        print()

System output


[[arrive supervisor]]


do something


[[bye supervisor]]

<class 'langchain_core.messages.ai.AIMessage'>
================================== Ai Message ==================================
Tool Calls:
  handoff_job_to_SOP_executor (call_86f09b6a65ab4ec69f55ea87)
 Call ID: call_86f09b6a65ab4ec69f55ea87
  Args:
    handoffback_msg: find US and New York state GDP in 2024. what % of US GDP was New York state?


<class 'langchain_core.messages.tool.ToolMessage'>
================================= Tool Message =================================
Name: handoff_job_to_SOP_executor

success to handoff job to SOP executor

[[arrive supervisor]]


[[arrive SOP_executor]]


do something


[[bye SOP_executor]]

<class 'langchain_core.messages.tool.ToolMessage'>
================================= Tool Message =================================
Name: handoff_job_to_SOP_executor

success to handoff job to SOP executor

do something


[[bye supervisor]]

<class 'langchain_core.messages.ai.AIMessage'>
================================== Ai Message ==================================

The job has been successfully handed off to the SOP executor for processing. Please wait for the results regarding the US and New York state GDP in 2024, as well as the percentage of US GDP attributed to New York state.

hi @padanes

The parallel run happens because both routes are being scheduled:

  • Your tool returns a Command(goto="SOP_executor", ...), which dynamically routes to SOP_executor.

  • You also have a static edge workflow.add_edge("tools", "supervisor"). Static edges are written every time the tools node finishes, so the graph also schedules supervisor again. Result: SOP_executor and supervisor run in parallel on the next step.

Two changes should fix this:

  1. Do not keep an unconditional edge from tools to supervisor when you want a handoff.
  • Remove workflow.add_edge("tools", "supervisor"). The goto="SOP_executor" will take control and send execution there, so supervisor will not be scheduled again.
  • If you also want the normal “tool → supervisor” loop for other tools, wrap the ToolNode to default back to supervisor only when the tool returns a regular ToolMessage (not a Command):
from langgraph.types import Command
from langchain_core.messages import ToolMessage

def wrap_tool_call(request, execute):
    result = execute(request)
    if isinstance(result, ToolMessage):
        return Command(
            update={"messages": [result]},
            goto="supervisor",
            graph=Command.PARENT,
        )
    return result

workflow.add_node("tools", ToolNode(tools, wrap_tool_call=wrap_tool_call))
# Note: keep the tools->supervisor static edge removed.
  2. OPTIONAL: when returning a Command from a tool (inside a subgraph), target the parent graph explicitly.
return Command(
    goto="SOP_executor",
    update={**runtime.state, "messages": runtime.state["messages"] + [tool_message]},
    graph=Command.PARENT,  # important when running from within a subgraph
)

Why this is expected in LangGraph

  • A Command.goto adds dynamic routing; it does not implicitly disable existing static edges. With both in place, both branches are scheduled.

  • Returning graph=Command.PARENT from a tool is the documented way to navigate in the parent graph from within a tool execution inside a subgraph.
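
To see the scheduling behavior in isolation, here is a minimal, self-contained sketch (node names a/b/c are made up for illustration): node a has a static edge to b and also dynamically routes to c via Command(goto=...), so both successors run in the next step.

import operator
from typing import Annotated
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.types import Command

class State(TypedDict):
    # reducer lets the parallel branches both append without a conflict error
    log: Annotated[list, operator.add]

def a(state: State) -> Command:
    return Command(goto="c", update={"log": ["a"]})  # dynamic route to "c"

def b(state: State):
    return {"log": ["b"]}

def c(state: State):
    return {"log": ["c"]}

g = StateGraph(State)
g.add_node("a", a)
g.add_node("b", b)
g.add_node("c", c)
g.add_edge(START, "a")
g.add_edge("a", "b")  # static edge: written every time "a" finishes
g.add_edge("b", END)
g.add_edge("c", END)

print(g.compile().invoke({"log": []})["log"])  # expect both "b" and "c" in the log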


Brilliant solutions! Thanks a lot!

In my case, my SOP_executor node is not in a subgraph. Does that mean I need to choose option 1 to fix this problem?

BTW, just a kind reminder: when will the guide on handoff implementation come out? (link) I’m looking forward to more examples of multi-agent implementation :slight_smile:


Yes, option 1 should be sufficient for your case. Use option 2 only if the tool runs inside a subgraph.
I have no idea when the docs will finally cover all the crucial topics, but I can help develop them. :slight_smile:
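
For concreteness, option 1 applied to your original graph might look like this (only the edges change; if some tools return a plain ToolMessage, keep the wrap_tool_call wrapper from above so they still route back to the supervisor):

workflow.add_edge(START, "supervisor")
workflow.add_conditional_edges(
    "supervisor",
    should_continue,
    {
        "tools": "tools",
        END: END,
    },
)
workflow.add_edge("SOP_executor", END)
# No workflow.add_edge("tools", "supervisor"): the Command(goto="SOP_executor")
# returned by the handoff tool is now the only route out of "tools", so
# "supervisor" is not scheduled a second time.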

I’ve gotten another error when using Command with Command.PARENT.

Here is the error message:

Command.__init__() got an unexpected keyword argument 'gragh', from:@/Users/lyx/Desktop/Workplace/grtn-tools/live_quality/crontab/ding_stream_chat.py:88 DingTalkAiChat@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/live_quality/Agent/agent.py:118 run_agent@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langgraph/pregel/main.py:2956 astream@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langgraph/pregel/_runner.py:410 atick@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langgraph/pregel/_runner.py:520 _panic_or_proceed@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langgraph/pregel/_retry.py:137 arun_with_retry@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langgraph/_internal/_runnable.py:705 ainvoke@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langgraph/pregel/main.py:3137 ainvoke@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langgraph/pregel/main.py:2956 astream@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langgraph/pregel/_runner.py:304 atick@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langgraph/pregel/_retry.py:137 arun_with_retry@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langgraph/_internal/_runnable.py:705 ainvoke@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langgraph/_internal/_runnable.py:473 ainvoke@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langgraph/prebuilt/tool_node.py:749 _afunc@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langgraph/prebuilt/tool_node.py:1088 _arun_one@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langgraph/prebuilt/tool_node.py:1037 _execute_tool_async@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langgraph/prebuilt/tool_node.py:401 _handle_tool_error@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langgraph/prebuilt/tool_node.py:358 _default_handle_tool_errors@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langgraph/prebuilt/tool_node.py:990 _execute_tool_async@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langchain_core/tools/structured.py:63 ainvoke@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langchain_core/tools/base.py:608 ainvoke@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langchain_core/tools/base.py:1036 arun@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langchain_core/tools/base.py:1002 arun@    
--> /Users/lyx/Desktop/Workplace/grtn-tools/venv312/lib/python3.12/site-packages/langchain_core/tools/structured.py:117 _arun@   
 --> /Users/lyx/Desktop/Workplace/grtn-tools/live_quality/Agent/SOP_executor.py:51 handoffback_job_to_supervisor

I’m using the SOP_executor in a subgraph.

This is my handoff tool code

@tool
async def handoffback_job_to_supervisor(handoffback_msg: Annotated[str, "response to the supervisor"], 
                                        runtime: ToolRuntime)-> Command:
    """
    if you can't handle this task. give it back to the supervisor
    """
    return Command(update={
            "messages": [
                ToolMessage(
                    content=f"go back to supervisor! reason: {handoffback_msg}",
                    tool_call_id=runtime.tool_call_id
                )
            ]
        },
        gragh=Command.PARENT,
        goto="supervisor"
    )

This is the graph code:

        async def awrap_tool_call(request, execute): 
            result = await execute(request)
            if isinstance(result, ToolMessage):
                return Command(
                    update={"messages": [result]},
                    goto="supervisor",
                )
            return result

        workflow.add_node("supervisor", main_super)
        workflow.add_node("tools", ToolNode(tools, awrap_tool_call=awrap_tool_call))
        # SOP_executor_agent.agent is another agent and contains handoff back tool
        workflow.add_node("SOP_executor", SOP_executor_agent.agent)

        workflow.add_edge(START, "supervisor")
        # workflow.add_edge("SOP_executor", END)
        workflow.add_conditional_edges(
            "supervisor",
            should_continue,
            {
                "tools": "tools",
                END: END,
            }
        )

hi @padanes, it’s graph instead of gragh :slight_smile:
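
For reference, the corrected return from your handoff tool (only the keyword changes):

return Command(
    update={
        "messages": [
            ToolMessage(
                content=f"go back to supervisor! reason: {handoffback_msg}",
                tool_call_id=runtime.tool_call_id,
            )
        ]
    },
    graph=Command.PARENT,  # "graph", not "gragh"
    goto="supervisor",
)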

sorry :smiling_face_with_tear: