Command in ToolNode behaves unexpectedly

Hi everyone :waving_hand:

I’m encountering unexpected behavior with the ToolNode prebuilt, specifically with tools that return a Command. Here’s the relevant tool definition:

# Imports for context (AgentState, DocumentCheckPointer and logger are defined elsewhere in my project)
from pathlib import Path
from typing import Annotated

from langchain_core.messages import HumanMessage, ToolMessage
from langchain_core.tools import tool, InjectedToolCallId
from langgraph.prebuilt import InjectedState
from langgraph.types import Command

@tool
def send_rejection(
    feedback: str,
    state: Annotated[dict, InjectedState] = None,
    tool_call_id: Annotated[str, InjectedToolCallId] = None
):
    """
    Send rejection feedback for the curation process in case of unsatisfactory results.

    Args:
        feedback (str): The critique and requested changes for the curation process.
    """
    tool_message = ToolMessage(
        content="Curation process rejected by evaluator, needs re-processing.",
        tool_call_id=tool_call_id
    )

    human_message = HumanMessage(
        content=f"""
        After evaluating your previous response, we determined that the task was **not performed satisfactorily**. 
        Please retry using the same data and table, while taking the following feedback into careful consideration:
        ```
        {feedback}
        ```
        """
    )

    # Narrow the injected state to the project's AgentState type
    state: AgentState = state
    doc_checkpointer = DocumentCheckPointer(
        root=Path(state.get("document_id")),
        page_id=state.get("page_id"),
        table_id=state.get("table_id")
    )

    # 1. Clear existing artifacts before restarting the curation process
    doc_checkpointer.clear_artifacts()
    logger.info(f"Rejection feedback received: {feedback}")

    # 2. Redirect back to the refinement node for re-processing
    return Command(
        goto="refine_node",
        update={
            "messages": [tool_message, human_message],
            "attempt": state.get("attempts", 1) + 1
        }
    )

However, instead of navigating to the specified node (refine_node), I’m observing that the runtime executes two nodes in parallel: one run of refine_node and another of reflect_node. (Screenshot attached below for context.)

Could this be a bug in the current langgraph version? Before implementing a workaround, I’d prefer some confirmation.

Here’s the corresponding graph configuration for reference:

self.graph = (
    StateGraph(AgentState, context_schema=AgentContext)
    .add_node("__detachable_init_hook__", self.pre_model_hook)
    .add_node("retrieve_node", self.retrieve_node)
    .add_node("refine_node", self.refine_node)
    .add_node("refine_tools_node", ToolNode(tools=REFINE_TOOLS))
    .add_node("reflect_node", self.reflect_node)
    .add_node("reflect_tools_node", ToolNode(tools=REFLECT_TOOLS))
    .add_node("checkpoint_node", self.checkpoint_node)
    .add_edge(START, "__detachable_init_hook__")
    .add_edge("__detachable_init_hook__", "retrieve_node")
    .add_edge("refine_tools_node", "refine_node")
    .add_edge("reflect_tools_node", "reflect_node")
)

Environment details:

langchain                                1.0.1
langchain-core                           1.0.0
langchain-google-genai                   3.0.0
langchain-groq                           1.0.0
langgraph                                1.0.1
langgraph-api                            0.4.44
langgraph-checkpoint                     3.0.0
langgraph-checkpoint-postgres            3.0.0
langgraph-cli                            0.4.4
langgraph-prebuilt                       1.0.1
langgraph-runtime-inmem                  0.14.1
langgraph-sdk                            0.2.9
langgraph-supervisor                     0.0.28
langsmith                                0.4.37

I haven’t received any feedback regarding this issue, so I decided to implement a workaround for this use case. I’ve included it below for reference:

# 1. Check for rejection tool calls
ai_message = cast(AIMessage, response)
for tool_call in ai_message.tool_calls:
    # Handle rejection tool call with reflection attempt limit
    if tool_call.get("name") == "send_rejection" and state.get("attempts", 0) < MAX_REFLECTION_ATTEMPTS:
        call_args = tool_call.get("args", {})
        feedback = call_args.get("feedback", "No feedback provided.")
        # Mock Human response with feedback for re-curation
        feedback_message = HumanMessage(
            content=FEEDBACK_REQUEST_PROMPT(feedback=feedback)
        )
        # Goto refine node with feedback
        return Command(
            goto="refine_node",
            update={
                "messages": [feedback_message],
                "attempts": state.get("attempts", 0) + 1,
            }
        )
# 2. Route to appropriate tool node if needed
if route_to_tool_node(response, REFLECT_TOOLS_NAMES):
    return Command(
        goto="reflect_tools_node",
        update={
            "messages": [response],
        }
    )
# 3. Proceed to checkpoint if no further action is needed
return Command(
    goto="checkpoint_node",
    update={
        "messages": [response],
    }
)
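For completeness, the workaround above references two helpers that aren’t shown in the thread, route_to_tool_node and FEEDBACK_REQUEST_PROMPT. Here is a minimal sketch of what they might look like; this is my own reconstruction, not langgraph API, and the prompt text is adapted from the HumanMessage in the tool definition above:

```python
from typing import Iterable


def route_to_tool_node(response, tool_names: Iterable[str]) -> bool:
    """Return True if the model response carries a call to one of the given tools."""
    # tool_calls on an AIMessage is a list of dicts with "name"/"args"/"id" keys
    return any(
        tool_call.get("name") in tool_names
        for tool_call in getattr(response, "tool_calls", []) or []
    )


def FEEDBACK_REQUEST_PROMPT(feedback: str) -> str:
    """Build the mock human feedback message used for re-curation."""
    return (
        "After evaluating your previous response, we determined that the task was "
        "**not performed satisfactorily**.\n"
        "Please retry using the same data and table, while taking the following "
        f"feedback into careful consideration:\n```\n{feedback}\n```"
    )
```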

Hi @codeonym

I’m very interested in your issue, but I’m short on time and currently on sick leave. I’ll investigate your case when I’m better, since it needs more time than I can spare right now.


Hi @pawel-twardziak, thanks for your interest! Get well soon!


Thanks @codeonym ! :orange_heart: