I have read several articles on how to update graph state from inside a tool, and the solution in all of them is to return a Command object. However, all of those solutions rely on the prebuilt ToolNode, which requires the list of tools before the graph is built. Besides ToolNode, Command also works with other standard prebuilt components such as create_react_agent.
In my application, multiple users can use the agent with different sets of tools; they can enable or disable tools as they choose. In addition, each user can pick different personas backed by different LLM models. Because the main structure is the same and only the tools, persona, and LLM model differ, I create the graph object once at the start of the service lifespan. When the service receives a request from a user whose configuration has not been prepared yet, I prepare it: the LLM instances, the Runnables for certain nodes, and the tool objects. I pass this prepared object as config when invoking the graph, and access the runnables and tool objects through the config variable inside the nodes.
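To make the setup concrete, here is a minimal pure-Python sketch of that lazy per-user preparation flow. All the names (PreparedConfig, get_or_prepare, the settings keys) are illustrative stand-ins, not my real code:

```python
# Hypothetical sketch: the graph is built once; per-user resources are prepared
# on first request and passed to the graph via the config dict at invoke time.

class PreparedConfig:
    def __init__(self, llm, runnables, tools):
        self.llm = llm
        self.runnables = runnables
        self.tools = tools

_prepared: dict[str, PreparedConfig] = {}  # cache keyed by user id

def get_or_prepare(user_id: str, user_settings: dict) -> PreparedConfig:
    if user_id not in _prepared:
        # Stand-ins for real LLM / runnable / tool construction
        llm = f"llm:{user_settings['model']}"
        tools = {name: f"tool:{name}" for name in user_settings["enabled_tools"]}
        _prepared[user_id] = PreparedConfig(llm, runnables={}, tools=tools)
    return _prepared[user_id]

# At request time, the prepared objects go under config["configurable"],
# where the nodes (and the custom tool node) can read them back:
prepared = get_or_prepare("u1", {"model": "gpt-x", "enabled_tools": ["search"]})
invoke_config = {"configurable": {"tools": prepared.tools, "llm": prepared.llm}}
```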
I am not using any prebuilt component like create_react_agent, ToolNode, or AgentState because my application requires a fairly complex flow. Instead, I use a custom tool_node that works similarly to ToolNode, plus a custom agent node. My current tool node supports updating the state in a style similar to artifact-style tool outputs: my tools return a string feedback plus a dictionary of state_update values, which I then use to update the state.
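For clarity, this is roughly what that artifact-style contract looks like as a pure-Python sketch (the tool names and the merge loop are illustrative, not my actual implementation):

```python
# Hypothetical sketch: each tool returns (feedback_string, state_update_dict),
# and the custom tool node merges the dicts into one state update.

def weather_tool(city: str):
    return f"Weather fetched for {city}", {"last_city": city}

def counter_tool():
    return "Counted", {"call_count": 1}

def run_tools(tool_calls):
    feedback, state_updates = [], {}
    for fn, args in tool_calls:
        response, update = fn(*args)   # the (str, dict) contract
        feedback.append(response)
        state_updates.update(update)   # later updates overwrite earlier keys
    return feedback, state_updates

feedback, updates = run_tools([(weather_tool, ("Pune",)), (counter_tool, ())])
# feedback -> ["Weather fetched for Pune", "Counted"]
# updates  -> {"last_city": "Pune", "call_count": 1}
```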
This setup works fine, but it makes the tool implementations complex. Therefore, I want to use Command inside the tools, which is not possible with my current tool node.
A Command transfers control of the graph from one node to its goto node. The prebuilt ToolNode hands control to the respective tool, but that is not the case in my custom tool node: control over the graph always stays with my tool node, and the tools are called inside it.
I have tried using ToolNode, but it is wired in during the graph-building phase and requires the list of tools, which only becomes available when a new request arrives; my graph object is built well before that point.
There are two prebuilt components in LangMem: SummarizationNode and summarize_messages. Both work the same way; the only difference is that the former is attached as a node in the graph, whereas the latter is used as a function inside a node. Is there a similar kind of implementation for ToolNode?
What should I do so that I can use Command in tools with my current agent setup?
Could you share your custom tool node implementation, specifically how you're invoking the tools and handling their results? I think the issue is that ToolNode has built-in logic for Command objects that your custom node might be missing.
Would help to see the code to figure out what needs to be tweaked.
To support Command in your custom tool node, you need to check whether a tool returns a Command object and handle it differently from your tuple pattern. Check out the example below:
import logging

from langgraph.types import Command
from langchain_core.messages import ToolMessage
from langchain_core.runnables import RunnableConfig

logger = logging.getLogger(__name__)

# GraphState is your application's custom state schema
async def tool_node(state: GraphState, config: RunnableConfig | None):
    try:
        tool_calls = state.get("messages", [])[-1].tool_calls
        new_messages = []
        state_updates = {}  # Collect state updates instead of mutating directly
        for tool_call in tool_calls:
            tool_args = tool_call['args'].copy()
            tool_args['state'] = state
            tool_args['config'] = config
            try:
                tool = config.get("configurable").get("tools").get(tool_call['name'])
                result = await tool.ainvoke(tool_args)
                if isinstance(result, Command):
                    # Command.update contains the state changes
                    state_updates.update(result.update)
                    # You can also check result.goto for control flow if needed
                elif isinstance(result, tuple):
                    # Your existing tuple pattern (backward compatible)
                    response, tool_state_update = result
                    state_updates.update(tool_state_update)
                    new_messages.append(ToolMessage(content=response, tool_call_id=tool_call['id']))
                else:
                    # Handle simple string responses
                    new_messages.append(ToolMessage(content=str(result), tool_call_id=tool_call['id']))
            except Exception as e:
                logger.exception(f"Exception in tool_node: {e}")
                response = str(e) + "\nPlease fix this error if possible."
                new_messages.append(ToolMessage(content=response, tool_call_id=tool_call['id']))
        # Merge messages from Command with the other tool messages
        all_messages = new_messages
        if "messages" in state_updates:
            command_messages = state_updates.pop("messages")
            all_messages = command_messages + new_messages
        # Build the return dict with all updates
        updates = {"messages": all_messages}
        updates.update(state_updates)
        return updates
    except Exception as e:
        logger.exception(f"Exception in tool_node: {e}")
        raise
Now your tools can return Command:
from langgraph.types import Command
from langchain_core.messages import ToolMessage

async def my_tool(query: str, state: dict, config: dict):
    # Access tool_call_id if needed
    tool_call_id = config.get("tool_call_id")
    # Do your work
    result = await some_operation(query)
    # Return Command with state updates
    return Command(
        update={
            "custom_field": result,
            "messages": [ToolMessage("Done!", tool_call_id=tool_call_id)],
        }
    )
This should keep your existing (response, dict) tools working while adding Command support. Give it a try and let me know if this helps!
If you want to support Command-driven control flow, maybe this will help. That said, @von-development's code looks great for your needs.
from typing import Dict, Any

from langgraph.types import Command
from langchain_core.messages import ToolMessage, AnyMessage
from langchain_core.runnables import RunnableConfig

def custom_tool_node(state: Dict[str, Any], config: RunnableConfig):
    # Get tools dynamically at runtime
    tools: Dict[str, Any] = (config.get("configurable", {}) or {}).get("tools", {})
    last = state["messages"][-1]
    tool_calls = getattr(last, "tool_calls", None) or []
    if not tool_calls:
        return {}
    new_messages: list[AnyMessage] = []
    for call in tool_calls:
        name = call["name"]
        args = call.get("args", {})
        tool = tools[name]
        result = tool.invoke(args, config=config) if hasattr(tool, "invoke") else tool(**args)
        # If a tool returns a Command, immediately return it from the NODE
        if isinstance(result, Command):
            # Optionally merge any messages accumulated so far
            merged_update = {"messages": state["messages"] + new_messages}
            if getattr(result, "update", None):
                merged_update.update(result.update)
            return Command(update=merged_update, goto=getattr(result, "goto", None))
        # Otherwise, treat it as a normal observation
        new_messages.append(
            ToolMessage(content=str(result), tool_call_id=call["id"])  # type: ignore[arg-type]
        )
    # No tool returned a Command, so just update state (messages)
    return {"messages": new_messages}

# Example tool that requests a jump to another node by returning a Command
def escalate_to_review(ticket_id: str) -> Command:
    summary = f"Escalating ticket {ticket_id} for human review"
    return Command(update={"review_summary": summary}, goto="human_review")
If you prefer the "flattened", artifact-style update behavior, merge Command.update into a dict (as in @von-development's code), set a routing flag (e.g., state["next_node"]), and use conditional edges instead of returning Command.
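The routing-flag idea boils down to a tiny routing function. A pure-Python sketch (the "next_node" key and node names are illustrative; you would register the function via add_conditional_edges):

```python
# Hypothetical sketch: the tool node writes a "next_node" key into its state
# update, and a plain routing function reads (and clears) it, instead of the
# node returning a Command with goto.

def route_after_tools(state: dict) -> str:
    # Default back to the agent when no tool requested a jump
    return state.pop("next_node", "agent")

state = {"messages": [], "next_node": "human_review"}  # set by a tool's update dict
next_node = route_after_tools(state)  # -> "human_review"
```

The same function then works unchanged when no tool set the flag, routing back to "agent".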
You could also implement your own class-based node wrapper for tools, e.g. a ToolExecutorNode (similar to SummarizationNode from LangMem), that detects and re-emits Command at the node boundary so the runtime honors goto/update, and that supports sync/async tools and dynamic tool resolution.
But that might be a bit of overengineering in your case.
@von-development @pawel-twardziak
Thanks for the code snippets. I actually want to use Command to change the control flow of the graph, and I can merge your solutions to both update the state and navigate to another node. As of now, my requirements involve executing a single node or tool at a time, so I think it will work as I expect.
I would like to verify one thing with you. My current custom tool node invokes multiple tools in a loop (sequentially, not simultaneously) without handing over control. If I include the Command snippet suggested by @pawel-twardziak after updating the state_updates dictionary, and the agent calls multiple tools that all return Command objects, wouldn't this logic hand control over to the first tool's goto node? And what happens with the original ToolNode implementation? Does it call all the tools simultaneously and hand control over to the goto nodes assigned in each tool's Command? If so, how are these multiple control branches merged back into a single flow? I am just curious, because what I have learned about Command is that it is an upgrade to Send, which can create multiple control branches.
Thanks for your reply and feedback. This is how I understand your questions:
Q1 - Confirm custom node behavior: "My custom tool node invokes multiple tools sequentially (in a loop) without handing over control during execution - correct?"
Yes. Your custom node processes tool calls in sequence and only yields control when the node returns. In your snippet, state is updated and ToolMessages are collected, then a single update is returned. Control flow changes only when the node itself returns a Command (node-level). See Command semantics:
Q2 - Sequential node with Command short-circuit: "If I add the Command-based snippet and multiple tools each return a Command, will control hand over to the first tool's goto (short-circuit)?"
Yes. If you immediately return Command(...) upon the first Command, the node short-circuits and hands control to that goto. Example branch inside your node:
if isinstance(result, Command):
    merged = {"messages": state["messages"] + new_messages}
    if result.update:
        merged.update(result.update)
    return Command(update=merged, goto=result.goto)
To run all tools and then decide, collect the Commands and choose a winner after the loop.
Q3 - Native ToolNode execution model: "Does the original ToolNode call all tools simultaneously, and hand over control to each tool's goto?"
Yes. ToolNode executes tool calls in parallel (async: asyncio.gather; sync: a thread/executor map) and then combines the outputs. If tools return Command, ToolNode returns a list that can include Commands and/or ToolMessages. The runtime honors Command.goto (potentially fanning out). Source excerpts:
Q4 - How do multiple control branches merge into one?
They don't auto-merge. Branches converge only where your graph topology brings them together, and your state reducers define how the updates combine (e.g., a messages channel using a list-add reducer). Use conditional edges or a join node to converge.
With "merging", you may be thinking of BSP (Bulk Synchronous Parallel).
Relation to BSP/Pregel: LangGraphâs execution is step-based with fan-out and fan-in. Commands (with multiple Send targets) spawn parallel tasks. The runtime applies updates and advances at step boundaries, similar to BSP supersteps. The codebase even uses Pregel terminology.
Important differences: BSP implies a strict global barrier at each superstep. LangGraph enforces step boundaries but does not auto-merge branches; you must converge them via graph topology and channel reducers. ToolNode's concurrent tool calls are a "mini-superstep" inside the node, after which the results are combined.
Fan-in example (map-reduce style)
Fan out with Send to process items in parallel, then fan in by returning to a join node whose reducer appends the results.
import operator
from typing import Annotated, TypedDict

from langgraph.types import Send
from langgraph.graph import StateGraph, START, END

class OverallState(TypedDict):
    subjects: list[str]
    # Fan-in happens via the list-add reducer on "jokes"
    jokes: Annotated[list[str], operator.add]

def router(state: OverallState):
    # Fan-out: create one Send per subject
    return [Send("generate_joke", {"subject": s}) for s in state["subjects"]]

def generate_joke(state: dict):
    subj = state["subject"]
    return {"jokes": [f"Joke about {subj}"]}

builder = StateGraph(OverallState)
builder.add_node("generate_joke", generate_joke)
builder.add_conditional_edges(START, router)
builder.add_edge("generate_joke", END)  # Fan-in at END; the list-add reducer appends
graph = builder.compile()

# Invoking with two subjects results in both jokes being appended
graph.invoke({"subjects": ["cats", "dogs"], "jokes": []})
# => {"subjects": ["cats", "dogs"], "jokes": ["Joke about cats", "Joke about dogs"]}
Optional: Fan-in with Command.goto
from langgraph.types import Command

def router_with_command(state: OverallState):
    sends = [Send("generate_joke", {"subject": s}) for s in state["subjects"]]
    return Command(goto=sends)  # multiple branches; same fan-in behavior via the reducer
Q5 - Is Command an upgrade to Send that can create multiple control branches?
Command generalizes Send. Send dispatches a message to a node; Command can include both update and goto, where goto may be a single node, a sequence of nodes, a Send, or a sequence of Sends. Returning multiple Sends via Command.goto can create multiple branches. See local sources:
ToolNode validates and combines Commands across tool calls, including merging parent-graph Sends into a single parent Command; see the combine logic in tool_node.py.
@neel As for me, I would never create my own custom tool node implementation unless it implemented all the goodies that the native ToolNode implements.
I would rather create multiple tool nodes with different tool sets and a router for them.
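That router could be a small function used with conditional edges, something like this pure-Python sketch (the tool sets and node names are made up for illustration):

```python
# Hypothetical sketch: one tool node per tool set, plus a routing function that
# inspects the last AI message's tool calls and picks the node owning them.

SEARCH_TOOLS = {"web_search", "news_search"}
MATH_TOOLS = {"add", "multiply"}

def route_tool_calls(state: dict) -> str:
    tool_calls = state["messages"][-1].get("tool_calls", [])
    names = {call["name"] for call in tool_calls}
    if names & SEARCH_TOOLS:
        return "search_tools_node"
    if names & MATH_TOOLS:
        return "math_tools_node"
    return "agent"  # no recognized tool calls: go back to the agent

state = {"messages": [{"tool_calls": [{"name": "add", "args": {}}]}]}
# route_tool_calls(state) -> "math_tools_node"
```

Each tool node then only ever sees the tools it was built with, so the prebuilt ToolNode can be used for each set.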
Thanks for the clarification and advice, @pawel-twardziak. I know, it's too chaotic to implement all these features while staying flexible about inputs, and I'm not interested in doing that either. When I ran into this issue of using Command with my custom tool node, I explored the ToolNode implementation, but I couldn't follow it after the line below.
I was guessing that multiple control branches might be created if multiple tool calls return Command. I just wanted to confirm that, that's all.
That's a good idea, but instead of creating multiple nodes and routing to the relevant one, I think I should change my initialization flow instead.
Regardless, thanks a lot for your help with my custom implementation, and for your efforts in resolving my doubts.