How to Return Tool Output and Update State Simultaneously Using Command in LangChain?

Can a tool return both its actual result and a state update when using Command? If so, what is the recommended pattern to handle both outputs?

Hi @bharathiselvan

If I understand your point correctly, you want to update the state and return the tool result itself separately, right?

Based on the signature (Types | LangChain Reference), you can’t separate those two pieces of data; as far as I know, everything must be sent via the state.

A tool or node can return a Command that both updates the graph state and controls flow (e.g., route to the next node)

See these:

Thanks @pawel-twardziak , Let me check these links..

Hi @pawel-twardziak , I have a multi-agent system built using LangChain’s create_agent.

Inside one of my tools, I’m updating a custom state variable using a Command (i.e., returning state updates from the tool). The issue I’m seeing is:

  • The custom state update is available only for the current turn

  • On the next user turn, the updated state is no longer present

Question:
Are custom state updates via Command expected to persist only when using LangGraph (specifically a tool node), and not when using LangChain’s create_agent?

In other words, does create_agent not support persistent custom state updates across turns in the same way LangGraph does? Is that the expected behavior?

Hi @bharathiselvan

No, create_agent is not "stateless-only", and it does support persistent custom state updates via Command in the same way as raw LangGraph.

Two questions:

  • Do you use a checkpointer for persistence?
  • Have you defined your custom state variable in state_schema?
from typing import Annotated
from typing_extensions import NotRequired
from langchain.agents import create_agent
from langchain.agents.middleware.types import AgentState, OmitFromInput
from langgraph.checkpoint.memory import InMemorySaver

class MyState(AgentState):
    # visible & required input:
    user_id: str

    # only appears in output / internal state, not required on input:
    computed_score: Annotated[NotRequired[float], OmitFromInput]

agent = create_agent(
    model="openai:gpt-4o",
    tools=[...],
    state_schema=MyState,  # your custom variable must be declared here
    checkpointer=InMemorySaver(),  # critical for persistence across turns
)

A huge favor, @bharathiselvan: if this helps you, could you mark the post as Solved?
That helps keep the forum free of open-ended, hanging, and multi-threaded posts.

Here is the simplest setup I could figure out to update and then read a state variable, using two tools and two calls rather than the single call the OP expected:

from langgraph.types import Command
from langchain.agents import create_agent, AgentState
from langchain.messages import ToolMessage
from langchain.tools import tool, ToolRuntime
from langgraph.checkpoint.memory import InMemorySaver

# "Context" is the application's runtime context schema (defined elsewhere)

class CustomAgentState(AgentState):
    like_chocolate: bool

@tool
def update_state(runtime: ToolRuntime[Context, CustomAgentState], new_state: CustomAgentState):
    """Update state with user preferences. Only include modified keys in new_state."""
    return Command(update={
        **new_state,
        # Don't forget to add the proper ToolMessage
        "messages": [
            ToolMessage(
                "Successfully updated user information",
                tool_call_id=runtime.tool_call_id,
            )
        ],
    })

@tool
def read_state(runtime: ToolRuntime[Context, CustomAgentState]):
    """Read current state with user preferences"""
    return runtime.state

And to call it:

agent_with_state = create_agent(
    system_prompt="Remember user preferences in your state",
    model=model,
    state_schema=CustomAgentState,
    checkpointer=InMemorySaver(),
    tools=[read_state, update_state],
)
for res in agent_with_state.stream(
    {"messages": "Remember that I love chocolate"},
    config={"configurable": {"thread_id": 1}},
):
    pretty_chunk(res)
for res in agent_with_state.stream(
    {"messages": "Do I like chocolate?"},  # second turn reads the stored preference back
    config={"configurable": {"thread_id": 1}},
):
    pretty_chunk(res)

This works but feels quite convoluted; am I missing something? Luckily it’s quite reusable.

Another variation of this question, the following code from the doc seems to handle the case where you want to either return a value or update state (but not both):

@tool
def greet(
    runtime: ToolRuntime[CustomContext, CustomState]
) -> str | Command:
    """Use this to greet the user once you found their info."""
    user_name = runtime.state.get("user_name", None)
    if user_name is None:
        return Command(update={
            "messages": [
                ToolMessage(
                    "Please call the 'update_user_info' tool; it will get and update the user's name.",
                    tool_call_id=runtime.tool_call_id,
                )
            ]
        })
    return f"Hello {user_name}!"

It’s not clear what the Command does here. I suspect that if we returned "Please call the 'update_user_info' tool; it will get and update the user's name." directly as a string, the agent loop would stop there and not call the tool (it would literally tell the user "Please call the tool"). With a Command, the agent loop keeps going, which triggers a new tool call as expected, updating the user info and finally greeting the user.

Related documentation: Short-term memory - Docs by LangChain

Hi @eric-burel

Since this topic is already resolved, it would be better for organizing the content here, and for users searching for topics, if you create a new post :slight_smile: You can also tag me so I’ll pick up your matter.