Hey,
I’m currently facing an issue with a supervisor agent that hands off work to another agent, which ultimately calls a tool. I would like this tool to have a fixed output structure: the output is fairly large and structured, and if I return the tool output back to the agent, it always modifies it in some way.
Here is a simplified example of my problem:
from langchain.agents import AgentState, create_agent
from langchain.chat_models import init_chat_model
from langchain_core.messages import HumanMessage, ToolMessage
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import END, START, StateGraph
from langgraph.prebuilt import ToolRuntime
from langgraph.types import Command
model = init_chat_model("openai:gpt-4.1")


@tool
def weather_tool(city: str) -> str:
    """Return the weather in the specified city."""
    return f"It's sunny in {city}"


@tool
def end_weather_tool(runtime: ToolRuntime, city: str) -> Command:
    """Return the weather in the specified city."""
    return Command(
        graph=Command.PARENT,
        goto=END,
        update={
            "messages": [
                ToolMessage(f"It's sunny in {city}", tool_call_id=runtime.tool_call_id)
            ]
        },
    )


def create_handoff_tool(*, agent_name: str, description: str | None = None):
    name = f"transfer_to_{agent_name}"
    description = description or f"Ask {agent_name} for help."

    @tool(name, description=description)
    def handoff_tool(runtime: ToolRuntime) -> Command:
        tool_message = {
            "role": "tool",
            "content": f"Successfully transferred to {agent_name}",
            "name": name,
            "tool_call_id": runtime.tool_call_id,
        }
        return Command(
            goto=agent_name,
            update={
                "messages": runtime.state["messages"] + [tool_message],
            },
            graph=Command.PARENT,
        )

    return handoff_tool


weather_agent = create_agent(
    model=model,
    system_prompt="You are a weather agent",
    tools=[
        # end_weather_tool,
        weather_tool,
    ],
    name="weather_agent",
)

supervisor_agent = create_agent(
    model=model,
    system_prompt="You are a supervisor, handoff weather questions to the weather_agent",
    tools=[
        create_handoff_tool(agent_name="weather_agent"),
    ],
    name="supervisor_agent",
)

graph = (
    StateGraph(state_schema=AgentState)
    .add_node(supervisor_agent, destinations=(END, "weather_agent"))
    .add_node(weather_agent, destinations=(END,))
    .add_edge(START, "supervisor_agent")
)

compiled_graph = graph.compile(
    checkpointer=InMemorySaver(),
    name="graph",
)

out = compiled_graph.invoke(
    {"messages": [HumanMessage(content="What's the weather in Sydney?")]},
    config=RunnableConfig({"configurable": {"thread_id": "thread1"}}),
)

for msg in out["messages"]:
    msg.pretty_print()
If the agent calls weather_tool, the history looks correct (the supervisor does the handoff, weather_agent calls the tool, and the tool call is recorded in the history), but the tool output still gets rewritten by the agent in its final message:
WITH weather_tool
================================ Human Message =================================
What's the weather in Sydney?
================================== Ai Message ==================================
Tool Calls:
transfer_to_weather_agent (call_H67T0KRvBs8q6EB2ZjFWyEBd)
Call ID: call_H67T0KRvBs8q6EB2ZjFWyEBd
Args:
================================= Tool Message =================================
Name: transfer_to_weather_agent
Successfully transferred to weather_agent
================================== Ai Message ==================================
Tool Calls:
weather_tool (call_a3KFnu3lAG0YU4I85rmJ7iZL)
Call ID: call_a3KFnu3lAG0YU4I85rmJ7iZL
Args:
city: Sydney
================================= Tool Message =================================
Name: weather_tool
It's sunny in Sydney
================================== Ai Message ==================================
It is currently sunny in Sydney. If you need more details or a forecast, let me know!
If the agent calls end_weather_tool instead, which uses Command with goto=END and graph=Command.PARENT, the history is missing the AIMessage that contains the tool call:
WITH end_weather_tool
================================ Human Message =================================
What's the weather in Sydney?
================================== Ai Message ==================================
Tool Calls:
transfer_to_weather_agent (call_PS0Vphj20wWp0RooZYWu8umC)
Call ID: call_PS0Vphj20wWp0RooZYWu8umC
Args:
================================= Tool Message =================================
Name: transfer_to_weather_agent
Successfully transferred to weather_agent
================================= Tool Message =================================
Name: end_weather_tool
It's sunny in Sydney
The next call with this history then fails with the following error:
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid parameter: messages with role 'tool' must be a response to a preceeding message with 'tool_calls'.", 'type': 'invalid_request_error', 'param': 'messages.[12].role', 'code': None}}
Am I doing something I’m not supposed to?
Is it fine to return an AIMessage from the tool instead of a ToolMessage, since that doesn’t corrupt the history?
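For reference, here is roughly what I mean (untested sketch; end_weather_tool_ai is just an illustrative name, everything else is from the example above):

from langchain_core.messages import AIMessage

@tool
def end_weather_tool_ai(city: str) -> Command:
    """Return the weather in the specified city."""
    # Idea: put an AIMessage into the parent state instead of a ToolMessage,
    # so the history never contains a tool-role message without a matching
    # preceding tool_calls entry.
    return Command(
        graph=Command.PARENT,
        goto=END,
        update={
            "messages": [AIMessage(content=f"It's sunny in {city}")],
        },
    )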
I’d appreciate it if someone could propose a better approach if there is one, thank you!
EDIT: I’m not using return_direct=True because when I do, the exceptions are not handled by my @wrap_tool_call middleware, which is omitted from the example.
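The middleware looks roughly like this (simplified sketch; the names and details differ in my real code):

from langchain.agents.middleware import wrap_tool_call

@wrap_tool_call
def handle_tool_errors(request, handler):
    # Simplified: run the tool and convert any exception into a ToolMessage
    # so the agent sees the failure instead of the run crashing.
    try:
        return handler(request)
    except Exception as exc:
        return ToolMessage(
            content=f"Tool failed: {exc}",
            tool_call_id=request.tool_call["id"],
        )

It gets attached to the agents via the middleware parameter of create_agent.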