Hi everyone! I wanted to try building a fairly simple agent with LangGraph that makes some tool calls, but I've been running into an issue where the response I get from the .invoke function is missing data. I haven't found anything online about this yet, so I wanted to try my luck here!
For context:
I wanted to use Kimi-K2.5 hosted on Azure Foundry. I believe that model is OpenAI-API compliant, so I reused the AzureChatOpenAI class from previous testing, which works most of the time for sending requests to the LLM and getting an answer back:
# chat_model.py
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    api_key=settings.azure_foundry_api_key,
    azure_endpoint=settings.azure_foundry_endpoint,
    azure_deployment=settings.azure_foundry_chat_deployment,
    api_version=settings.azure_foundry_api_version,
)
coordinator_agent = llm.bind_tools(TOOLS)
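For reference, a minimal sketch of calling the deployment directly with the plain `openai` SDK to rule out the LangChain layer entirely. The commented-out network part needs real credentials; the constructor arguments mirror the settings object above, and the payload builder is split out just so the request shape is easy to inspect (its name is my own, not from any library):

```python
# Sketch: bypass LangChain and hit the deployment with the plain `openai` SDK.
# `build_chat_payload` is a hypothetical helper, split out so the request
# shape can be inspected on its own.

def build_chat_payload(deployment: str, user_text: str) -> dict:
    """Build a minimal OpenAI-style chat-completions payload."""
    return {
        "model": deployment,
        "messages": [{"role": "user", "content": user_text}],
    }

# Network part (needs `pip install openai` and real credentials):
#
# from openai import AzureOpenAI
# client = AzureOpenAI(
#     api_key=settings.azure_foundry_api_key,
#     azure_endpoint=settings.azure_foundry_endpoint,
#     api_version=settings.azure_foundry_api_version,
# )
# raw = client.chat.completions.create(
#     **build_chat_payload(settings.azure_foundry_chat_deployment, "ping")
# )
# print(raw.model_dump_json())  # inspect the untouched server response
```

If the raw responses are ever empty too, the problem would be on the server side rather than in the LangChain stack.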
My graph is fairly basic. I decided to switch from using create_agent to binding the tools to the LLM directly, in an attempt to get closer to the issue I'm facing by doing some more logging:
# graph.py
from langgraph.graph import START, StateGraph
from langgraph.prebuilt import ToolNode, tools_condition

def build_graph():
    """Build and compile the SPAIC coordinator graph."""
    graph = StateGraph(State, config_schema=Configuration)
    # Nodes
    graph.add_node("require_runtime_config_node", require_runtime_config_node)
    graph.add_node("coordinator", coordinator_node)
    graph.add_node("tools", ToolNode(TOOLS))
    # Edges
    graph.add_edge(START, "require_runtime_config_node")
    graph.add_edge("require_runtime_config_node", "coordinator")
    graph.add_conditional_edges("coordinator", tools_condition)
    graph.add_edge("tools", "coordinator")
    return graph.compile()
As for the node that invokes the LLM, it looks something like this:
# coordinator_node.py
import logging

from langchain_core.messages import AIMessage, SystemMessage

from chat_model import coordinator_agent

logger = logging.getLogger(__name__)

def coordinator_node(state: State) -> dict:
    """Run one coordinator model step and append the generated AI message."""
    input_messages = state.get("messages", [])
    model_input = [SystemMessage(content=_SYSTEM_PROMPT), *input_messages]
    try:
        result = coordinator_agent.invoke(model_input)
    except Exception as e:
        logger.error(
            "[coordinator_node] Agent invocation failed: %s: %s",
            type(e).__name__,
            e,
            exc_info=True,
        )
        raise
    if result is None:
        logger.warning("[coordinator_node] Model returned no message.")
        return {}
    # Warn if the model returned empty content — helps spot silent model failures.
    final = result
    if (
        isinstance(final, AIMessage)
        and not getattr(final, "content", "").strip()
        and not getattr(final, "tool_calls", None)
    ):
        if getattr(final, "response_metadata", None) in (None, {}, []):
            logger.error(
                "[coordinator_node] Model returned an empty message with no response metadata, indicating a likely silent failure. Check next logs for details:"
            )
        logger.warning(
            "[coordinator_node] invoke result: type=%s id=%s content=%r tool_calls=%s response_metadata=%s additional_kwargs=%s",
            type(result).__name__,
            getattr(result, "id", None),
            getattr(result, "content", None),
            getattr(result, "tool_calls", None),
            getattr(result, "response_metadata", {}),
            getattr(result, "additional_kwargs", {}),
        )
    return {"messages": [result]}
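The empty-message check in the node above can be factored into a small duck-typed helper, which also makes the pattern I'm checking for easy to state. This is a sketch; the name `is_silent_failure` is my own invention, not anything from LangChain:

```python
# Sketch: the empty-message check from coordinator_node, factored out.
# Duck-typed via getattr so it works on AIMessage or any stand-in object.

def is_silent_failure(msg) -> bool:
    """True when a message has no content, no tool calls, and no response
    metadata — the combination I suspect means the model reply was lost
    rather than the model genuinely answering with nothing."""
    content = (getattr(msg, "content", "") or "").strip()
    tool_calls = getattr(msg, "tool_calls", None) or []
    metadata = getattr(msg, "response_metadata", None) or {}
    return not content and not tool_calls and not metadata
```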
The logs I'm seeing in the console look something like this:
2026-03-16T11:36:21.255952Z [error ] [coordinator_node] Model returned an empty message with no response metadata, indicating a likely silent failure. Check next logs for details: [nodes.coordinator_node] api_variant=local_dev assistant_id=2690b4e4-0442-45f6-9f92-18bbf0793538 graph_id=spaic_core langgraph_api_version=0.7.65 langgraph_node=coordinator request_id=47cbdd8b-1bf1-4055-bb57-b440d0f50bfc run_attempt=1 run_id=019cf66e-565d-71e3-97da-f9f4db533ff0 thread_id=9cf311b8-5606-475c-87b6-1a6f9c745baf thread_name=MainThread
2026-03-16T11:36:21.256184Z [warning ] [coordinator_node] invoke result: type=AIMessage id=lc_run--019cf66e-c249-7371-99e6-45f9201be563 content='' tool_calls=[] response_metadata={} additional_kwargs={} [nodes.coordinator_node] api_variant=local_dev assistant_id=2690b4e4-0442-45f6-9f92-18bbf0793538 graph_id=spaic_core langgraph_api_version=0.7.65 langgraph_node=coordinator request_id=47cbdd8b-1bf1-4055-bb57-b440d0f50bfc run_attempt=1 run_id=019cf66e-565d-71e3-97da-f9f4db533ff0 thread_id=9cf311b8-5606-475c-87b6-1a6f9c745baf thread_name=MainThread
This is where I'm seeing the empty response_metadata object, and in LangSmith Studio it just shows "no data."
Am I doing something completely wrong? I believe that if the model itself had answered with no message, I should at least be seeing response metadata. The issue happens sporadically; I can't see a pattern.
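Since the failures are sporadic, one stopgap would be retrying the invoke when the result comes back empty. A sketch with the invoke callable injected so it can be exercised without a live model; the function name and structure are my own, not from LangGraph:

```python
# Sketch of a stopgap retry: re-invoke when the returned message looks empty.
# `invoke` is passed in (e.g. coordinator_agent.invoke); the name
# `invoke_with_retry` is my own.

def invoke_with_retry(invoke, model_input, max_attempts: int = 3):
    """Call `invoke` up to `max_attempts` times and return the first result
    that carries content or tool calls; otherwise return the last result."""
    result = None
    for _ in range(max_attempts):
        result = invoke(model_input)
        content = (getattr(result, "content", "") or "").strip()
        tool_calls = getattr(result, "tool_calls", None) or []
        if content or tool_calls:
            return result
    return result
```

That obviously only papers over the problem, which is why I'd still like to understand where the data is getting lost.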
Regarding my versions, I'm running
"langchain>=1.2.10",
"langchain-openai>=1.1.10",
"langgraph>=1.0.0",
on Python 3.12.12.
Thank you all!