Make an llm.with_structured_output call a tool

I’m implementing a workflow where one of the nodes invokes an LLM that:

  1. Returns a structured output (written to state), and
  2. Conditionally calls a tool via a conditional edge based on that output, then returns to the same node in a ReAct-like loop.

I wrote something like this:

llm_with_tools = llm.bind_tools(tools)
structured_llm = llm_with_tools.with_structured_output(A_given_class)

The issue:
When the LLM tries to call a tool, it mistakenly uses fields from the structured output class (A_given_class) as tool arguments, instead of the arguments defined by the tool's own input schema.

How can I ensure the LLM uses the proper tool input schema rather than mixing it with the structured output class?

Thanks!

Hey Ignacio! It's best not to combine structured output and tool calling in the same LLM call.

Combining bind_tools() and with_structured_output() creates schema confusion: the LLM sees multiple competing function schemas and can't reliably tell which one serves which purpose. The cleanest approach is to use separate nodes, one with llm.with_structured_output(A_given_class) for structured data and another with llm.bind_tools(tools) for tool calling, connected via conditional edges, as in the sketch below.
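A minimal sketch of that layout, assuming your llm, tools, and A_given_class from above (the node names, State schema, and routing predicate are illustrative, not a fixed API):

from typing import Annotated, Optional
from typing_extensions import TypedDict
from langchain_core.messages import AnyMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode

class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]
    structured: Optional[A_given_class]

tool_llm = llm.bind_tools(tools)                       # sees only tool schemas
extractor = llm.with_structured_output(A_given_class)  # sees only the output schema

def agent(state: State):
    return {"messages": [tool_llm.invoke(state["messages"])]}

def extract(state: State):
    return {"structured": extractor.invoke(state["messages"])}

def route(state: State):
    # Tool calls present -> execute them; otherwise move on to extraction
    return "tools" if state["messages"][-1].tool_calls else "extract"

graph = StateGraph(State)
graph.add_node("agent", agent)
graph.add_node("tools", ToolNode(tools))
graph.add_node("extract", extract)
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", route)
graph.add_edge("tools", "agent")
graph.add_edge("extract", END)
app = graph.compile()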

Sequential processing in one node would also work (which I feel is what you're aiming for): first call llm.with_structured_output(), then conditionally call llm.bind_tools() based on the structured result. Note that this still requires two LLM calls per pass to get reliable results. A rough sketch follows.
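The sequential variant in a single node, again as a sketch (needs_tool is a hypothetical predicate you'd write over the structured result; llm, tools, and A_given_class are assumed from your snippet):

def combined_node(state):
    # First call: structured extraction only, no tool schemas attached
    parsed = llm.with_structured_output(A_given_class).invoke(state["messages"])
    if needs_tool(parsed):  # hypothetical: decide from the structured result
        # Second call: tool calling only, no output-schema constraint
        ai_msg = llm.bind_tools(tools).invoke(state["messages"])
        return {"messages": [ai_msg], "structured": parsed}
    return {"structured": parsed}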

Hey niilooy,
It's actually the other way around: I have a node that first conditionally calls a tool, and then that data needs to be passed to the next node in a structured format. The red arrow below depicts where the structured output is expected.


This is a common challenge when combining structured output with tool calling in LangGraph. The issue is that with_structured_output() constrains the LLM’s output schema, which can conflict with tool argument schemas.

The solution is to use the tools parameter within with_structured_output() itself, rather than chaining bind_tools() separately. This allows the LLM to understand both schemas independently:

from langgraph.graph import StateGraph, END

# Don't do this:
# llm_with_tools = llm.bind_tools(tools)
# structured_llm = llm_with_tools.with_structured_output(A_given_class)

# Do this instead:
unified_model = llm.with_structured_output(
    A_given_class,
    method="json_schema",
    include_raw=True,
    strict=True,
    tools=tools,  # Pass tools here, not via bind_tools
)

Then, in your chatbot node, check what the LLM returned. With include_raw=True, invoke() returns a dict whose 'raw' key holds the underlying AIMessage (including any tool_calls) and whose 'parsed' key holds the validated A_given_class instance, or None when the model chose to call tools instead:

def chatbot_node(state):
    messages = state.get("messages", [])
    result = unified_model.invoke(messages)
    
    raw_msg = result.get('raw')
    parsed_output = result.get('parsed')
    
    if parsed_output:
        # Got structured output - store it and signal completion
        return {
            "messages": messages + [raw_msg],
            "final_structured_output": parsed_output
        }
    else:
        # Got tool calls - continue the loop
        return {"messages": messages + [raw_msg]}

The key insight: when you pass tools directly to with_structured_output(), the LLM treats them as separate decision paths. It will either return a structured response matching your class OR make tool calls with proper tool schemas—never mixing the two.

This creates a clean ReAct-style loop where the LLM can call tools as needed, then produce the final structured output when it has enough information.
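Wiring the loop up looks roughly like this. A sketch, assuming langgraph's prebuilt ToolNode, an add_messages reducer on the messages key, and the unified_model/chatbot_node from above; the final_structured_output key matches what the node returns:

from typing import Annotated, Optional
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    final_structured_output: Optional[A_given_class]

def route(state: AgentState):
    # Done once the structured output has landed in state;
    # otherwise execute the requested tool calls
    return END if state.get("final_structured_output") else "tools"

graph = StateGraph(AgentState)
graph.add_node("chatbot", chatbot_node)
graph.add_node("tools", ToolNode(tools))  # runs tool calls from the raw AIMessage
graph.add_edge(START, "chatbot")
graph.add_conditional_edges("chatbot", route)
graph.add_edge("tools", "chatbot")  # tool results feed into the next LLM pass
app = graph.compile()

With the add_messages reducer in place, chatbot_node could also return just the new raw message rather than the whole history; the reducer handles appending either way.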