How to pass runtime to dynamically bind tools?

Hi All,

I’m implementing a custom graph with dynamic tool binding. To implement the ReAct loop, I’m handling messages from the LLM in a separate node that executes tool calls. The related code looks like the following:

from typing import List, TypedDict, cast

import langchain.messages
import langchain_core.runnables.base
import langchain_core.tools.structured
import langgraph.graph
from langchain.tools import tool, ToolRuntime
from langgraph.runtime import Runtime


class SessionState(TypedDict):
    messages: List[langchain.messages.AnyMessage]
    current_tool_list: List[langchain_core.runnables.base.Runnable]


@tool
async def sample_tool(ordinary_param_1: str, runtime: ToolRuntime) -> int:
    """returns 10"""
    return 10


async def _lg_node_tool_call(state: SessionState, runtime: Runtime) -> dict:

    tool_result_list = []
    for tool_call in state["messages"][-1].tool_calls:
        tool_name = tool_call["name"]
        tool_name_list = [t.get_name() for t in state["current_tool_list"]]
        if tool_name in tool_name_list:
            tool_func = cast(
                langchain_core.tools.structured.StructuredTool,
                state["current_tool_list"][tool_name_list.index(tool_name)],
            )

            observation = None
            call_args = tool_call["args"]

            if tool_func.coroutine is not None:
                observation = await tool_func.ainvoke(call_args)
            elif tool_func.func is not None:
                observation = tool_func.invoke(call_args)

            tool_result_list.append(
                langchain.messages.ToolMessage(
                    content=observation, tool_call_id=tool_call["id"]
                )
            )

    return {"messages": state["messages"] + tool_result_list}


graph_builder = langgraph.graph.StateGraph(
    state_schema=SessionState,
)
# other nodes and edges are defined here
graph_builder.add_node("tool_node", _lg_node_tool_call)
graph = graph_builder.compile()
graph.invoke(
    {
        "messages": [],
        "current_tool_list": [sample_tool],
    }
)

This works well for tools that accept only ordinary parameters. However, if a tool accepts a runtime, like sample_tool() above, I get a Pydantic exception saying that runtime is missing:

pydantic_core._pydantic_core.ValidationError: 1 validation error for sample_tool
runtime
  Field required [type=missing, input_value={'ordinary_param_1': 'test_input'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.12/v/missing

I tried to create an instance of the langchain.tools.ToolRuntime object manually inside _lg_node_tool_call(), but Pydantic starts complaining about its fields (e.g. that context must be None, stream_writer must be callable, etc.), so I’m not sure whether this manual construction is supported.

Is there a way to construct ToolRuntime properly, or at least to see all the Pydantic validation rules for it?

hi @yury

The ToolRuntime parameter is not automatically injected when you call tool.ainvoke() / tool.invoke() directly. The injection of ToolRuntime (along with InjectedState, InjectedStore, etc.) is handled internally by ToolNode._inject_tool_args() - a method on the prebuilt ToolNode class.

When you bypass ToolNode by writing a custom tool execution node and calling tool.ainvoke(call_args) directly, the runtime parameter is never injected into call_args, so Pydantic validation fails because it’s a required field.

Source: langgraph/prebuilt/tool_node.py - _inject_tool_args()
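To make the mechanism concrete, here is a minimal plain-Python sketch of the injection idea, using a hypothetical FakeRuntime stand-in rather than the real LangChain classes: the LLM produces only the ordinary arguments, and the executor must merge the runtime in before calling the tool.

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class FakeRuntime:
    """Stand-in for ToolRuntime; illustrative only."""
    tool_call_id: str
    state: dict


def sample_tool(ordinary_param_1: str, runtime: FakeRuntime) -> int:
    """returns 10"""
    assert runtime.tool_call_id  # the tool can rely on the runtime being present
    return 10


def call_tool(tool: Callable[..., Any], llm_args: dict, runtime: FakeRuntime) -> Any:
    # The LLM only produces `llm_args`; the executor merges the runtime in
    # before the call, which is the step skipped when bypassing ToolNode.
    return tool(**llm_args, runtime=runtime)


rt = FakeRuntime(tool_call_id="call_1", state={"messages": []})
print(call_tool(sample_tool, {"ordinary_param_1": "test_input"}, rt))  # 10
```

Calling `sample_tool(ordinary_param_1="test_input")` without the merge step fails for the same reason your Pydantic validation does: the required `runtime` argument is absent.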

Either use ToolNode instead of a custom tool execution node

The simplest and most robust solution is to use the built-in ToolNode, which automatically handles ToolRuntime injection, error handling, parallel execution, and state/store injection.

ToolNode takes care of:

  • Constructing a ToolRuntime instance from the graph Runtime (including context, store, stream_writer)
  • Injecting it (along with InjectedState, InjectedStore) into the tool call arguments before invocation
  • Filtering out validation errors for injected arguments
  • Parallel tool execution

from langchain_core.tools import tool
from langchain.tools import ToolNode, ToolRuntime

@tool
async def sample_tool(ordinary_param_1: str, runtime: ToolRuntime) -> int:
    """returns 10"""
    return 10

# ToolNode handles runtime injection automatically
tool_node = ToolNode([sample_tool])

If you need to dynamically change the tools available at runtime, you can still use ToolNode. Pass the full set of tools to ToolNode at construction time, and control which tools the LLM can call by dynamically binding tools on the model side (using model.bind_tools()). ToolNode will only execute the tool calls that the model actually makes.

from typing import TypedDict

from langchain.chat_models import init_chat_model
from langchain.tools import ToolNode, ToolRuntime
from langchain_core.messages import AnyMessage
from langchain_core.tools import tool
from langgraph.graph import StateGraph
from langgraph.prebuilt import tools_condition
from langgraph.runtime import Runtime


class SessionState(TypedDict):
    messages: list[AnyMessage]


@tool
async def tool_a(x: str, runtime: ToolRuntime) -> str:
    """Tool A that uses runtime."""
    # Assumes the graph is invoked with a context object that has a `user_id` field
    user = runtime.context.user_id
    return f"A({x}) for {user}"


@tool
async def tool_b(y: int) -> str:
    """Tool B."""
    return f"B({y})"


all_tools = [tool_a, tool_b]

model = init_chat_model("gpt-4o")  # any chat model that supports tool calling


async def call_model(state: SessionState, runtime: Runtime) -> dict:
    # Dynamically decide which tools to bind based on state/context
    available_tools = all_tools  # or filter based on runtime.context, state, etc.
    model_with_tools = model.bind_tools(available_tools)
    response = await model_with_tools.ainvoke(state["messages"])
    return {"messages": [response]}


# Pass ALL possible tools to ToolNode - it only runs what the model calls
tool_node = ToolNode(all_tools)

graph = (
    StateGraph(SessionState)
    .add_node("agent", call_model)
    .add_node("tools", tool_node)  # ToolNode handles ToolRuntime injection
    .add_conditional_edges("agent", tools_condition)  # route to "tools" or END
    .add_edge("tools", "agent")
    .set_entry_point("agent")
    .compile()
)

Source: Official docs - Runtime > Inside tools
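For completeness, the filtering step itself can stay plain Python. A hypothetical sketch (the role check and tool names are illustrative, not part of any LangChain API) of picking the subset to pass to bind_tools():

```python
from typing import Callable


def visible_tools(all_tools: list[Callable], user_role: str) -> list[Callable]:
    """Return the subset of tools the model may call for this user (illustrative policy)."""
    admin_only = {"tool_a"}  # hypothetical restriction
    return [
        t for t in all_tools
        if t.__name__ not in admin_only or user_role == "admin"
    ]


def tool_a(x: str) -> str:
    return f"A({x})"


def tool_b(y: int) -> str:
    return f"B({y})"


all_tools = [tool_a, tool_b]
print([t.__name__ for t in visible_tools(all_tools, "guest")])  # ['tool_b']
print([t.__name__ for t in visible_tools(all_tools, "admin")])  # ['tool_a', 'tool_b']
```

The point is that the executor (ToolNode) can safely hold the superset, because the model physically cannot call a tool it was never bound to.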

Or manually inject ToolRuntime in your custom node

If you have a specific reason to keep a custom tool execution node, you need to manually construct and inject the ToolRuntime before calling the tool, similar to what ToolNode._inject_tool_args() does internally.

from typing import cast

import langchain.messages
import langchain_core.tools.structured
from langchain.tools import ToolRuntime
from langgraph.runtime import Runtime


async def _lg_node_tool_call(state: SessionState, runtime: Runtime) -> dict:
    tool_result_list = []
    for tool_call in state["messages"][-1].tool_calls:
        tool_name = tool_call["name"]
        tool_name_list = [t.get_name() for t in state["current_tool_list"]]
        if tool_name in tool_name_list:
            tool_func = cast(
                langchain_core.tools.structured.StructuredTool,
                state["current_tool_list"][tool_name_list.index(tool_name)],
            )

            call_args = tool_call["args"]

            # Manually construct and inject ToolRuntime
            tool_runtime = ToolRuntime(
                state=state,
                tool_call_id=tool_call["id"],
                config={},  # or pass the actual RunnableConfig
                context=runtime.context,
                store=runtime.store,
                stream_writer=runtime.stream_writer,
            )
            call_args = {**call_args, "runtime": tool_runtime}

            observation = None
            if tool_func.coroutine is not None:
                observation = await tool_func.ainvoke(call_args)
            elif tool_func.func is not None:
                observation = tool_func.invoke(call_args)

            tool_result_list.append(
                langchain.messages.ToolMessage(
                    content=observation, tool_call_id=tool_call["id"]
                )
            )
    return {"messages": state["messages"] + tool_result_list}

The key line is:

tool_runtime = ToolRuntime(
    state=state,
    tool_call_id=tool_call["id"],
    config={},
    context=runtime.context,
    store=runtime.store,
    stream_writer=runtime.stream_writer,
)
call_args = {**call_args, "runtime": tool_runtime}

This mirrors what ToolNode does internally in its _func and _afunc methods (see source code).
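One caveat with unconditional injection: a tool that does not declare a runtime parameter would receive an unexpected argument and fail validation. A plain-Python sketch (using inspect on the underlying callable, as a stand-in for checking the tool's actual schema) of a guard that injects only when the function accepts it:

```python
import inspect
from typing import Any, Callable


def maybe_inject_runtime(
    func: Callable[..., Any], call_args: dict, tool_runtime: Any
) -> dict:
    """Merge the runtime into call_args only if `func` declares a `runtime` parameter."""
    if "runtime" in inspect.signature(func).parameters:
        return {**call_args, "runtime": tool_runtime}
    return call_args


def with_rt(x: str, runtime: object) -> str:
    return f"{x}+rt"


def without_rt(x: str) -> str:
    return x


print("runtime" in maybe_inject_runtime(with_rt, {"x": "a"}, object()))     # True
print("runtime" in maybe_inject_runtime(without_rt, {"x": "a"}, object()))  # False
```

In the custom node above, you would apply this guard to `tool_func.coroutine` or `tool_func.func` before building `call_args`, so tools like tool_b that take only ordinary parameters keep working.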

Alternatively, use create_agent with middleware :slight_smile:


Hopefully it helps :flexed_biceps: If you have any further questions, hit me up :slight_smile:

Hi Pawel, sorry for the delay and thank you very much for the comprehensive answer! :handshake:

I tried the manual way and it worked. I also added parsing of LangGraph Commands when they are returned from tools.

There is a small follow-up question on Command handling, if I may. When a tool responds with a Command, the entire command is included in the tool reply by default. The documentation states that a Command is used to “update the graph’s state”. This is true, but there is a side effect: by default, the tool caller (the LLM) sees all updates that were made with the command. This may not be desirable if we want to separate graph state from LLM replies: e.g. in my case I store unrelated service messages in the graph state, and this way the LLM sees every unrelated update to it. This can be solved with custom handling at the manual level, as we discussed above. Would it be worth adding a note to the documentation that, by default, the tool caller can see all graph state updates?

hi @yury

This is another reason to use ToolNode, which properly separates concerns:

  • ToolMessage returns go to the messages channel (LLM sees them)
  • Command returns are processed by LangGraph: only the ToolMessage inside Command.update["messages"] is visible to the LLM, while other state fields are applied silently to their channels

For custom nodes, you must explicitly check isinstance(result, Command) and extract the ToolMessage separately from the state updates, which is exactly what ToolNode._execute_tool_sync() and _validate_tool_command() do internally.
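In a custom node, that split can be sketched in plain Python with stand-in classes (these are NOT the real langgraph Command or langchain ToolMessage, just illustrations of the shape): only messages go back to the LLM, everything else updates state silently.

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class ToolMessage:  # stand-in for langchain's ToolMessage
    content: str
    tool_call_id: str


@dataclass
class Command:  # stand-in for langgraph's Command
    update: dict = field(default_factory=dict)


def split_tool_result(result: Any, tool_call_id: str) -> tuple:
    """Return (messages the LLM should see, silent state updates)."""
    if isinstance(result, Command):
        msgs = [
            m for m in result.update.get("messages", [])
            if isinstance(m, ToolMessage)
        ]
        silent = {k: v for k, v in result.update.items() if k != "messages"}
        return msgs, silent
    # Plain return value: wrap it into a ToolMessage as usual
    return [ToolMessage(content=str(result), tool_call_id=tool_call_id)], {}


cmd = Command(update={
    "messages": [ToolMessage("done", "call_1")],
    "service_log": ["internal note"],  # should NOT reach the LLM
})
msgs, silent = split_tool_result(cmd, "call_1")
print([m.content for m in msgs])  # ['done']
print(silent)                     # {'service_log': ['internal note']}
```

With this split, the service messages from your state stay out of the ToolMessage that the model reads on the next turn.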