How to use InjectedToolArg in AgentExecutor

I want to pass runtime arguments to LangChain's InjectedToolArg tool parameters when using AgentExecutor. Here is a code example:

  1. My tool definitions:
from typing import Annotated

from langchain_core.tools import InjectedToolArg, tool

@tool(parse_docstring=True)
async def get_layer_features(
    layer_name: str,
    user_token: Annotated[str | None, InjectedToolArg],  # 👈 the argument I want injected at runtime
) -> list[LayerFeature | ErrorData]:
    '''
    Fetch and return layer features (fields) for a given layer name.

    Args:
        layer_name (str): The name of the layer to fetch features for.
        user_token (str | None): Optional user token for accessing private layers.
    '''
  2. My AgentExecutor definition:
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_openai import ChatOpenAI

async def get_agent(
    mcp_servers: list[str] | None = None,
) -> AgentExecutor:
    # 1. Fetch MCP Servers (Optional)
    mcp_client = MCPClientProvider(mcp_servers)
    mcp_tools = await mcp_client.get_tools()

    # 2. Define Langchain Components
    tools = [
        get_layer_features,  # 👈 I put my tool here
        *mcp_tools
    ]
    llm = ChatOpenAI(**AppConfig.OPENAI_MODEL_CONFIGS)

    # 3. Create the Agent
    agent = create_tool_calling_agent(
        llm=llm,
        tools=tools,
        prompt=prompt_template
    )
    return AgentExecutor(
        agent=agent,
        tools=tools
    )
  3. I invoked it:
response = await agent.ainvoke(
    input={
        "input": request.message,
        "user_token": request.user_token,  # tried here: did not work
        "chat_history": format_chat_history(request.histories),
        "tool_runtime": {
            "user_token": request.user_token  # tried here: did not work
        }
    },
    config={
        "configurable": {"user_token": request.user_token}  # tried here: did not work
    },
    user_token=request.user_token,  # tried here as a keyword argument: did not work
)

I have tried several ways to pass the variable in, but none of them worked. Each attempt raises a Pydantic validation error:

pydantic_core._pydantic_core.ValidationError: 1 validation error for get_layer_features
user_token
  Field required [type=missing, input_value={}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.11/v/missing
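
If I understand correctly, InjectedToolArg hides the argument from the model-facing schema while the tool itself still requires it, which matches this error. Here is a quick check against the tool above (a sketch; the commented output is what I expect):

print(get_layer_features.tool_call_schema.model_json_schema()["properties"].keys())
# expected: dict_keys(['layer_name'])  <- user_token hidden from the model
print(get_layer_features.get_input_schema().model_json_schema()["required"])
# expected: ['layer_name', 'user_token']  <- still required when the tool runs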

I want a convenient way to pass user_token into AgentExecutor's ainvoke. How can I do that?

Thank you!

Hi @Ming-doan

Have you tried this?

from typing_extensions import Annotated
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import InjectedToolArg, tool

@tool(parse_docstring=True)
async def get_layer_features(
    layer_name: str,
    user_token: Annotated[str | None, InjectedToolArg] = None,  # optional so validation passes
    cfg: RunnableConfig = None,  # config is auto-injected by the tool runtime
) -> list[LayerFeature | ErrorData]:
    """
    Fetch and return layer features (fields) for a given layer name.

    Args:
        layer_name: The layer name to fetch features for.
        user_token: Optional user token for accessing private layers.
    """
    token = user_token or (cfg or {}).get("configurable", {}).get("user_token")
    # ... use `token` to fetch and return features ...

response = await agent.ainvoke(
    {
        "input": request.message,
        "chat_history": format_chat_history(request.histories),
        # do not include user_token here; the LLM tool-call won't pass it
    },
    config={"configurable": {"user_token": request.user_token}},
)

No, that won't work: AgentExecutor combined with create_tool_calling_agent does not inject config or context into tools.
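
For reference, InjectedToolArg is designed for a manual tool-calling loop where you add the value to the proposed tool call yourself before executing the tool, roughly like this sketch (llm and messages assumed from your setup):

llm_with_tools = llm.bind_tools([get_layer_features])
ai_msg = await llm_with_tools.ainvoke(messages)
for tool_call in ai_msg.tool_calls:
    # inject the runtime-only argument before executing the tool
    tool_call["args"]["user_token"] = request.user_token
    tool_msg = await get_layer_features.ainvoke(tool_call)

AgentExecutor never performs that injection step, hence the missing-field error.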

You can try this (closure):

from langchain_core.tools import tool

def build_get_layer_features_tool(user_token: str):
    @tool(parse_docstring=True)
    async def get_layer_features(layer_name: str) -> list[LayerFeature | ErrorData]:
        """
        Fetch and return layer features (fields) for a given layer name.

        Args:
            layer_name: The layer name to fetch features for.
        """
        # use the captured user_token here
        return await fetch_features(layer_name, user_token)
    return get_layer_features

# In get_agent(...)
tools = [build_get_layer_features_tool(request.user_token), *mcp_tools]
agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt_template)
return AgentExecutor(agent=agent, tools=tools)

Or just move to LangGraph’s ToolNode for first-class injection.
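
If you do go the ToolNode route, first-class injection from graph state looks roughly like this sketch (assuming langgraph is installed and your graph state carries a user_token key):

from typing import Annotated
from langchain_core.tools import tool
from langgraph.prebuilt import InjectedState, ToolNode

@tool
async def get_layer_features(
    layer_name: str,
    user_token: Annotated[str | None, InjectedState("user_token")] = None,
) -> list[LayerFeature | ErrorData]:
    """Fetch layer features; user_token comes from graph state, not from the model."""
    ...

tool_node = ToolNode([get_layer_features])  # fills the annotated arg from state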
In this case I would rather pass user_token via context like this:

from langgraph.runtime import get_runtime  # LangGraph v0.6+ API

@tool(parse_docstring=True)
async def get_layer_features(
    layer_name: str,
) -> list[LayerFeature | ErrorData]:
    """
    Fetch and return layer features (fields) for a given layer name.

    Args:
        layer_name: The layer name to fetch features for.
    """
    user_token = None
    runtime = get_runtime()  # current LangGraph runtime, available inside a graph run
    if runtime and getattr(runtime, "context", None):
        user_token = runtime.context.get("user_token")
    # ... use `user_token` to fetch and return features ...

response = await agent.ainvoke(
    {
        "input": request.message,
        "chat_history": format_chat_history(request.histories),
        # do not include user_token here; the LLM tool-call won't pass it
    },
    context={"user_token": request.user_token},
)

Hi @pawel-twardziak

Thank you for suggesting these solutions. I have experimented with several approaches, such as:

  • Migrating my current AgentExecutor to a LangGraph workflow so I could use the runtime context, but LangGraph did not validate my context_schema.
  • Updating to the pre-release version of LangChain (1.0.0rc2) and its companion packages, changing the code to use ToolRuntime, but it still did not work.

Finally, I wrapped the tool in a factory function as shown below, and it worked:

def get_layer_features(user_token: str | None = None):
    @tool
    async def _get_layer_features(
        type: LayerType,
        layer_name: str,
    ) -> list[LayerFeature | ErrorData]:
        '''
        Fetch and return layer features (fields) for a given layer name.
        Must use `get_layer_features` tool first to get field details.
        If the user asks for a layer's details, use this tool to fetch its features for future queries.
        '''
        nonlocal user_token
        ...
    return _get_layer_features

However, I have some concerns about performance, since Python now creates a new AgentExecutor instance on every request. I hope it does not affect performance too much.
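
If it does become a problem, a possible mitigation is to cache the heavy pieces and rebuild only the cheap tool closure per request, roughly like this sketch:

from functools import lru_cache

@lru_cache(maxsize=1)
def get_llm() -> ChatOpenAI:
    # built once per process, reused across requests
    return ChatOpenAI(**AppConfig.OPENAI_MODEL_CONFIGS)

async def get_agent(user_token: str | None = None) -> AgentExecutor:
    tools = [get_layer_features(user_token)]  # cheap per-request closure
    agent = create_tool_calling_agent(llm=get_llm(), tools=tools, prompt=prompt_template)
    return AgentExecutor(agent=agent, tools=tools)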

Hope this topic is useful for anyone who wants to deal with runtime tool arguments in LangChain.

Hi @Ming-doan

That is interesting. Could you show the code where LangGraph did not validate your context_schema? I am curious, since it works for me.
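
For reference, here is roughly the shape that works for me (a sketch assuming langgraph >= 0.6 and the prebuilt agent; adapt the names to your setup):

from dataclasses import dataclass
from langgraph.prebuilt import create_react_agent

@dataclass
class AppContext:
    user_token: str | None = None

graph = create_react_agent(llm, tools=[get_layer_features], context_schema=AppContext)
result = await graph.ainvoke(
    {"messages": [("user", request.message)]},
    context=AppContext(user_token=request.user_token),
)
# inside the tool: get_runtime(AppContext).context.user_token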