I want to pass runtime arguments to LangChain’s InjectedToolArg parameters while using AgentExecutor. Here is a code example:
- My tool definitions:
from typing import Annotated

from langchain_core.tools import InjectedToolArg, tool

@tool(parse_docstring=True)
async def get_layer_features(
    layer_name: str,
    user_token: Annotated[str | None, InjectedToolArg],  # 👈 I want to have the InjectedToolArg arguments
) -> list[LayerFeature | ErrorData]:
    '''
    Fetch and return layer features (fields) for a given layer name.

    Args:
        layer_name (str): The name of the layer to fetch features for.
        user_token (str | None): Optional user token for accessing private layers.
    '''
- My Agent Executor definition:
async def get_agent(
    mcp_servers: list[str] | None = None,
) -> AgentExecutor:
    # 1. Fetch MCP Servers (Optional)
    mcp_client = MCPClientProvider(mcp_servers)
    mcp_tools = await mcp_client.get_tools()

    # 2. Define LangChain Components
    tools = [
        get_layer_features,  # 👈 I put my tool here
        *mcp_tools,
    ]
    llm = ChatOpenAI(**AppConfig.OPENAI_MODEL_CONFIGS)

    # 3. Create the Agent
    agent = create_tool_calling_agent(
        llm=llm,
        tools=tools,
        prompt=prompt_template,
    )
    return AgentExecutor(
        agent=agent,
        tools=tools,
    )
- I invoked it:
response = await agent.ainvoke(
    input={
        "input": request.message,
        "user_token": request.user_token,  # Put here, did not work
        "chat_history": format_chat_history(request.histories),
        "tool_runtime": {
            "user_token": request.user_token,  # Put here, did not work
        },
    },
    config={
        "configurable": {"user_token": request.user_token},  # Put here, did not work
    },
    user_token=request.user_token,  # Put here, did not work
)
I have tried several ways to pass the variable in, but none of them worked. It raises a Pydantic validation error:
pydantic_core._pydantic_core.ValidationError: 1 validation error for get_layer_features
user_token
Field required [type=missing, input_value={}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.11/v/missing
I want a convenient way to pass the user_token into AgentExecutor's ainvoke. How can I do that?
Thank you!
Hi @Ming-doan,
Have you tried this?
from typing_extensions import Annotated

from langchain_core.runnables import RunnableConfig
from langchain_core.tools import InjectedToolArg, tool

@tool(parse_docstring=True)
async def get_layer_features(
    layer_name: str,
    user_token: Annotated[str | None, InjectedToolArg] = None,  # optional so validation passes
    cfg: RunnableConfig = None,  # config is auto-injected by the tool runtime
) -> list[LayerFeature | ErrorData]:
    """
    Fetch and return layer features (fields) for a given layer name.

    Args:
        layer_name: The layer name to fetch features for.
        user_token: Optional user token for accessing private layers.
    """
    token = user_token or (cfg or {}).get("configurable", {}).get("user_token")
    # ... use `token` to fetch and return features ...
response = await agent.ainvoke(
    {
        "input": request.message,
        "chat_history": format_chat_history(request.histories),
        # do not include user_token here; the LLM tool call won't pass it
    },
    config={"configurable": {"user_token": request.user_token}},
)
No, that won’t work, since AgentExecutor + create_tool_calling_agent do not inject config or context into tools.
You can try this instead (a closure that captures the token):
from langchain_core.tools import tool

def build_get_layer_features_tool(user_token: str):
    @tool(parse_docstring=True)
    async def get_layer_features(layer_name: str) -> list[LayerFeature | ErrorData]:
        """
        Fetch and return layer features (fields) for a given layer name.

        Args:
            layer_name: The layer name to fetch features for.
        """
        # use the captured user_token here
        return await fetch_features(layer_name, user_token)

    return get_layer_features

# In get_agent(...)
tools = [build_get_layer_features_tool(request.user_token), *mcp_tools]
agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt_template)
return AgentExecutor(agent=agent, tools=tools)
Or just move to LangGraph’s ToolNode for first-class injection, as sketched below.
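A rough sketch of that ToolNode route, assuming LangGraph's prebuilt ToolNode/InjectedState, a graph state that carries a user_token key, and a placeholder fetch_features function:

from typing import Annotated

from langchain_core.tools import tool
from langgraph.prebuilt import InjectedState, ToolNode

@tool
async def get_layer_features(
    layer_name: str,
    state: Annotated[dict, InjectedState],  # injected from graph state, hidden from the LLM-facing schema
) -> list[LayerFeature | ErrorData]:
    """Fetch and return layer features (fields) for a given layer name."""
    user_token = state.get("user_token")  # assumes the graph state defines a user_token key
    return await fetch_features(layer_name, user_token)  # placeholder fetch logic

tool_node = ToolNode([get_layer_features])  # use this node in a StateGraph instead of AgentExecutor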
In this case I would rather pass user_token via the LangGraph runtime context, like this:
from langchain_core.tools import tool
from langgraph.runtime import get_runtime  # available in recent langgraph versions

@tool(parse_docstring=True)
async def get_layer_features(
    layer_name: str,
) -> list[LayerFeature | ErrorData]:
    """
    Fetch and return layer features (fields) for a given layer name.

    Args:
        layer_name: The layer name to fetch features for.
    """
    runtime = get_runtime()  # access the runtime from inside the tool
    user_token = None
    if runtime and getattr(runtime, "context", None):
        user_token = runtime.context.get("user_token")
    # ... use `user_token` to fetch and return features ...
response = await agent.ainvoke(
    {
        "input": request.message,
        "chat_history": format_chat_history(request.histories),
        # do not include user_token here; the LLM tool call won't pass it
    },
    context={"user_token": request.user_token},
)
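For completeness, a rough sketch of the wiring this assumes: a LangGraph agent (for example langgraph.prebuilt.create_react_agent) built with a context_schema, so the context kwarg is accepted and exposed to tools via the runtime. The Context TypedDict and the messages-style input below are illustrative, not taken from the original code:

from typing_extensions import TypedDict

from langgraph.prebuilt import create_react_agent

class Context(TypedDict):
    user_token: str | None

graph = create_react_agent(llm, tools=[get_layer_features], context_schema=Context)

response = await graph.ainvoke(
    {"messages": [("user", request.message)]},
    context={"user_token": request.user_token},  # becomes runtime.context inside tools
)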
Hi @pawel-twardziak
Thank you for suggesting the solution. I tried to work around it with several approaches:
- Migrating my current AgentExecutor to a LangGraph workflow so I could use the runtime context, but LangGraph did not validate my context_schema.
- Updating to the pre-release version of LangChain (1.0.0rc2) and its related packages and changing the code to use ToolRuntime, but it still did not work.
Finally, I wrapped the tool in a factory function as below, and it worked:
def get_layer_features(user_token: str | None = None):
    @tool
    async def _get_layer_features(
        type: LayerType,
        layer_name: str,
    ) -> list[LayerFeature | ErrorData]:
        '''
        Fetch and return layer features (fields) for a given layer name.
        Must use the `get_layer_features` tool first to get field details.
        If the user asks for a layer's details, use this tool to get its features for future queries.
        '''
        nonlocal user_token
        ...

    return _get_layer_features
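For reference, a sketch of how this factory is wired per request, reusing the get_agent pieces from the original post (the user_token parameter on get_agent is an addition for illustration):

async def get_agent(user_token: str | None = None, mcp_servers: list[str] | None = None) -> AgentExecutor:
    mcp_client = MCPClientProvider(mcp_servers)
    mcp_tools = await mcp_client.get_tools()
    tools = [get_layer_features(user_token), *mcp_tools]  # bind the per-request token into the tool
    llm = ChatOpenAI(**AppConfig.OPENAI_MODEL_CONFIGS)
    agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt_template)
    return AgentExecutor(agent=agent, tools=tools)

# per request:
executor = await get_agent(user_token=request.user_token)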
However, I have some concern about performance, since Python now creates a new AgentExecutor instance on every request. I hope it does not affect performance too much.
I hope this topic is useful for anyone who needs to deal with tools in LangChain.
Hi @Ming-doan
That is interesting. Could you show the code where “LangGraph did not validate my context_schema”? I am curious, since it works for me.
Hey guys, reviving the thread.
Does this mean “InjectedToolArg” is useless if there's no way to pass an argument without extra machinery like higher-order functions and context?
Why are arguments marked with “InjectedToolArg” even included in the Pydantic schema?
The most obvious behaviour, and what I tried immediately, would be:
- Params marked with “InjectedToolArg” are excluded from the Pydantic schema; why validate them at all, since they aren't generated by an LLM?
- tool.ainvoke(tool_call, some_injected_param=my_argument) passes unknown kwargs through to the tool implementation.
Is this an implementation oversight, or am I missing something?
My langchain version is 1.2.6 at the time of writing.
Hi there,
I’ve just done some investigation around the topic, and the conclusions are as follows:
- The injection mechanism is not part of AgentExecutor, so InjectedToolArg and the other injection markers will not work there.
- If you need injection (config/context/runtime) in tools, you generally want LangGraph (StateGraph/ToolNode) or a custom executor (a subclass of AgentExecutor) that forwards config/context/runtime into the tool invocation.
And referring to your questions @eddienubes:
- Is InjectedToolArg useless? No, not when you work with StateGraph: it’s a marker that means “this value is not controlled by the model”. The LLM-facing tool-call schema excludes injected args (so the model cannot set them).
- Why are injected args still in the Pydantic input schema? Because LangChain distinguishes:
  - tool_call_schema: what the LLM is allowed/expected to generate (excludes injected args)
  - get_input_schema() / args_schema: what runtime / programmatic callers can pass, which should be validated (includes injected args)
This is explicit in the schema construction: injected args are included by default when building the validation model (include_injected=True) when validating tool inputs.
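A quick way to see the two schemas side by side (a sketch: it assumes the get_layer_features tool from the original post and langchain_core's BaseTool.tool_call_schema / get_input_schema; the printed keys are the expected result, not verified output):

# LLM-facing schema: injected args should be absent
print(get_layer_features.tool_call_schema.model_json_schema()["properties"].keys())
# expected: dict_keys(['layer_name'])

# Validation/input schema: injected args are present (and required unless given a default)
print(get_layer_features.get_input_schema().model_json_schema()["properties"].keys())
# expected: dict_keys(['layer_name', 'user_token'])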
Thank you for the detailed answer, that makes much more sense!
I feel like I should’ve included some context. The argument I was passing to the tool manually was another subgraph; that’s when I saw that validating such a value seemed unnecessary, since it was just a regular function parameter.
However, now I see that uniform Pydantic validation, from both the LLM and manual-invocation perspectives, provides more flexibility in terms of allowed input no matter how the tool is called.
Btw, passing the argument like tool.ainvoke({**call, "subgraph": subgraph}) worked for me.
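For anyone landing here later, a related sketch of what I understand to be the documented pattern for filling injected args at call time: copy the model’s tool call and add the value under its "args" before invoking the tool (names here follow the earlier example):

from copy import deepcopy

call_with_token = deepcopy(tool_call)                # tool_call produced by the model
call_with_token["args"]["user_token"] = user_token   # fill the injected argument ourselves
result = await get_layer_features.ainvoke(call_with_token)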