Are dynamic tool lists allowed when using create_agent?

Hi,

I’m trying to make create_agent work a bit like Claude Skills, where the agent can discover tools as it loads a skill tool (basically a tool that returns a long instruction string and should also add new tools to the agent’s tool list). But it seems like I need to define the tools list right from the start.

@tool
def skill_search():
    """Loads the context you need to search well"""

    # This part I don't know how to code ^^
    parent.tool_list += [fast_search, rag, deep_search]

    # This is clear
    instructions = """long text explaining how to search"""
    return instructions

skills_tools = [skill_search, skill_math]

agent = create_agent(tools=skills_tools)

Is that the case? Any tips to make it work?

Thanks

hi @Batiste

I think you need something like this:

By the way, the example is buggy; you should take these issues/PRs into account:


It should look something like this:

    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse],
) -> ModelResponse:

Also, make sure to import ModelResponse from langchain.agents.middleware.types

1 Like

I received the following tips from an AI doc search. I’ll try them this afternoon.

from langchain.agents import create_agent  
from langchain.agents.middleware import AgentMiddleware  
from langchain_core.tools import tool  
  
# Define ALL possible tools upfront  
@tool  
def fast_search(query: str) -> str:  
    """Fast search tool"""  
    return f"Fast results for: {query}"  
  
@tool  
def rag(query: str) -> str:  
    """RAG search tool"""  
    return f"RAG results for: {query}"  
  
@tool  
def deep_search(query: str) -> str:  
    """Deep search tool"""  
    return f"Deep results for: {query}"  
  
# Middleware that conditionally exposes tools  
class SkillMiddleware(AgentMiddleware):  
    # Register all possible skill tools  
    tools = [fast_search, rag, deep_search]  
      
    def wrap_model_call(self, request, handler):  
        # Filter tools based on state/context  
        if request.state.get("skill_search_loaded"):  
            # Keep all search tools  
            pass  
        else:  
            # Remove skill-specific tools initially  
            request.tools = [t for t in request.tools   
                           if t.name not in ["fast_search", "rag", "deep_search"]]  
          
        return handler(request)  
  
@tool  
def skill_search() -> str:  
    """Loads the context you need to search well"""  
    instructions = """long text explaining how to search"""  
    # Signal that skill is loaded via state  
    return instructions  
  
agent = create_agent(  
    model="openai:gpt-4",  
    tools=[skill_search],  # Base tools  
    middleware=[SkillMiddleware()]  # Adds conditional tools  
)

Here is a working example:

from dataclasses import dataclass
from langchain.agents import create_agent

from langchain.agents.middleware import (
    AgentMiddleware,
    ModelRequest,
)
from langchain.agents.middleware.types import ModelResponse
from typing import Callable

def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

def get_capital(country: str) -> str:
    """Get capital for a given country."""
    return f"Ankara is the capital of {country}!"

@dataclass
class Context:
    user_expertise: str

class ExpertiseBasedToolMiddleware(AgentMiddleware):
    def wrap_model_call(
        self,
        request: ModelRequest,
        handler: Callable[[ModelRequest], ModelResponse],
    ) -> ModelResponse:
        user_level = request.runtime.context.user_expertise
        if user_level == "expert":
            # Experts get the weather tool
            tools = [get_weather]
        else:
            # Everyone else gets the capital tool
            tools = [get_capital]
        request.tools = tools
        return handler(request)


agent = create_agent(
    model="ollama:qwen3:latest",
    middleware=[ExpertiseBasedToolMiddleware()],
    context_schema=Context
)


agent.invoke(
    {"messages": "What's the weather in Paris?"},
    context={"user_expertise": "expert"},
)

agent.invoke(
    {"messages": "What's the capital of France?"},
    context={"user_expertise": "expert"},
)

agent.invoke(
    {"messages": "What's the weather in Paris?"},
    context={"user_expertise": "beginner"},
)

agent.invoke(
    {"messages": "What's the capital of France?"},
    context={"user_expertise": "beginner"},
)

Here’s the result:

>>> agent.invoke(
...     {"messages": "What's the weather in Paris?"},
...     context={"user_expertise": "expert"},
... )
{'messages': [HumanMessage(content="What's the weather in Paris?", additional_kwargs={}, response_metadata={}, id='fd32ef39-9991-48b0-ad4b-43618b686102'), AIMessage(content='', additional_kwargs={}, response_metadata={'model': 'qwen3:latest', 'created_at': '2025-10-24T11:29:31.675869818Z', 'done': True, 'done_reason': 'stop', 'total_duration': 6002301618, 'load_duration': 2260224250, 'prompt_eval_count': 146, 'prompt_eval_duration': 301365613, 'eval_count': 108, 'eval_duration': 3411260963, 'model_name': 'qwen3:latest', 'model_provider': 'ollama'}, id='lc_run--581ca80b-166a-41ee-b88b-9250db22af77-0', tool_calls=[{'name': 'get_weather', 'args': {'city': 'Paris'}, 'id': '4d383d1a-b471-4eb2-8c19-ace4376cd214', 'type': 'tool_call'}], usage_metadata={'input_tokens': 146, 'output_tokens': 108, 'total_tokens': 254})]}
>>>
>>> agent.invoke(
...     {"messages": "What's the capital of France?"},
...     context={"user_expertise": "expert"},
... )
{'messages': [HumanMessage(content="What's the capital of France?", additional_kwargs={}, response_metadata={}, id='e4a1d803-ef90-48fe-83fc-fcfba0e0a400'), AIMessage(content='The function provided is for retrieving weather information, which is not applicable to determining the capital of France. I cannot answer this query using the available tools. However, I can help you check the weather for a specific city if you need that information!', additional_kwargs={}, response_metadata={'model': 'qwen3:latest', 'created_at': '2025-10-24T11:29:36.771514846Z', 'done': True, 'done_reason': 'stop', 'total_duration': 5091181725, 'load_duration': 148051402, 'prompt_eval_count': 146, 'prompt_eval_duration': 66713003, 'eval_count': 152, 'eval_duration': 4828947310, 'model_name': 'qwen3:latest', 'model_provider': 'ollama'}, id='lc_run--9821a56b-f534-4378-b4a6-6bef958aeb20-0', usage_metadata={'input_tokens': 146, 'output_tokens': 152, 'total_tokens': 298})]}
>>>
>>> agent.invoke(
...     {"messages": "What's the weather in Paris?"},
...     context={"user_expertise": "beginner"},
... )
{'messages': [HumanMessage(content="What's the weather in Paris?", additional_kwargs={}, response_metadata={}, id='2a6d7967-4aa5-4755-8e27-53dff88ef6d5'), AIMessage(content="I don't have access to real-time weather data. However, you can check the current weather in Paris using a weather service or app. Would you like me to help you find one?", additional_kwargs={}, response_metadata={'model': 'qwen3:latest', 'created_at': '2025-10-24T11:29:41.473963567Z', 'done': True, 'done_reason': 'stop', 'total_duration': 4695013262, 'load_duration': 183956333, 'prompt_eval_count': 147, 'prompt_eval_duration': 195127970, 'eval_count': 135, 'eval_duration': 4280626766, 'model_name': 'qwen3:latest', 'model_provider': 'ollama'}, id='lc_run--0827dc43-5bcd-46a1-a4b4-e59ffbf998f8-0', usage_metadata={'input_tokens': 147, 'output_tokens': 135, 'total_tokens': 282})]}
>>>
>>> agent.invoke(
...     {"messages": "What's the capital of France?"},
...     context={"user_expertise": "beginner"},
... )
{'messages': [HumanMessage(content="What's the capital of France?", additional_kwargs={}, response_metadata={}, id='ba4fe7dc-bf88-41d6-aec1-07ebb057797e'), AIMessage(content='', additional_kwargs={}, response_metadata={'model': 'qwen3:latest', 'created_at': '2025-10-24T11:29:57.330753915Z', 'done': True, 'done_reason': 'stop', 'total_duration': 3192402879, 'load_duration': 187762416, 'prompt_eval_count': 147, 'prompt_eval_duration': 66327797, 'eval_count': 92, 'eval_duration': 2903858309, 'model_name': 'qwen3:latest', 'model_provider': 'ollama'}, id='lc_run--debcdff2-0c74-42ed-b692-8edca87102d4-0', tool_calls=[{'name': 'get_capital', 'args': {'country': 'France'}, 'id': '8f8e86c7-a1ea-44f8-b682-a853e5b1c932', 'type': 'tool_call'}], usage_metadata={'input_tokens': 147, 'output_tokens': 92, 'total_tokens': 239})]}

2 Likes

Awesome, thanks!

Side question: I’m surprised you chose to write user_expertise into context instead of state. Is that better for some reason?

No, no. It was just for the sake of the old example. I guess you can use anything you want: context, state, long-term memory.

1 Like

Registering dynamic tools via middleware throws an error for any tool that is not a dict or a plain function, because of the logic below. The error message suggests assigning tools via the middleware.tools attribute; however, that is a class-level attribute, which makes the middleware stateful.

If this check is skipped for dicts and functions, does it need to be applied to BaseTool instances at all? Should this validation be configurable to support dynamic tools?

        # Check if any requested tools are unknown CLIENT-SIDE tools
        unknown_tool_names = []
        for t in request.tools:
            # Only validate BaseTool instances (skip built-in dict tools)
            if isinstance(t, dict):
                continue
            if isinstance(t, BaseTool) and t.name not in available_tools_by_name:
                unknown_tool_names.append(t.name)

        if unknown_tool_names:
            available_tool_names = sorted(available_tools_by_name.keys())
            msg = (
                f"Middleware returned unknown tool names: {unknown_tool_names}\n\n"
                f"Available client-side tools: {available_tool_names}\n\n"
                "To fix this issue:\n"
                "1. Ensure the tools are passed to create_agent() via "
                "the 'tools' parameter\n"
                "2. If using custom middleware with tools, ensure "
                "they're registered via middleware.tools attribute\n"
                "3. Verify that tool names in ModelRequest.tools match "
                "the actual tool.name values\n"
                "Note: Built-in provider tools (dict format) can be added dynamically."
            )
            raise ValueError(msg)

Use Case:

  • The agent is created without any tools.
  • Middleware fetches the tools from an MCP server, and those tools depend on runtime.context.user_name.
  • MCPServer.getTools() requires a user header (hence the tools are dynamic).
1 Like

@rhlarora84 I’m facing the same challenge trying to set tools dynamically from an MCP server at runtime; were you able to find a solution for this?

@rhlarora84 @Oscar-Umana did you find a workaround for this?

Facing this too when trying to load MCP tools dynamically.

How do you use agent skills in LangGraph? What if we’re customizing the LangGraph graph without using LangChain’s `create_agent` method?

hi @josiahcoad @rhlarora84 @Oscar-Umana @clafoutis

You’ve probably already seen this example, but could you please share a basic version of the code so we can all try to implement it together?

I’m not sure whether we can do it or not, but let’s try.

Finally, I created a working example:

client.py file:

import asyncio

from langchain.agents import create_agent
from langchain.agents.middleware import AgentMiddleware, ModelRequest
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_ollama import ChatOllama


class MCPMiddleware(AgentMiddleware):
    def __init__(self, client: MultiServerMCPClient):
        self.client = client

    async def awrap_model_call(self, request: ModelRequest, handler):
        mcp_tools = await self.client.get_tools()
        updated_request = request.override(
            tools=[*request.tools, *mcp_tools]
        )
        return await handler(updated_request)

    async def awrap_tool_call(self, request, handler):  # request here is a ToolCallRequest, not a ModelRequest
        mcp_tools = await self.client.get_tools()
        tool_name = request.tool_call["name"]
        tool_map = {tool.name: tool for tool in mcp_tools}
        if tool_name not in tool_map:
            raise ValueError(f"Unknown MCP tool: {tool_name}")
        return await handler(
            request.override(tool=tool_map[tool_name])
        )



client = MultiServerMCPClient(
    {
        "weather": {
            "transport": "stdio",
            "command": "python",
            "args": ["/home/mc/mcp_lang/weather.py"],
        }
    }
)

model = ChatOllama(
    model="qwen3"
)


async def main():
    agent = create_agent(model, middleware=[MCPMiddleware(client)])

    response = await agent.ainvoke(
        {
            "messages": [
                {
                    "role": "user",
                    "content": "what is the weather in 40°45′19.80″ North, longitude 73°58′26.04″ West?"
                }
            ]
        }
    )
    print(response)


if __name__ == "__main__":
    asyncio.run(main())

the weather.py file is the MCP server example from “Build an MCP server” in the Model Context Protocol docs:

from typing import Any

import httpx
from mcp.server.fastmcp import FastMCP

# Initialize FastMCP server
mcp = FastMCP("weather")

# Constants
NWS_API_BASE = "https://api.weather.gov"
USER_AGENT = "weather-app/1.0"


async def make_nws_request(url: str) -> dict[str, Any] | None:
    """Make a request to the NWS API with proper error handling."""
    headers = {"User-Agent": USER_AGENT, "Accept": "application/geo+json"}
    async with httpx.AsyncClient() as client:
        try:
            response = await client.get(url, headers=headers, timeout=30.0)
            response.raise_for_status()
            return response.json()
        except Exception:
            return None


def format_alert(feature: dict) -> str:
    """Format an alert feature into a readable string."""
    props = feature["properties"]
    return f"""
Event: {props.get("event", "Unknown")}
Area: {props.get("areaDesc", "Unknown")}
Severity: {props.get("severity", "Unknown")}
Description: {props.get("description", "No description available")}
Instructions: {props.get("instruction", "No specific instructions provided")}
"""


@mcp.tool()
async def get_alerts(state: str) -> str:
    """Get weather alerts for a US state.

    Args:
        state: Two-letter US state code (e.g. CA, NY)
    """
    url = f"{NWS_API_BASE}/alerts/active/area/{state}"
    data = await make_nws_request(url)

    if not data or "features" not in data:
        return "Unable to fetch alerts or no alerts found."

    if not data["features"]:
        return "No active alerts for this state."

    alerts = [format_alert(feature) for feature in data["features"]]
    return "\n---\n".join(alerts)


@mcp.tool()
async def get_forecast(latitude: float, longitude: float) -> str:
    """Get weather forecast for a location.

    Args:
        latitude: Latitude of the location
        longitude: Longitude of the location
    """
    # First get the forecast grid endpoint
    points_url = f"{NWS_API_BASE}/points/{latitude},{longitude}"
    points_data = await make_nws_request(points_url)

    if not points_data:
        return "Unable to fetch forecast data for this location."

    # Get the forecast URL from the points response
    forecast_url = points_data["properties"]["forecast"]
    forecast_data = await make_nws_request(forecast_url)

    if not forecast_data:
        return "Unable to fetch detailed forecast."

    # Format the periods into a readable forecast
    periods = forecast_data["properties"]["periods"]
    forecasts = []
    for period in periods[:5]:  # Only show next 5 periods
        forecast = f"""
{period["name"]}:
Temperature: {period["temperature"]}°{period["temperatureUnit"]}
Wind: {period["windSpeed"]} {period["windDirection"]}
Forecast: {period["detailedForecast"]}
"""
        forecasts.append(forecast)

    return "\n---\n".join(forecasts)


def main():
    # Initialize and run the server
    mcp.run(transport="stdio")


if __name__ == "__main__":
    main()

Here’s the output of python client.py:

[01/26/26 20:46:51] INFO     Processing request of type ListToolsRequest                                                                                                        server.py:720
[01/26/26 20:47:14] INFO     Processing request of type ListToolsRequest                                                                                                        server.py:720
[01/26/26 20:47:14] INFO     Processing request of type CallToolRequest                                                                                                         server.py:720
[01/26/26 20:47:15] INFO     HTTP Request: GET https://api.weather.gov/points/40.7555,-73.9739 "HTTP/1.1 200 OK"                                                              _client.py:1740
[01/26/26 20:47:16] INFO     HTTP Request: GET https://api.weather.gov/gridpoints/OKX/34,37/forecast "HTTP/1.1 200 OK"                                                        _client.py:1740
                    INFO     Processing request of type ListToolsRequest                                                                                                        server.py:720
[01/26/26 20:47:16] INFO     Processing request of type ListToolsRequest                                                                                                        server.py:720
{'messages': [HumanMessage(content='what is the weather in 40°45′19.80″ North, longitude 73°58′26.04″ West?', additional_kwargs={}, response_metadata={}, id='770dcdb6-709e-4326-833c-33598a2c950e'), AIMessage(content='', additional_kwargs={}, response_metadata={'model': 'qwen3', 'created_at': '2026-01-26T20:47:14.107438871Z', 'done': True, 'done_reason': 'stop', 'total_duration': 22060807123, 'load_duration': 1750807166, 'prompt_eval_count': 252, 'prompt_eval_duration': 429394739, 'eval_count': 637, 'eval_duration': 19633961136, 'logprobs': None, 'model_name': 'qwen3', 'model_provider': 'ollama'}, id='lc_run--019bfc0f-2e2b-76c0-9f5a-d49ab76d87e5-0', tool_calls=[{'name': 'get_forecast', 'args': {'latitude': 40.7555, 'longitude': -73.9739}, 'id': 'a42b42e7-3ee1-475c-a8db-4d40080c8839', 'type': 'tool_call'}], invalid_tool_calls=[], usage_metadata={'input_tokens': 252, 'output_tokens': 637, 'total_tokens': 889}), ToolMessage(content=[{'type': 'text', 'text': '\nThis Afternoon:\nTemperature: 29°F\nWind: 14 mph NW\nForecast: A chance of snow showers. Partly sunny. High near 29, with temperatures falling to around 24 in the afternoon. Wind chill values as low as 12. Northwest wind around 14 mph.\n\n---\n\nTonight:\nTemperature: 8°F\nWind: 10 to 14 mph W\nForecast: Mostly clear. Low around 8, with temperatures rising to around 10 overnight. Wind chill values as low as -4. West wind 10 to 14 mph, with gusts as high as 25 mph.\n\n---\n\nTuesday:\nTemperature: 20°F\nWind: 10 to 14 mph W\nForecast: A chance of snow showers after noon. Mostly sunny. High near 20, with temperatures falling to around 18 in the afternoon. Wind chill values as low as -5. West wind 10 to 14 mph.\n\n---\n\nTuesday Night:\nTemperature: 9°F\nWind: 10 mph W\nForecast: Mostly clear, with a low around 9. Wind chill values as low as -4. West wind around 10 mph.\n\n---\n\nWednesday:\nTemperature: 22°F\nWind: 10 mph W\nForecast: Mostly sunny. 
High near 22, with temperatures falling to around 19 in the afternoon. West wind around 10 mph.\n', 'id': 'lc_0abbb188-4c1b-4920-9913-bdaa19aff27a'}], name='get_forecast', id='b2aa65fe-dada-4946-976f-6a862b54ac6f', tool_call_id='a42b42e7-3ee1-475c-a8db-4d40080c8839', artifact={'structured_content': {'result': '\nThis Afternoon:\nTemperature: 29°F\nWind: 14 mph NW\nForecast: A chance of snow showers. Partly sunny. High near 29, with temperatures falling to around 24 in the afternoon. Wind chill values as low as 12. Northwest wind around 14 mph.\n\n---\n\nTonight:\nTemperature: 8°F\nWind: 10 to 14 mph W\nForecast: Mostly clear. Low around 8, with temperatures rising to around 10 overnight. Wind chill values as low as -4. West wind 10 to 14 mph, with gusts as high as 25 mph.\n\n---\n\nTuesday:\nTemperature: 20°F\nWind: 10 to 14 mph W\nForecast: A chance of snow showers after noon. Mostly sunny. High near 20, with temperatures falling to around 18 in the afternoon. Wind chill values as low as -5. West wind 10 to 14 mph.\n\n---\n\nTuesday Night:\nTemperature: 9°F\nWind: 10 mph W\nForecast: Mostly clear, with a low around 9. Wind chill values as low as -4. West wind around 10 mph.\n\n---\n\nWednesday:\nTemperature: 22°F\nWind: 10 mph W\nForecast: Mostly sunny. High near 22, with temperatures falling to around 19 in the afternoon. 
West wind around 10 mph.\n'}}), AIMessage(content="Here's the weather forecast for the location **40.7555° N, 73.9739° W** (New York City area):\n\n---\n\n### 🌤️ **This Afternoon**\n- **Temperature:** 29°F  \n- **Wind:** 14 mph from the northwest  \n- **Conditions:**  \n  - Chance of snow showers  \n  - Partly sunny  \n  - High near 29°F, with temperatures dropping to **24°F** by afternoon  \n  - **Wind chill:** As low as **12°F**  \n\n---\n\n### 🌙 **Tonight**\n- **Temperature:** 8°F  \n- **Wind:** 10–14 mph west, gusting to **25 mph**  \n- **Conditions:**  \n  - Mostly clear  \n  - Low around 8°F, rising to **10°F** overnight  \n  - **Wind chill:** As low as **-4°F**  \n\n---\n\n### 📅 **Tuesday**\n- **Temperature:** 20°F  \n- **Wind:** 10–14 mph west  \n- **Conditions:**  \n  - Chance of snow showers after noon  \n  - Mostly sunny  \n  - High near 20°F, dropping to **18°F** in the afternoon  \n  - **Wind chill:** As low as **-5°F**  \n\n---\n\n### 🌑 **Tuesday Night**\n- **Temperature:** 9°F  \n- **Wind:** 10 mph west  \n- **Conditions:**  \n  - Mostly clear  \n  - Low around 9°F  \n  - **Wind chill:** As low as **-4°F**  \n\n---\n\n### 📅 **Wednesday**\n- **Temperature:** 22°F  \n- **Wind:** 10 mph west  \n- **Conditions:**  \n  - Mostly sunny  \n  - High near 22°F, dropping to **19°F** in the afternoon  \n\n---\n\n⚠️ **Key Notes:**  \n- **Snow showers** are possible later this week.  \n- **Wind chills** will make it feel significantly colder than actual temperatures.  \n- **Wind gusts** could reach **25 mph** tonight, increasing wind chill risks.  \n\nLet me know if you need further details! 
❄️🌬️", additional_kwargs={}, response_metadata={'model': 'qwen3', 'created_at': '2026-01-26T20:47:54.112464225Z', 'done': True, 'done_reason': 'stop', 'total_duration': 37482982317, 'load_duration': 167317848, 'prompt_eval_count': 608, 'prompt_eval_duration': 669783136, 'eval_count': 1156, 'eval_duration': 36237397175, 'logprobs': None, 'model_name': 'qwen3', 'model_provider': 'ollama'}, id='lc_run--019bfc0f-8e8c-7771-aade-d6ff1c89e00f-0', tool_calls=[], invalid_tool_calls=[], usage_metadata={'input_tokens': 608, 'output_tokens': 1156, 'total_tokens': 1764})]}

I hope this is what you all need. :slight_smile:

Here are the pip install commands:

pip install langchain-mcp-adapters
pip install "mcp[cli]" httpx

I don’t see how your code can work, since the snippet that @rhlarora84 flagged (_get_bound_model in langchain/agents/factory.py) checks that tools are registered at agent creation.

Only tools passed as dicts are not verified. The error I get is the following:

ValueError: Middleware returned unknown tool names: [‘SNOWFLAKE-SUPPLY-AREA’, ‘SNOWFLAKE-SQL-Execution-Tool’, ‘SNOWFLAKE-SALES_AREA_BILLING’]

Available client-side tools: [‘calculer_loyer_actualise’, ‘get_latest_indice_values’, ‘read_document’, ‘search’, ‘search_in_document’]

To fix this issue:

1. Ensure the tools are passed to create_agent() via the ‘tools’ parameter

2. If using custom middleware with tools, ensure they’re registered via middleware.tools attribute

3. Verify that tool names in ModelRequest.tools match the actual tool.name values

Note: Built-in provider tools (dict format) can be added dynamically.

During task with name ‘model’ and id ‘c3ee6954-672a-63b3-0b5e-57dad053d18c’

Hi,

Here is the result of my investigation:

1) How tools are “registered”

In LangGraph, tools aren’t registered in some global registry as part of graph compilation; they’re typically embedded into the graph as a node.

  • In the standard agent loop created by create_agent, the “tools” step is a ToolNode constructed from the tool list. ToolNode.__init__ immediately builds an internal name -> tool mapping (self._tools_by_name) and also precomputes injection metadata once (self._injected_args[tool_name] = ...) for runtime efficiency. That’s the practical “registration” moment. See langgraph/libs/prebuilt/langgraph/prebuilt/tool_node.py where _InjectedArgs is “built once during ToolNode initialization” and ToolNode.__init__ populates _tools_by_name / _injected_args.

Sources:

  • langgraph/libs/prebuilt/langgraph/prebuilt/tool_node.py (ToolNode init builds self._tools_by_name; injected args “built once during ToolNode initialization”).

LangChain’s create_agent is explicitly graph-based and uses LangGraph under the hood.

Sources:

  • LangChain Agents docs: Agents - Docs by LangChain (notes create_agent builds a graph-based runtime on LangGraph)
  • langchain/libs/langchain_v1/langchain/agents/factory.py (imports StateGraph + ToolNode, and constructs the agent graph)

2) Is it compile-time or runtime?

There are two different “tool” concerns:

  1. What the graph can execute (the executor side)

    • This is determined when you build the graph (when you create ToolNode([...]) / create_agent(..., tools=[...])). When you call graph.compile(), it packages the already-constructed nodes; it doesn’t dynamically discover tools.
    • Practically: changing the Python list you originally passed later won’t update the compiled graph, because ToolNode already copied the tools into its own mapping at initialization.
  2. What the model is allowed to call (the schema/binding side)

    • Models that support tool calling need tool schemas bound (usually via .bind_tools(...)). In the LangGraph prebuilt agent implementation, the model may be bound to tools up-front (static model) and must be a subset of the tools passed to the agent. See the create_react_agent docstring in langgraph/libs/prebuilt/langgraph/prebuilt/chat_agent_executor.py describing .bind_tools() and the “subset” requirement.
    • In the newer LangChain create_agent, tool exposure can also be modified via middleware (see below).

Sources:

  • langgraph/libs/prebuilt/langgraph/prebuilt/chat_agent_executor.py (dynamic model section: bind tools; bound tools must be subset of tools parameter)
  • LangChain Agents docs: Agents - Docs by LangChain
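To make the executor-side point concrete (ToolNode copies its tools into an internal name -> tool mapping at initialization, so mutating the original list later has no effect), here is a toy pure-Python sketch; MiniToolNode is a made-up stand-in for illustration, not the real LangGraph class:

```python
class MiniToolNode:
    """Toy stand-in for ToolNode: builds a name -> tool map once, at init."""

    def __init__(self, tools):
        # The mapping is copied here; later changes to `tools` are not seen.
        self._tools_by_name = {t.__name__: t for t in tools}

    def invoke(self, name, *args):
        return self._tools_by_name[name](*args)


def get_weather(city):
    return f"sunny in {city}"

tools = [get_weather]
node = MiniToolNode(tools)

def get_capital(country):
    return f"the capital of {country}"

# Mutating the original list does NOT register the new tool with the node:
tools.append(get_capital)

print(node.invoke("get_weather", "Paris"))   # sunny in Paris
print("get_capital" in node._tools_by_name)  # False
```

The same reasoning explains why rebuilding the node (or the whole graph) is the only way to change the executable tool set in this pattern.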

3) Can you modify the tool list at runtime?

Yes, but only if you distinguish “tools visible to the model” from “tools executable by the graph”.

A) Runtime filtering of pre-registered tools (recommended)

If you know all tools ahead of time, register them once (pass them to create_agent(...)) and then filter which ones are exposed to the model per request via middleware that overrides request.tools.

This is explicitly documented as a supported pattern (“Filtering pre-registered tools”).

Source:

B) Runtime addition of brand-new tools (possible, but you must also handle execution)

If middleware adds tools that were not included in create_agent(tools=[...]), the agent graph’s ToolNode won’t know how to execute them by default.

  • The LangChain implementation even ships an explicit error template explaining this failure mode and the two fixes:
    • Register tools at creation time (create_agent(tools=[...]) or middleware.tools), or
    • Implement wrap_tool_call to execute/override the dynamically-added tool.

Sources:

  • langchain/libs/langchain_v1/langchain/agents/factory.py (see DYNAMIC_TOOL_ERROR_TEMPLATE, especially the guidance: middleware modifying request.tools must either pre-register tools or handle them in wrap_tool_call)
  • LangChain Agents docs, “Runtime tool registration”: Agents - Docs by LangChain

Minimal sketch (conceptual):

from langchain.agents import create_agent
from langchain.agents.middleware import AgentMiddleware
# my_dynamic_tool and some_static_tool are assumed to be defined elsewhere

class DynamicToolMiddleware(AgentMiddleware):
    def wrap_model_call(self, request, handler):
        # expose the new tool to the model
        return handler(request.override(tools=[*request.tools, my_dynamic_tool]))

    def wrap_tool_call(self, request, handler):
        # teach the graph how to execute it
        if request.tool_call["name"] == my_dynamic_tool.name:
            return handler(request.override(tool=my_dynamic_tool))
        return handler(request)

agent = create_agent(model, tools=[some_static_tool], middleware=[DynamicToolMiddleware()])

C) If you’re using raw LangGraph StateGraph (no create_agent)

You can always build your own “tools node” callable that looks up tools from runtime.context / state and executes them dynamically. That’s a custom architecture choice; it’s not what ToolNode does out of the box.
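As a rough illustration of that custom-node idea, here is a pure-Python sketch (no LangGraph imports; `make_dynamic_tools_node` and the state shape are invented for this example, and a real node would resolve the registry from runtime.context / state and return ToolMessages):

```python
def make_dynamic_tools_node(get_registry):
    """Build a 'tools node' that resolves tools at call time, not at build time."""

    def tools_node(state):
        # Look up the registry per invocation, e.g. from runtime context/state.
        registry = get_registry(state)
        results = []
        for call in state["tool_calls"]:
            tool = registry.get(call["name"])
            if tool is None:
                results.append(f"unknown tool: {call['name']}")
            else:
                results.append(tool(**call["args"]))
        return {"results": results}

    return tools_node


# Usage: the registry can change between invocations.
registry = {"add": lambda a, b: a + b}
node = make_dynamic_tools_node(lambda state: registry)

print(node({"tool_calls": [{"name": "add", "args": {"a": 2, "b": 3}}]}))
# {'results': [5]}

registry["mul"] = lambda a, b: a * b  # added AFTER the node was built
print(node({"tool_calls": [{"name": "mul", "args": {"a": 2, "b": 3}}]}))
# {'results': [6]}
```

Because the lookup happens inside the node at call time, tools added after the graph is built are still executable, which is exactly what ToolNode’s init-time copying prevents.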

Practical guidance / gotchas

  • Compiled graphs are best treated as immutable: if the executable tool set changes, rebuild a new ToolNode / new graph.
  • If you dynamically change tool exposure to the model, keep the executor in sync:
    • Filtering pre-registered tools is easy.
    • Adding new tools requires wrap_tool_call (or a custom tool execution node), otherwise you’ll hit the “unknown tools” error described in DYNAMIC_TOOL_ERROR_TEMPLATE.
2 Likes