Are dynamic tool lists allowed when using create_agent?

Hi,

I’m trying to make create_agent work a bit like Claude Skills, where the agent discovers tools when it loads a skill tool (basically a tool that returns a long instruction string and adds new tools to the agent’s list). But it seems like I need to define the tools list right from the start.

skills_tools = [skill_search, skill_math]

agent = create_agent(tools = skills_tools)

@tool
def skill_search():
  """Loads the context you need to search well"""
  
  # This part I don't know how to code ^^
  parent.tool_list += [fast_search, rag, deep_search]

  # This is clear
  instructions = """long text explaining how to search"""
  return instructions

Is that the case? Any tips to make it work?

Thanks

hi @Batiste

I think you need something like this:

BTW, the example is buggy — you should take these issues/PRs into account:


It should look something like this:

def wrap_model_call(
    self,
    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse],
) -> ModelResponse:

Also, make sure to import ModelResponse from langchain.agents.middleware.types


I received the following tips from an AI doc search. I will try this afternoon.

from typing import Annotated

from langchain.agents import create_agent
from langchain.agents.middleware import AgentMiddleware
from langchain_core.messages import ToolMessage
from langchain_core.tools import InjectedToolCallId, tool
from langgraph.types import Command

# Define ALL possible tools upfront
@tool
def fast_search(query: str) -> str:
    """Fast search tool."""
    return f"Fast results for: {query}"

@tool
def rag(query: str) -> str:
    """RAG search tool."""
    return f"RAG results for: {query}"

@tool
def deep_search(query: str) -> str:
    """Deep search tool."""
    return f"Deep results for: {query}"

# Middleware that conditionally exposes tools
class SkillMiddleware(AgentMiddleware):
    # Register all possible skill tools so the agent can execute them
    tools = [fast_search, rag, deep_search]

    def wrap_model_call(self, request, handler):
        # Hide the skill tools until skill_search has flipped the state flag
        if not request.state.get("skill_search_loaded"):
            request.tools = [
                t for t in request.tools
                if t.name not in {"fast_search", "rag", "deep_search"}
            ]
        return handler(request)

@tool
def skill_search(tool_call_id: Annotated[str, InjectedToolCallId]) -> Command:
    """Loads the context you need to search well."""
    instructions = """long text explaining how to search"""
    # Return a Command so the tool result also updates agent state,
    # signalling the middleware that the skill is now loaded.
    # NOTE: the custom "skill_search_loaded" key must be declared in the
    # agent's state schema (e.g. via the middleware's state_schema),
    # otherwise the update may be rejected; check the docs for your version.
    return Command(update={
        "skill_search_loaded": True,
        "messages": [ToolMessage(instructions, tool_call_id=tool_call_id)],
    })

agent = create_agent(
    model="openai:gpt-4",
    tools=[skill_search],            # Base tools
    middleware=[SkillMiddleware()],  # Adds the conditional tools
)
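For intuition, the gating inside wrap_model_call is just a name-based filter keyed on a state flag. Here is the same idea in plain Python (no LangChain imports), with tool names standing in for tool objects:

```python
# Plain-Python sketch of the SkillMiddleware gating: skill tools stay
# hidden until the state says skill_search has been loaded.
SKILL_TOOLS = {"fast_search", "rag", "deep_search"}

def visible_tools(all_tools: list[str], state: dict) -> list[str]:
    if state.get("skill_search_loaded"):
        return list(all_tools)  # skill loaded: expose everything
    return [t for t in all_tools if t not in SKILL_TOOLS]

tools = ["skill_search", "fast_search", "rag", "deep_search"]
print(visible_tools(tools, {}))                             # ['skill_search']
print(visible_tools(tools, {"skill_search_loaded": True}))  # all four tools
```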

Here is a working example:

from dataclasses import dataclass
from langchain.agents import create_agent

from langchain.agents.middleware import (
    AgentMiddleware,
    ModelRequest,
)
from langchain.agents.middleware.types import ModelResponse
from typing import Callable

def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

def get_capital(country: str) -> str:
    """Get capital for a given country."""
    return f"Ankara is the capital of {country}!"

@dataclass
class Context:
    user_expertise: str

class ExpertiseBasedToolMiddleware(AgentMiddleware):
    def wrap_model_call(
        self,
        request: ModelRequest,
        handler: Callable[[ModelRequest], ModelResponse],
    ) -> ModelResponse:
        user_level = request.runtime.context.user_expertise
        if user_level == "expert":
            # Experts get the weather tool
            tools = [get_weather]
        else:
            # Beginners get the capital tool
            tools = [get_capital]
        request.tools = tools
        return handler(request)


agent = create_agent(
    model="ollama:qwen3:latest",
    middleware=[ExpertiseBasedToolMiddleware()],
    context_schema=Context
)


agent.invoke(
    {"messages": "What's the weather in Paris?"},
    context={"user_expertise": "expert"},
)

agent.invoke(
    {"messages": "What's the capital of France?"},
    context={"user_expertise": "expert"},
)

agent.invoke(
    {"messages": "What's the weather in Paris?"},
    context={"user_expertise": "beginner"},
)

agent.invoke(
    {"messages": "What's the capital of France?"},
    context={"user_expertise": "beginner"},
)

Here’s the result:

>>> agent.invoke(
...     {"messages": "What's the weather in Paris?"},
...     context={"user_expertise": "expert"},
... )
{'messages': [HumanMessage(content="What's the weather in Paris?", additional_kwargs={}, response_metadata={}, id='fd32ef39-9991-48b0-ad4b-43618b686102'), AIMessage(content='', additional_kwargs={}, response_metadata={'model': 'qwen3:latest', 'created_at': '2025-10-24T11:29:31.675869818Z', 'done': True, 'done_reason': 'stop', 'total_duration': 6002301618, 'load_duration': 2260224250, 'prompt_eval_count': 146, 'prompt_eval_duration': 301365613, 'eval_count': 108, 'eval_duration': 3411260963, 'model_name': 'qwen3:latest', 'model_provider': 'ollama'}, id='lc_run--581ca80b-166a-41ee-b88b-9250db22af77-0', tool_calls=[{'name': 'get_weather', 'args': {'city': 'Paris'}, 'id': '4d383d1a-b471-4eb2-8c19-ace4376cd214', 'type': 'tool_call'}], usage_metadata={'input_tokens': 146, 'output_tokens': 108, 'total_tokens': 254})]}
>>>
>>> agent.invoke(
...     {"messages": "What's the capital of France?"},
...     context={"user_expertise": "expert"},
... )
{'messages': [HumanMessage(content="What's the capital of France?", additional_kwargs={}, response_metadata={}, id='e4a1d803-ef90-48fe-83fc-fcfba0e0a400'), AIMessage(content='The function provided is for retrieving weather information, which is not applicable to determining the capital of France. I cannot answer this query using the available tools. However, I can help you check the weather for a specific city if you need that information!', additional_kwargs={}, response_metadata={'model': 'qwen3:latest', 'created_at': '2025-10-24T11:29:36.771514846Z', 'done': True, 'done_reason': 'stop', 'total_duration': 5091181725, 'load_duration': 148051402, 'prompt_eval_count': 146, 'prompt_eval_duration': 66713003, 'eval_count': 152, 'eval_duration': 4828947310, 'model_name': 'qwen3:latest', 'model_provider': 'ollama'}, id='lc_run--9821a56b-f534-4378-b4a6-6bef958aeb20-0', usage_metadata={'input_tokens': 146, 'output_tokens': 152, 'total_tokens': 298})]}
>>>
>>> agent.invoke(
...     {"messages": "What's the weather in Paris?"},
...     context={"user_expertise": "beginner"},
... )
{'messages': [HumanMessage(content="What's the weather in Paris?", additional_kwargs={}, response_metadata={}, id='2a6d7967-4aa5-4755-8e27-53dff88ef6d5'), AIMessage(content="I don't have access to real-time weather data. However, you can check the current weather in Paris using a weather service or app. Would you like me to help you find one?", additional_kwargs={}, response_metadata={'model': 'qwen3:latest', 'created_at': '2025-10-24T11:29:41.473963567Z', 'done': True, 'done_reason': 'stop', 'total_duration': 4695013262, 'load_duration': 183956333, 'prompt_eval_count': 147, 'prompt_eval_duration': 195127970, 'eval_count': 135, 'eval_duration': 4280626766, 'model_name': 'qwen3:latest', 'model_provider': 'ollama'}, id='lc_run--0827dc43-5bcd-46a1-a4b4-e59ffbf998f8-0', usage_metadata={'input_tokens': 147, 'output_tokens': 135, 'total_tokens': 282})]}
>>>
>>> agent.invoke(
...     {"messages": "What's the capital of France?"},
...     context={"user_expertise": "beginner"},
... )
{'messages': [HumanMessage(content="What's the capital of France?", additional_kwargs={}, response_metadata={}, id='ba4fe7dc-bf88-41d6-aec1-07ebb057797e'), AIMessage(content='', additional_kwargs={}, response_metadata={'model': 'qwen3:latest', 'created_at': '2025-10-24T11:29:57.330753915Z', 'done': True, 'done_reason': 'stop', 'total_duration': 3192402879, 'load_duration': 187762416, 'prompt_eval_count': 147, 'prompt_eval_duration': 66327797, 'eval_count': 92, 'eval_duration': 2903858309, 'model_name': 'qwen3:latest', 'model_provider': 'ollama'}, id='lc_run--debcdff2-0c74-42ed-b692-8edca87102d4-0', tool_calls=[{'name': 'get_capital', 'args': {'country': 'France'}, 'id': '8f8e86c7-a1ea-44f8-b682-a853e5b1c932', 'type': 'tool_call'}], usage_metadata={'input_tokens': 147, 'output_tokens': 92, 'total_tokens': 239})]}


Awesome, thanks!

Side question. I’m surprised you chose to put user_expertise in the context instead of the state. Is that better for some reason?

No, no. It was just for the sake of the old example. I guess you can use anything you want: context, state, long-term memory.


Registering dynamic tools via middleware throws an error for tools that are neither dicts nor plain functions, because of this logic. The error message suggests assigning tools via the middleware.tools attribute; however, that is a class-level attribute, which makes the middleware stateful.

If this validation is skipped for dict/function tools, does it need to apply to BaseTool instances at all? Should the validation be configurable to support dynamic tools?

        # Check if any requested tools are unknown CLIENT-SIDE tools
        unknown_tool_names = []
        for t in request.tools:
            # Only validate BaseTool instances (skip built-in dict tools)
            if isinstance(t, dict):
                continue
            if isinstance(t, BaseTool) and t.name not in available_tools_by_name:
                unknown_tool_names.append(t.name)

        if unknown_tool_names:
            available_tool_names = sorted(available_tools_by_name.keys())
            msg = (
                f"Middleware returned unknown tool names: {unknown_tool_names}\n\n"
                f"Available client-side tools: {available_tool_names}\n\n"
                "To fix this issue:\n"
                "1. Ensure the tools are passed to create_agent() via "
                "the 'tools' parameter\n"
                "2. If using custom middleware with tools, ensure "
                "they're registered via middleware.tools attribute\n"
                "3. Verify that tool names in ModelRequest.tools match "
                "the actual tool.name values\n"
                "Note: Built-in provider tools (dict format) can be added dynamically."
            )
            raise ValueError(msg)
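To illustrate the statefulness concern, here is a plain-Python sketch (no LangChain) of what a class-level tools attribute implies: every instance shares the same list, so mutating it while handling one request leaks into all other instances.

```python
# Plain-Python sketch: a class-level `tools` attribute is shared by
# every instance of the middleware class.
class Middleware:
    tools = []  # class-level attribute, shared across instances

a = Middleware()
b = Middleware()
a.tools.append("fast_search")  # mutates the shared class attribute
print(b.tools)                 # ['fast_search'] -- b sees a's tool
```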

Use Case:

  • The agent is created without any tools.
  • The middleware fetches the tools from an MCP server, and they depend on runtime.context.user_name.
  • MCPServer.getTools() requires a user header (hence the tools are dynamic).
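A plain-Python mock of this use case (MCPServer and its method are stand-ins for a real MCP client, not an actual API): because the tool list depends on a per-user header, it can only be known at request time, never at create_agent() time.

```python
# Mock of per-user dynamic tool discovery. MCPServer is a stand-in;
# the point is that the tool list depends on a per-user header, so it
# cannot be enumerated upfront when the agent is created.
class MCPServer:
    _catalog = {
        "alice": ["fast_search", "rag"],
        "bob": ["deep_search"],
    }

    def get_tools(self, user_header: str) -> list[str]:
        # a real server would authenticate the header and return tool specs
        return self._catalog.get(user_header, [])

server = MCPServer()
print(server.get_tools("alice"))  # ['fast_search', 'rag']
print(server.get_tools("bob"))    # ['deep_search']
```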