Error - LangGraph Assistants: Building Configurable AI Agents

I’ve deployed the repo shared in this tutorial about Configurable AI Agents, where I should be able to create new Assistants with different inputs. However, when I deploy the repo, only my supervisor_prebuilt agent is configurable. The simple react agent (the configurable one) doesn’t show me the fields to adjust when creating a new assistant, such as Prompt, Model and Tools. Is something missing in this tutorial repo, or do I need to adjust something? I only need a single configurable agent for now, not a supervisor, and I’m struggling to replicate it as shown in the tutorial.

The relevant code is below.

agents/react_agent/graph.py:

from agents.react_agent.tools import get_tools
from langgraph.prebuilt import create_react_agent
from agents.utils import load_chat_model

from agents.react_agent.configuration import Configuration
from langchain_core.runnables import RunnableConfig



async def make_graph(config: RunnableConfig):
    # Read the assistant's configurable values (empty dict if none were set)
    configurable = config.get("configurable", {})

    # Get the individual values from the configuration, with defaults
    llm = configurable.get("model", "openai/gpt-4.1")
    selected_tools = configurable.get("selected_tools", ["get_todays_date"])
    prompt = configurable.get("system_prompt", "You are a helpful assistant.")

    # Specify the name for use in the supervisor architecture
    name = configurable.get("name", "react_agent")

    # Create the prebuilt ReAct agent as a compiled, executable graph
    # (you can customize this further, e.g. by adding interrupt points)
    graph = create_react_agent(
        model=load_chat_model(llm), 
        tools=get_tools(selected_tools),
        prompt=prompt, 
        config_schema=Configuration,
        name=name
    )

    return graph

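For reference, this is how I understand the config is meant to reach the factory. The following is only a minimal sketch (the model, prompt and message are made up for illustration), calling make_graph directly with a config dict of the shape it reads ("configurable" holding the fields above):

import asyncio

from agents.react_agent.graph import make_graph


async def main():
    # Same shape of config that make_graph reads at runtime
    config = {
        "configurable": {
            "model": "openai/gpt-4.1-mini",
            "selected_tools": ["get_todays_date"],
            "system_prompt": "You are a terse assistant.",
        }
    }
    graph = await make_graph(config)

    result = await graph.ainvoke(
        {"messages": [("user", "What is today's date?")]},
        config=config,
    )
    print(result["messages"][-1].content)


asyncio.run(main())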

agents/react_agent/configuration.py:

from typing import Annotated, Literal
from pydantic import BaseModel, Field


class Configuration(BaseModel):
    """The configuration for the agent."""

    system_prompt: str = Field(
        default="You are a helpful AI assistant.",
        description="The system prompt to use for the agent's interactions. "
        "This prompt sets the context and behavior for the agent."
    )

    model: Annotated[
        Literal[
            "anthropic/claude-sonnet-4-20250514",
            "anthropic/claude-3-5-sonnet-latest",
            "openai/gpt-4.1",
            "openai/gpt-4.1-mini"
        ],
        {"__template_metadata__": {"kind": "llm"}},
    ] = Field(
        default="anthropic/claude-3-5-sonnet-latest",
        description="The name of the language model to use for the agent's main interactions. "
        "Should be in the form: provider/model-name."
    )

    selected_tools: list[Literal["finance_research", "advanced_research_tool", "basic_research_tool", "get_todays_date"]] = Field(
        default=["get_todays_date"],
        description="The list of tools to use for the agent's interactions. "
        "This list should contain the names of the tools to use."
    )

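Since Configuration is a plain pydantic model, I assume it could also be used to fill in the defaults instead of the repeated .get() calls in make_graph. A rough sketch of what I mean (assuming pydantic v2, and filtering to known fields so platform-internal keys such as thread_id are ignored):

from agents.react_agent.configuration import Configuration


def parse_config(config: dict) -> Configuration:
    """Build a Configuration from a RunnableConfig-style dict, keeping only known fields."""
    configurable = config.get("configurable", {})
    known = {k: v for k, v in configurable.items() if k in Configuration.model_fields}
    return Configuration(**known)


cfg = parse_config({"configurable": {"model": "openai/gpt-4.1"}})
print(cfg.model)           # "openai/gpt-4.1"
print(cfg.system_prompt)   # falls back to the default prompt
print(cfg.selected_tools)  # ["get_todays_date"]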

agents/react_agent/tools.py:

from typing import Callable, Optional, cast, Any
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_community.tools.yahoo_finance_news import YahooFinanceNewsTool
from langchain_core.tools import tool
from datetime import datetime

@tool
async def finance_research(ticker_symbol: str) -> Optional[list[dict[str, Any]]]:
    """Search for finance research, must be a ticker symbol."""
    wrapped = YahooFinanceNewsTool()
    result = await wrapped.ainvoke({"query": ticker_symbol})
    return cast(list[dict[str, Any]], result)

@tool   
async def advanced_research_tool(query: str) -> Optional[list[dict[str, Any]]]:
    """Perform in-depth research for blog content.
    
    This tool conducts comprehensive web searches with higher result limits and
    deeper analysis, ideal for creating well-researched blog posts backed by
    authoritative sources.
    """
    # Using Tavily with higher result count for more comprehensive research
    wrapped = TavilySearchResults(
        max_results=10,  # Higher result count for more thorough research
        search_depth="advanced"  # More thorough search
    )
    result = await wrapped.ainvoke({"query": query})
    return cast(list[dict[str, Any]], result)

@tool
async def basic_research_tool(query: str) -> Optional[list[dict[str, Any]]]:
    """Research trending topics for social media content.
    
    This tool performs quick searches optimized for trending and viral content,
    returning concise results ideal for social media post creation.
    """
    # Using Tavily with lower result count and quicker search for social content
    wrapped = TavilySearchResults(
        max_results=5,  # Fewer results for quick, social-oriented lookups
        search_depth="basic",  # Faster, less comprehensive search
        include_raw_content=False,  # Just the highlights
        include_images=True  # Social posts often benefit from images
    )
    result = await wrapped.ainvoke({"query": f"trending {query}"})
    return cast(list[dict[str, Any]], result)

@tool
async def get_todays_date() -> str:
    """Get the current date."""
    return datetime.now().strftime("%Y-%m-%d")


def get_tools(selected_tools: list[str]) -> list[Callable[..., Any]]:
    """Convert a list of tool names to actual tool functions."""
    tools = []
    for tool_name in selected_tools:  # avoid shadowing the imported @tool decorator
        if tool_name == "finance_research":
            tools.append(finance_research)
        elif tool_name == "advanced_research_tool":
            tools.append(advanced_research_tool)
        elif tool_name == "basic_research_tool":
            tools.append(basic_research_tool)
        elif tool_name == "get_todays_date":
            tools.append(get_todays_date)

    return tools

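As a side note, the if/elif chain in get_tools could be replaced by a name-to-tool dict inside tools.py, so that adding a tool only means adding one dict entry. A sketch of that alternative (behaviour otherwise unchanged):

# Name -> tool mapping; tool selection then becomes a single lookup per name
TOOL_REGISTRY: dict[str, Callable[..., Any]] = {
    "finance_research": finance_research,
    "advanced_research_tool": advanced_research_tool,
    "basic_research_tool": basic_research_tool,
    "get_todays_date": get_todays_date,
}


def get_tools(selected_tools: list[str]) -> list[Callable[..., Any]]:
    """Convert a list of tool names to the actual tool functions."""
    return [TOOL_REGISTRY[name] for name in selected_tools if name in TOOL_REGISTRY]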

langgraph.json:

{
  "dependencies": ["."],
  "graphs": {
    "react_agent_no_config": "./agents/react_agent/graph_without_config.py:make_graph",
    "react_agent": "./agents/react_agent/graph.py:make_graph",
    "supervisor_prebuilt": "./agents/supervisor/supervisor_prebuilt.py:make_supervisor_graph"
  },
  "env": ".env"
}


agents/utils.py:

from langchain.chat_models import init_chat_model
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import BaseMessage


def get_message_text(msg: BaseMessage) -> str:
    """Get the text content of a message."""
    content = msg.content
    if isinstance(content, str):
        return content
    elif isinstance(content, dict):
        return content.get("text", "")
    else:
        txts = [c if isinstance(c, str) else (c.get("text") or "") for c in content]
        return "".join(txts).strip()


def load_chat_model(fully_specified_name: str) -> BaseChatModel:
    """Load a chat model from a fully specified name.

    Args:
        fully_specified_name (str): String in the format 'provider/model'.
    """
    provider, model = fully_specified_name.split("/", maxsplit=1)
    return init_chat_model(model, model_provider=provider)

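For completeness, load_chat_model only splits on the first "/", so model names that contain further dashes or dots work as expected:

llm = load_chat_model("anthropic/claude-3-5-sonnet-latest")
# equivalent to: init_chat_model("claude-3-5-sonnet-latest", model_provider="anthropic")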

As the screenshots show, I cannot pass any configurable settings for my react_agent, but I can for my supervisor_prebuilt graph.