create_react_agent.invoke throwing error

Here’s my code:
react_prompt = hub.pull("hwchase17/react")
react_agent = create_react_agent(model=chat, tools=execute_tools, prompt=react_prompt)
response = react_agent.invoke({"input": "What is 5+3?"})

What I would like to know is: do I need to call .partial() on the prompt to fill in the values for tools and tool_names? I am getting the error below. My understanding was that create_react_agent takes care of this internally.

KeyError: "Input to PromptTemplate is missing variables {'input', 'tools', 'tool_names', 'agent_scratchpad'}. Expected: ['agent_scratchpad', 'input', 'tool_names', 'tools'] Received: ['messages', 'is_last_step', 'remaining_steps']\nNote: if you intended {input} to be part of the string and not a variable, please escape it with double curly braces like: '{{input}}'.\nFor troubleshooting, visit: INVALID_PROMPT_INPUT"

You don’t need to call .partial() on the prompt; your understanding is correct.
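For context, LangChain’s create_react_agent pre-fills the {tools} and {tool_names} slots from the tools you pass it, which is why you never call .partial() yourself. Here is a stdlib sketch of the idea only (dict-based tools stand in for LangChain Tool objects, and fill_tool_variables is a hypothetical name for illustration, not the library’s API):

```python
def fill_tool_variables(template: str, tools) -> str:
    """Pre-fill the {tools} and {tool_names} slots, mimicking what the agent factory does."""
    # One "name: description" line per tool, plus a comma-separated list of names.
    return template.replace(
        "{tools}", "\n".join(f"{t['name']}: {t['description']}" for t in tools)
    ).replace("{tool_names}", ", ".join(t["name"] for t in tools))

tools = [{"name": "calculator", "description": "Evaluates math expressions"}]
print(fill_tool_variables("You have access to:\n{tools}\nUse one of [{tool_names}].", tools))
```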

I think you’re using ChatOpenAI with create_react_agent, but that prompt is designed for completion models. Just replace ChatOpenAI with OpenAI and it should work!

from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_openai import OpenAI

#...
model = OpenAI()

react_prompt = hub.pull("hwchase17/react")
react_agent = create_react_agent(llm=model, tools=execute_tools, prompt=react_prompt)

# Wrap in an AgentExecutor to actually run the reasoning loop
agent_executor = AgentExecutor(agent=react_agent, tools=execute_tools, verbose=True)

response = agent_executor.invoke({"input": "What is 5+3?"})

Thanks, @niilooy. We are using two different create_react_agent implementations: you are using the one from LangChain and I am using the one from LangGraph. The one from LangChain works OK (with ChatOpenAI), while the one from LangGraph is giving me missing-variable issues. See if you can reproduce it yourself. I may have missed something here…

Hey @ibrahim, thanks for clarifying that you are using the LangGraph implementation. In that case, use this instead; it’s way cleaner, and a default prompt is built in as well!

from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

model = ChatOpenAI()

#...

agent_executor = create_react_agent(model, execute_tools)

# Invoke with messages format
response = agent_executor.invoke({"messages": [("user", "What is 5+3?")]})
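As an aside, the ("user", "What is 5+3?") tuple above is just shorthand for a role/content message. A stdlib sketch of what that shorthand expands to (the helper name to_message_dicts is mine, purely for illustration):

```python
def to_message_dicts(messages):
    """Expand ("role", "content") tuples into the dict shape chat-model APIs expect."""
    return [{"role": role, "content": content} for role, content in messages]

print(to_message_dicts([("user", "What is 5+3?")]))
# [{'role': 'user', 'content': 'What is 5+3?'}]
```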

@niilooy, I tried it…and it worked. Thank you! Now, my next question: if I pass the prompt, it throws the error again. So how should we handle prompts? Thanks again for all the help.

In LangGraph, you cannot inject the "hwchase17/react" prompt: it is a string PromptTemplate expecting the {tools}, {tool_names}, {input} and {agent_scratchpad} variables, while the LangGraph agent passes its state as messages (which is exactly the mismatch in your error). You can structure your prompt as a plain system prompt instead, something like this to start:

system_prompt = """You are a helpful math assistant that can perform calculations.

You have access to the following tool:

<tool>
<name>calculator</name>
<description>Performs basic mathematical calculations including arithmetic, trigonometry, and common math functions.</description>
<parameters>
<parameter name="expression" type="string" required>Mathematical expression to evaluate (e.g., "2+3", "10*5", "sqrt(16)", "sin(30)")</parameter>
</parameters>
</tool>

.
.
.
(Rest of your instructions...)
"""

You can now pass your defined prompt and tools into the create_react_agent function via the "prompt" and "tools" parameters!
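The thread never shows the calculator tool itself, so here is a minimal stdlib sketch matching the tool description in the prompt above (the calculate name and the restriction to math-module functions are my assumptions; you would wrap it with LangChain’s @tool decorator before putting it in your tools list):

```python
import math

def calculate(expression: str) -> str:
    """Evaluate a basic math expression such as "2+3", "sqrt(16)" or "sin(30)"."""
    # Expose only math-module names and strip builtins, so eval cannot
    # reach anything beyond arithmetic and math helpers.
    allowed = {name: getattr(math, name) for name in dir(math) if not name.startswith("_")}
    return str(eval(expression, {"__builtins__": {}}, allowed))

print(calculate("5+3"))       # "8"
print(calculate("sqrt(16)"))  # "4.0"
```

With LangChain installed, you would decorate this with @tool and pass it alongside your system prompt, e.g. create_react_agent(model, tools=[calculate], prompt=system_prompt).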

Thanks, @niilooy . Will play with it and get back to you if I need help. Much appreciated!


@niilooy, quick question: is there a way to see the Thought/Action/Observation output the way a LangChain ZERO_SHOT_REACT_AGENT would display it after each step? I am seeing pieces of it using stream, but nothing like what the former displayed. Is there a way to print it out after each step? I also feel that my results have gotten worse since I moved to LangGraph. The logic hasn’t changed, but the LLM is not taking the correct actions the way it did earlier (it feels like it’s not reasoning as well). I want to see the thought/action/observation trace to figure out what the issue might be. Thanks.

Hey @ibrahim. You can add instructions to your prompt to follow a Thought/Action/Observation pattern, then replace your .invoke call with a streaming loop and build the printing around that. You can try something like this:

Example prompt instructions to add
"""
...

For each user question, follow this pattern:
1. **Thought**: Think about what needs to be calculated
2. **Action**: Use the calculator tool with the appropriate expression  
3. **Observation**: Note the result from the tool
4. **Thought**: Reflect on whether this answers the question or if more steps are needed
...
"""

Example function for streaming

from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

def print_steps(question: str):
    """Print each step of the ReAct process"""
    print(f"\n{'='*60}")
    print(f"QUESTION: {question}")
    print(f"{'='*60}")

    step_count = 0

    # Stream the agent's response one graph node at a time
    for chunk in agent.stream({"messages": [HumanMessage(content=question)]}):
        for node_name, node_data in chunk.items():
            messages = node_data.get("messages", [])

            if node_name == "agent":
                step_count += 1
                for message in messages:
                    if isinstance(message, AIMessage):
                        print(f"\n--- STEP {step_count}: AGENT REASONING ---")
                        print(f"Content: {message.content}")

                        # Show any tool calls the model decided to make
                        if getattr(message, "tool_calls", None):
                            for tool_call in message.tool_calls:
                                print(f"Tool Call: {tool_call['name']}")
                                print(f"Arguments: {tool_call['args']}")

            elif node_name == "tools":
                # Tool results come back from the "tools" node as ToolMessages
                for message in messages:
                    if isinstance(message, ToolMessage):
                        print(f"\n--- TOOL OBSERVATION ---")
                        print(f"Tool: {message.name}")
                        print(f"Result: {message.content}")

    print(f"\n{'='*60}")

Hope this’ll work! ✨

Thanks, @niilooy.
