Create_react_agent not working

My LangGraph create_react_agent does not seem to find the input fields I am passing in my invoke. I am not using an AgentExecutor since I want to be able to decide the course of action after each call. Below is a simplified version of my code where the first call to the agent fails.

This is the error I get -
KeyError: "Input to ChatPromptTemplate is missing variables {'input', 'tools', 'agent_scratchpad', 'tool_names'}. Expected: ['agent_scratchpad', 'input', 'tool_names', 'tools'] Received: ['messages', 'is_last_step', 'remaining_steps']\nNote: if you intended {input} to be part of the string and not a variable, please escape it with double curly braces like: '{{input}}'.\nFor troubleshooting, visit: INVALID_PROMPT_INPUT | 🦜️🔗 LangChain"

Can someone please tell me what to do here? Thanks.

from dotenv import load_dotenv
load_dotenv()

from langchain_openai import ChatOpenAI
from langchain.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
)

from tools.sql import (run_query_tool, get_element_tool)
from langgraph.prebuilt import create_react_agent
from langchain.agents.format_scratchpad.log_to_messages import format_log_to_messages

execute_tools = [run_query_tool, get_element_tool]

prompt3 = ChatPromptTemplate(
    messages=[
        SystemMessagePromptTemplate.from_template("""
        Answer the following questions as best you can. You have access to the following tools:
        {tools}

        Use the following format:

        Question: the input question you must answer
        Thought: you should always think about what to do
        Action: the action to take, should be one of [{tool_names}]
        Action Input: the input to the action
        Observation: the result of the action
        ... (this Thought/Action/Action Input/Observation can repeat N times)
        Thought: I now know the final answer
        Final Answer: the final answer to the original input question

        Begin!

        Question: {input}
        Thought:{agent_scratchpad}
        """),
    ]
)

chat = ChatOpenAI(
    model='gpt-4o',
    temperature=0,
    top_p=1.0,
    verbose=True,
)

react_agent = create_react_agent(model=chat, tools=execute_tools, prompt=prompt3)

user_input = "Go to website https://www.snapdeal.com/"
agent_scratchpad = ""
intermediate_steps = []  # empty in this simplified version

inputs = {
    "input": user_input,
    "agent_scratchpad": agent_scratchpad,
    "tool_names": ", ".join([tool.name for tool in execute_tools]),
    "tools": "\n".join([f"{tool.name}: {tool.description}" for tool in execute_tools]),
    "intermediate_steps": intermediate_steps,
}

response = react_agent.invoke(inputs)
print(f"RESPONSE: {response}")

Hey! With LangGraph's create_react_agent, try changing your invoke to a messages-based format: inputs = {"messages": [HumanMessage(content=user_input)]}. This lets the agent handle the scratchpad, tools, and tool names internally, so you don't pass agent_scratchpad, tools, etc. manually.
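
Roughly like this (a minimal sketch assuming your chat model and execute_tools from the snippet above; the ReAct-style template is dropped and a plain system string is passed as the prompt, since the prebuilt agent fills in the scratchpad and tool details itself):

from langchain_core.messages import HumanMessage
from langgraph.prebuilt import create_react_agent

react_agent = create_react_agent(
    model=chat,
    tools=execute_tools,
    prompt="Answer the following questions as best you can, using the tools when needed.",
)

# The graph's state only expects a list of messages; no input/tools/tool_names keys.
inputs = {"messages": [HumanMessage(content=user_input)]}
response = react_agent.invoke(inputs)
print(response["messages"][-1].content)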

For step-by-step control (since you want to avoid AgentExecutor), use .stream(inputs) to process the run incrementally, inspecting each step (e.g., a tool call or model response) in a loop and deciding what to do next.
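
Continuing from the sketch above, something like this (stream_mode="values" emits the full state after each step, so you can look at the newest message before deciding):

for state in react_agent.stream(inputs, stream_mode="values"):
    last_message = state["messages"][-1]
    last_message.pretty_print()
    # Decide here: stop, ask the user, or route to different logic depending
    # on whether last_message carries tool calls or a final answer.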

This should address your issue, but if it still persists, verify that any literal braces in your prompt are escaped with double curly braces (e.g. {{input}}) so the template does not treat them as variables.
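
A minimal illustration of that escaping, in case it applies to your template:

from langchain.prompts import ChatPromptTemplate

# {{input}} renders as the literal characters {input}; {question} stays a variable.
escaped = ChatPromptTemplate.from_template(
    "Print the literal text {{input}} and then answer: {question}"
)
print(escaped.format_messages(question="What is 2 + 2?"))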

You might want to check this guide out too: How to migrate from legacy LangChain agents to LangGraph | 🦜️🔗 LangChain

Thank you. Just to confirm, are you saying that I can use .stream to process one action and then use a conditional_edge to decide on the next step?


I am dealing with this same exact issue, and the recommended change of

changing your invoke to use a messages-based format: inputs = {"messages": [HumanMessage(content=user_input)]}

is not fixing anything. I'm still getting the same error as before (the one posted in the original message above) when I switch my invoke call to the format you recommended. Is there an example or tutorial you could point to where this kind of setup works as intended? That is, one where you create a react agent with access to a set of tools, feed it a prompt with those same variables to inject (tools, tool_names, input, agent_scratchpad), and then invoke it with a question and successfully get an answer back?

Thank you very much.

Hi @terrance.oneill, this might help.

Also, please check this thread; it can clarify the differences between the LangChain and LangGraph implementations of create_react_agent.
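
Roughly, the difference looks like this (a sketch, assuming current package layouts and the chat, execute_tools, and prompt3 objects from the snippet above): the LangGraph prebuilt agent only takes messages and manages the scratchpad itself, while the legacy LangChain create_react_agent is the one that expects the {tools}/{tool_names}/{input}/{agent_scratchpad} prompt and runs inside an AgentExecutor.

# LangGraph prebuilt agent: state is {"messages": [...]}; a plain system
# string (or SystemMessage) is enough as the prompt.
from langgraph.prebuilt import create_react_agent
graph_agent = create_react_agent(model=chat, tools=execute_tools,
                                 prompt="You are a helpful assistant.")
graph_agent.invoke({"messages": [("user", "hello")]})

# Legacy LangChain agent: the ReAct prompt with {tools}, {tool_names},
# {input} and {agent_scratchpad} is required, and it runs via AgentExecutor.
from langchain.agents import AgentExecutor
from langchain.agents import create_react_agent as create_legacy_react_agent
legacy_agent = create_legacy_react_agent(llm=chat, tools=execute_tools, prompt=prompt3)
executor = AgentExecutor(agent=legacy_agent, tools=execute_tools)
executor.invoke({"input": "hello"})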

If you still need some help, do drop a code snippet and I can try to help!