LangGraph + OpenAI Responses API: 400 Error 'web_search_call' was provided without its required 'reasoning' item

Hi everyone,

I have been working on a GPT-5 agent with LangGraph, but I keep running into errors like:

Item 'rs_...39' of type 'reasoning' was provided without its required following item.

I can’t find a stable way to trigger this error; it seems pretty random, though it is more likely to occur when the agent does a long chain of tool calls. Interestingly, this error only occurs when I have both web_search and some custom tools enabled at the same time; neither individually produces it.

After poking around for a bit, I realized that after a tool call, some of the reasoning items in the original response were not fed back to the API, and the order of the tool-call items gets scrambled, as shown here: It ate my reasoning · GitHub
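For reference, here is a hand-written sketch (placeholder IDs, not my actual payload) of the item ordering I understand the Responses API to expect on the follow-up request, where each reasoning item is immediately followed by the item it produced:

input_items = [
    # The reasoning item and the tool call it produced must stay
    # adjacent and in this order when fed back to the API.
    {"type": "reasoning", "id": "rs_abc123", "summary": []},
    {"type": "function_call", "call_id": "call_1", "name": "some_tool",
     "arguments": '{"input": "..."}'},
    {"type": "function_call_output", "call_id": "call_1", "output": "tool response"},
]

In my traces, the reasoning item that preceded a web_search_call was dropped from this list, which matches the wording of the 400 error.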

I could not find a GitHub issue related to this behavior, though I did find this post that seems somewhat related.

Because my source code is part of ongoing research that I cannot disclose, I will briefly describe how it is set up (which looks pretty standard to me):

from langchain.tools import tool
from langgraph.prebuilt import create_react_agent
from langgraph.graph.state import CompiledStateGraph
from pydantic import BaseModel


class SomeClass(BaseModel):
    some_output: str


@tool
def some_tool(input: str) -> str:
    """Tool logic here"""
    return "tool response"


def get_agent(system_prompt: str, response_class=SomeClass) -> CompiledStateGraph:
    agent = create_react_agent(
        model="gpt-5",
        tools=[some_tool, {"type": "web_search"}],
        debug=True,
        prompt=system_prompt,
        response_format=response_class,
    )
    return agent


def ask_llm(input_data) -> SomeClass:
    sys_text = "<SOME_PROMPT_TEXT>"
    agent = get_agent(sys_text, SomeClass)
    message = f"<SOME_PROMPT_TEXT{input_data}>"
    response = agent.invoke({"messages": [{"role": "user", "content": message}]})
    answer: SomeClass = response["structured_response"]
    return answer

I tried to reproduce it so that I could open an issue, but couldn’t find a stable repro after burning through $100+ in tokens, so I gave up.

Any insights or suggestions are appreciated.

Thanks in advance!

These errors can arise from the original LangChain message format for the Responses API. You can resolve them by initializing the model with output_version="responses/v1" as described here.

from langchain.chat_models import init_chat_model
from langchain.tools import tool
from langgraph.prebuilt import create_react_agent

llm = init_chat_model(
    "openai:gpt-5",
    output_version="responses/v1",
)

@tool
def some_tool(input: str) -> str:
    """Tool logic here"""
    return "tool response"

agent = create_react_agent(
    model=llm,
    tools=[some_tool, {"type": "web_search"}],
)

This will reformat the content of the resulting AIMessages. It will be the default behavior in the upcoming LangChain 1.0 releases (there are alpha versions available to try now; see the docs here), so I’d expect your existing code to work out of the box in 1.0.
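If it helps to verify the fix, you can inspect the returned message; with responses/v1 the Responses API items are kept as ordered, typed content blocks (the block shapes below are illustrative, so check what you actually get back):

result = agent.invoke({"messages": [{"role": "user", "content": "..."}]})
ai_message = result["messages"][-1]

# With output_version="responses/v1", items such as reasoning,
# web_search_call, and text survive as ordered blocks in .content,
# so they can be round-tripped back to the API intact.
for block in ai_message.content:
    if isinstance(block, dict):
        print(block.get("type"))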

Alternatively, if you don’t need OpenAI’s Zero Data Retention and the errors persist, you can enable use_previous_response_id as described here.

llm = init_chat_model(
    "openai:gpt-5",
    use_previous_response_id=True,
)

This relies entirely on OpenAI’s server-side persistence, so it doesn’t support client-side management of conversation history.
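For completeness, here is a minimal sketch of multi-turn usage under this setting (hypothetical prompts; you still pass the history to invoke, but the integration threads the response ID for you):

# With use_previous_response_id=True, the integration sends the prior
# response's ID as previous_response_id, and OpenAI restores the
# reasoning items server-side instead of relying on what you send back.
first = llm.invoke([{"role": "user", "content": "Search for recent GPT-5 news."}])
second = llm.invoke(
    [
        {"role": "user", "content": "Search for recent GPT-5 news."},
        first,
        {"role": "user", "content": "Summarize the top result."},
    ]
)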


Thank you so much! I will go try it out!