LangChain JS + OpenAI Responses API: stateless GPT-5 reasoning messages cause `400 …reasoning without its required following item`

Hello everybody,

I’m trying to figure out how to implement a stateless interaction with a GPT-5 model, with reasoning.
This is my very basic code snippet, where I force the model to reason and call a tool.

// ...imports

const tools: DynamicStructuredTool[] = [ ... ];

// Instantiate the model
let runnable: Runnable = new ChatOpenAI({
	modelName: "gpt-5-2025-08-07",
	temperature: 1,
	openAIApiKey: "****",
	streaming: false,
	useResponsesApi: true
}).bindTools(tools);

const messages: BaseMessage[] = [
	new SystemMessage("*****"),
	new HumanMessage("*****")
];
const callOptions = {};

let response: BaseMessage = await runnable.invoke(messages, callOptions);
if ("tool_calls" in response && Array.isArray(response.tool_calls) && response.tool_calls.length > 0) {
	const toolResults = [];
	for (const toolCall of response.tool_calls) {
		const { name, args } = toolCall;
		const tool = tools.find(t => t.name === name);
		let output = await tool.func(args);
		const toolMessageOutput: ToolMessageFieldsWithToolCallId = {
			tool_call_id: toolCall.id,
			content: typeof output === "string" ? output : JSON.stringify(output),
		};
		toolResults.push(new ToolMessage(toolMessageOutput));	
	}
	messages.push(response, ...toolResults);
} 

response = await runnable.invoke(messages, callOptions);

…and the last `invoke` call throws this error:

BadRequestError: 400 Item 'rs_68daad45e5d8819592cbd2a8230ec4df019eeee1dd56aefc' of type 'reasoning' was provided without its required following item.
    at Function.generate (C:\****\node-projects\bookmaster\node_modules\openai\src\core\error.ts:72:14)        
    at OpenAI.makeStatusError (C:\****\node-projects\bookmaster\node_modules\openai\src\client.ts:445:28)      
    at OpenAI.makeRequest (C:\****\node-projects\bookmaster\node_modules\openai\src\client.ts:668:24)
    at processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async C:\****\node-projects\bookmaster\node_modules\@langchain\openai\dist\chat_models.cjs:1202:24      
    at async RetryOperation._fn (C:\****\node-projects\bookmaster\node_modules\p-retry\index.js:50:12) {       
  status: 400,
  headers: { ... },
  requestID: 'req_4a8f20b187070e4ea9218ca13f0ff645',
  error: {
    message: "Item 'rs_68daad45e5d8819592cbd2a8230ec4df019eeee1dd56aefc' of type 'reasoning' was provided without its required following item.",
    type: 'invalid_request_error',
    param: 'input',
    code: null
  },
  code: null,
  param: 'input',
  type: 'invalid_request_error',
  attemptNumber: 1,
  retriesLeft: 6
}

From what I understand, I need to resend the reasoning-type message (like an echo), but LangChain does not provide a `ReasoningMessage extends BaseMessage` type that I could insert into the conversation.
How can I do this?
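For context on what "echoing" means here: at the raw Responses API level, `input` is a flat list of items, and my understanding (item shapes are my assumption, ids are placeholders) is that each `reasoning` item must be immediately followed by the item it produced, e.g. the `function_call`, with the `function_call_output` after that. A minimal sketch of the ordering the API seems to require:

```typescript
// Sketch of the raw Responses API input ordering (placeholder ids, assumed item shapes).
// A 'reasoning' item must be directly followed by the item it reasoned for.
type InputItem =
	| { type: "message"; role: "user" | "system"; content: string }
	| { type: "reasoning"; id: string; summary: unknown[] }
	| { type: "function_call"; id: string; call_id: string; name: string; arguments: string }
	| { type: "function_call_output"; call_id: string; output: string };

const input: InputItem[] = [
	{ type: "message", role: "user", content: "*****" },
	{ type: "reasoning", id: "rs_placeholder", summary: [] }, // must come first...
	{ type: "function_call", id: "fc_placeholder", call_id: "call_placeholder", name: "myTool", arguments: "{}" }, // ...immediately followed by its tool call
	{ type: "function_call_output", call_id: "call_placeholder", output: "{\"success\":true}" },
];

// The 400 in this thread means the item after a 'reasoning' item was missing.
const reasoningIndex = input.findIndex((i) => i.type === "reasoning");
console.log(input[reasoningIndex + 1]?.type); // "function_call"
```

Dropping the `reasoning` item (or separating it from its `function_call`) is exactly what triggers the two 400 errors below.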

Here is the full stateless conversation I get:

First interaction:

[
  SystemMessage {
    "content": "*****",
    "additional_kwargs": {},
    "response_metadata": {}
  },
  HumanMessage {
    "content": [
      { "type": "text", "text": "*****" }
    ],
    "additional_kwargs": {},
    "response_metadata": {}
  }
]

First Model response:

  AIMessage {
    "content": [],
    "additional_kwargs": {
      "reasoning": {
        "id": "rs_68daad45e5d8819592cbd2a8230ec4df019eeee1dd56aefc",
        "type": "reasoning",
        "summary": []
      },
      "__openai_function_call_ids__": {
        "call_vfJ1fZdxk8IhTRCjYnf0b3le": "fc_68daad7cad408195868dfdaf75c0b158019eeee1dd56aefc"
      }
    },
    "response_metadata": {
      "id": "resp_68daad453f6081959038cc97fb020159019eeee1dd56aefc",
      "estimatedTokenUsage": {
        "promptTokens": 1348,
        "completionTokens": 3610,
        "totalTokens": 4958
      },
      "model": "gpt-5-2025-08-07",
      "created_at": 1759161669,
      "incomplete_details": null,
      "metadata": {},
      "object": "response",
      "status": "completed",
      "user": null,
      "service_tier": "default",
      "model_name": "gpt-5-2025-08-07"
    },
    "tool_calls": [
      {
        "name": "******",
        "args": { ... },
        "type": "tool_call",
        "id": "call_vfJ1fZdxk8IhTRCjYnf0b3le"
      }
    ],
    "invalid_tool_calls": [],
    "usage_metadata": {
      "input_tokens": 1348,
      "input_tokens_details": {
        "cached_tokens": 0
      },
      "output_tokens": 3610,
      "output_tokens_details": {
        "reasoning_tokens": 3584
      },
      "total_tokens": 4958
    }
  }

Then I call the `invoke` method again with the following messages:

[
  SystemMessage { ... as before ... },
  HumanMessage { ... as before ...  },
  AIMessage { ... as before ... },
  ToolMessage {
    "content": "{\"success\":true,\"message\":\"*****\"}",
    "additional_kwargs": {},
    "response_metadata": {},
    "tool_call_id": "call_vfJ1fZdxk8IhTRCjYnf0b3le"
  }
]

Here I get the error:

BadRequestError: 400 Item 'rs_*****' of type 'reasoning' was provided without its required following item.

Please note that with gpt-4o the interaction works perfectly.

Hi @niccotnt

interesting issue… Have you tried setting useResponsesApi: false for GPT-5?

Anyway, maybe this would help:

function sanitizeAssistantForResponsesApi(ai: AIMessage): AIMessage {
  const additional = { ...(ai.additional_kwargs ?? {}) } as Record<string, unknown>;
  // 'reasoning' is output-only in the Responses API; never send it back in input
  if ("reasoning" in additional) delete additional.reasoning;
  return new AIMessage({
    content: ai.content,
    tool_calls: ai.tool_calls ?? [],
    additional_kwargs: additional,
  });
}

// ...

if ("tool_calls" in response && Array.isArray(response.tool_calls) && response.tool_calls.length > 0) {
  // ...

  // IMPORTANT: push a sanitized copy of the assistant message (no 'reasoning')
  messages.push(sanitizeAssistantForResponsesApi(response as AIMessage), ...toolResults);
}

Let me know whether or not this helps :slight_smile:

Hey @niccotnt, thanks for flagging this. Would you mind opening an issue on the langchainjs repo?

@pawel-twardziak’s solution does resolve the error, but dropping the reasoning content is definitely not desired. The sequencing of reasoning items matters, as the error says; however, I think it might be behavior exclusive to gpt-5 that reasoning items can exist for tool calls (which isn’t something I’ve seen before).

1 Like

Thank you @hntrl and @pawel-twardziak for your reply.

If I remove the reasoning item from `additional_kwargs`, it works, but as the interaction continues I eventually receive:

BadRequestError: 400 Item 'msg_0bf77cc637a35af90068dbb4da9dfc8195a981bc6cb8fc4361' of type 'message' was provided without its required 'reasoning' item: 'rs_0bf77cc637a35af90068dbb4c69b8c8195929f8311e89903fb'.
    at Function.generate (C:\****\node-projects\bookmaster\node_modules\openai\src\core\error.ts:72:14)        
    at OpenAI.makeStatusError (C:\****\node-projects\bookmaster\node_modules\openai\src\client.ts:445:28)      
    at OpenAI.makeRequest (C:\****\node-projects\bookmaster\node_modules\openai\src\client.ts:668:24)
    at processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async C:\****\node-projects\bookmaster\node_modules\@langchain\openai\dist\chat_models.cjs:1202:24      
    at async RetryOperation._fn (C:\****\node-projects\bookmaster\node_modules\p-retry\index.js:50:12) {       
  status: 400,
  headers: Headers { ... }

I think reasoning items should be expected for every message: they can occur anywhere, with or without tool_calls, and they must be handled correctly.
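For what it’s worth, at the raw OpenAI SDK level (bypassing LangChain), the documented route for stateless multi-turn use of reasoning models is to request the reasoning in an echo-able encrypted form via `include: ["reasoning.encrypted_content"]` with `store: false`, then pass the reasoning items back verbatim on the next turn. Whether `@langchain/openai` exposes or forwards these options is something I haven’t verified; this is only a request-shape sketch:

```typescript
// Hedged sketch of the raw Responses API request parameters (not ChatOpenAI).
// 'include' and 'store' are from OpenAI's docs; LangChain support is unverified.
const requestParams = {
	model: "gpt-5-2025-08-07",
	store: false, // no server-side conversation state
	include: ["reasoning.encrypted_content"], // reasoning items come back in an echo-able form
	input: [
		// ...prior turn items, with each reasoning item (including its
		// encrypted_content) passed back unchanged, directly before the
		// function_call it belongs to...
	],
};
```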

I will open an issue as suggested by @hntrl .

Thanks

Bug opened here

1 Like

Hi @hntrl and @niccotnt

I’ve raised a PR: there was an inconsistency in how the reasoning item was paired with its following message item.