I have been trying to extract reasoning summaries from GPT-5 using @langchain/openai. I saw that Python LangChain has this working (PR #30909), but I can’t figure out whether the JS version supports it at all.
I’m on @langchain/openai@0.6.14 (latest) and have tried every config I can think of, but the reasoning content just isn’t showing up anywhere in the response.
Here’s what I’ve tried:
Config 1 - Using model_kwargs:

```ts
const model = new ChatOpenAI({
  model: "gpt-5",
  streaming: true,
  useResponsesApi: true,
  model_kwargs: {
    reasoning: {
      effort: "medium",
      summary: "auto"
    }
  }
}).bindTools(tools);
```
Config 2 - Top-level reasoning:

```ts
const model = new ChatOpenAI({
  model: "gpt-5",
  useResponsesApi: true,
  reasoning: {
    effort: "medium",
    summary: "auto"
  }
});
```
Both fail to extract reasoning content.
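For completeness, here is roughly how I’m invoking the model and checking the result (a simplified version of Config 2; the real prompt and tools come from my agent):

```ts
import { ChatOpenAI } from "@langchain/openai";

// Same setup as Config 2 above, minus the agent plumbing
const model = new ChatOpenAI({
  model: "gpt-5",
  useResponsesApi: true,
  reasoning: { effort: "medium", summary: "auto" }
});

const result = await model.invoke("What is 123 plus 456?");

// Everywhere I'd expect a reasoning summary to show up is empty:
console.log(result.additional_kwargs);          // {}
console.log(result.response_metadata.output);   // undefined
console.log(result.content);                    // just the final answer, no reasoning
```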
What I’m expecting to see
Based on how the Python version works and what the direct OpenAI SDK returns, I thought I’d see reasoning in response_metadata.output or additional_kwargs.reasoning.summary, something like:
```json
{
  "output": [
    {
      "id": "rs_...",
      "type": "reasoning",
      "summary": [
        {
          "type": "summary_text",
          "text": "**Calculating a simple sum**\n\nI can compute 123 + 456..."
        }
      ]
    }
  ]
}
```
What I’m actually getting
```jsonc
{
  "additional_kwargs": {},   // Empty
  "response_metadata": {
    "id": "resp_...",
    "model_name": "gpt-5-2025-08-07",
    "model": "gpt-5-2025-08-07"
    // No "output" array, no reasoning
  }
}
```
The weird thing is that reasoning tokens ARE being used (I can see reasoning_tokens: 192 in the usage stats), but the actual reasoning content is nowhere to be found in the LangChain response.
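For reference, the token count I’m referring to is what I see on the returned message’s usage metadata (I’m reading the field names off a dumped message, so treat the exact path as my observation rather than documented behaviour):

```ts
// Reasoning tokens are being spent, even though no summary comes back anywhere
console.log(result.usage_metadata?.output_token_details);
// -> { reasoning: 192 }
```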
Direct OpenAI SDK works fine
Just to confirm I’m not crazy, I tested with the OpenAI SDK directly and it works perfectly:
```ts
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await openai.responses.create({
  model: "gpt-5",
  input: [{ role: "user", content: "What is 123 plus 456?" }],
  reasoning: { effort: "medium", summary: "auto" }
});

// Works! The reasoning item is in response.output
for (const item of response.output) {
  if (item.type === "reasoning") {
    console.log(item.summary[0].text);
    // "**Calculating a simple sum**\n\nI can compute 123 + 456..."
  }
}
```
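Since my real setup uses streaming: true, I also care about the streaming path. With the raw SDK my plan is to pick the reasoning item off the final response.completed event rather than piecing together the summary deltas; an untested sketch based on my reading of the Responses API streaming events:

```ts
const stream = await openai.responses.create({
  model: "gpt-5",
  input: [{ role: "user", content: "What is 123 plus 456?" }],
  reasoning: { effort: "medium", summary: "auto" },
  stream: true
});

for await (const event of stream) {
  // The terminal event carries the full response, including any reasoning items
  if (event.type === "response.completed") {
    for (const item of event.response.output) {
      if (item.type === "reasoning") {
        console.log(item.summary?.[0]?.text);
      }
    }
  }
}
```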
So the OpenAI API definitely returns reasoning content when you ask for it.
My questions
- Does @langchain/openai support this at all? I noticed Python has it (PR #30909) but can’t find docs for the JS version.
- If it does work, what’s the correct config? Am I missing something obvious?
- If it doesn’t work yet, is it on the roadmap? I’d be happy to help with a PR if needed. (In the meantime I’m considering the raw-SDK workaround sketched right after this list.)
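The workaround I’m considering in the meantime (mentioned in the last question): grab the resp_... id that LangChain does expose in response_metadata and re-fetch the full response with the raw SDK. A sketch of that idea is below; the obvious caveat is that it only helps if the reasoning/summary parameters actually reach the API in the first place, which is exactly what I’m unsure about.

```ts
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// `result` is the AIMessage returned by the ChatOpenAI call above
const responseId = result.response_metadata.id; // "resp_..."

// Responses can be re-fetched by id (assuming storage wasn't disabled on the request)
const full = await openai.responses.retrieve(responseId);

for (const item of full.output) {
  if (item.type === "reasoning") {
    console.log(item.summary?.[0]?.text);
  }
}
```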
For context, I’m building an agent and trying to understand why it makes certain decisions. The reasoning content would be super useful for debugging and improving the system prompts.
Environment
- @langchain/openai@0.6.14 (latest)
- Node.js v23.7.0
- GPT-5 model
Related
- Python PR that added this: https://github.com/langchain-ai/langchain/pull/30909
- OpenAI Responses API docs: https://platform.openai.com/docs/api-reference/responses
Thanks for any help!

