How to extract GPT-5 reasoning summaries with @langchain/openai?

I have been trying to extract reasoning summaries from GPT-5 using @langchain/openai. I saw that the Python LangChain integration has this working (PR #30909), but I can’t figure out whether the JS version supports it at all.

I’m on @langchain/openai@0.6.14 (latest) and have tried every config I can think of, but the reasoning content just isn’t showing up anywhere in the response.

Here’s what I have tried:

Config 1 - Using model_kwargs:

const model = new ChatOpenAI({
  model: "gpt-5",
  streaming: true,
  useResponsesApi: true,
  model_kwargs: {
    reasoning: {
      effort: "medium",
      summary: "auto"
    }
  }
}).bindTools(tools);

Config 2 - Top-level reasoning:

const model = new ChatOpenAI({
  model: "gpt-5",
  useResponsesApi: true,
  reasoning: {
    effort: "medium",
    summary: "auto"
  }
});

Both fail to extract reasoning content.

What I’m expecting to see

Based on how the Python version works and what the direct OpenAI SDK returns, I thought I’d see reasoning in response_metadata.output or additional_kwargs.reasoning.summary, something like:

{
  "output": [
    {
      "id": "rs_...",
      "type": "reasoning",
      "summary": [
        {
          "type": "summary_text",
          "text": "**Calculating a simple sum**\n\nI can compute 123 + 456..."
        }
      ]
    }
  ]
}

What I’m actually getting

{
  "additional_kwargs": {},  // Empty
  "response_metadata": {
    "id": "resp_...",
    "model_name": "gpt-5-2025-08-07",
    "model": "gpt-5-2025-08-07"
    // No "output" array, no reasoning
  }
}

The weird thing is that reasoning tokens ARE being used (I can see reasoning_tokens: 192 in the usage stats), but the actual reasoning content is nowhere to be found in the LangChain response.

Direct OpenAI SDK works fine

Just to confirm I’m not crazy, I tested with the OpenAI SDK directly and it works perfectly:

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await openai.responses.create({
  model: "gpt-5",
  input: [{ role: "user", content: "What is 123 plus 456?" }],
  reasoning: { effort: "medium", summary: "auto" }
});

// Works! Reasoning is in response.output
for (const item of response.output) {
  if (item.type === "reasoning") {
    console.log(item.summary[0].text);
    // "**Calculating a simple sum**\n\nI can compute 123 + 456..."
  }
}

So the OpenAI API definitely returns reasoning content when you ask for it.

My questions

  1. Does @langchain/openai support this at all? I noticed Python has it (PR #30909) but can’t find docs for the JS version.

  2. If it does work, what’s the correct config? Am I missing something obvious?

  3. If it doesn’t work yet, is it on the roadmap? I’d be happy to help with a PR if needed.

For context, I’m building an agent and trying to understand why it makes certain decisions. The reasoning content would be super useful for debugging and improving the system prompts.

Environment

  • @langchain/openai@0.6.14 (latest)

  • Node.js v23.7.0

  • Using GPT-5 model

Thanks for any help!

hi @Nikfury

have you tried this:

const llm = new ChatOpenAI({ model: "gpt-5", useResponsesApi: true });
const msg = await llm.invoke("Is sport necessary for healthy life?", {
  reasoning: {
    effort: "high",
    summary: "auto" /* summary config if desired */,
  },
});
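If the invoke-time config goes through, the summary should then be readable from the returned message. A minimal sketch (assuming, as discussed further down this thread, that it surfaces under `additional_kwargs.reasoning.summary`; the types are simplified stand-ins for the real AIMessage class):

```typescript
// Simplified stand-in for the AIMessage shape; the real class comes from
// @langchain/core. Where the summary lands is an assumption based on how
// the Responses API output is surfaced.
type SummaryPart = { type: string; text: string };
type MessageLike = {
  additional_kwargs?: { reasoning?: { summary?: SummaryPart[] } };
};

// Join all summary parts into one readable string.
function extractReasoningSummary(msg: MessageLike): string {
  return (
    msg.additional_kwargs?.reasoning?.summary
      ?.map((part) => part.text)
      .join("\n") ?? ""
  );
}

// Mocked message to show the expected shape:
const mockMsg: MessageLike = {
  additional_kwargs: {
    reasoning: {
      summary: [{ type: "summary_text", text: "**Calculating a simple sum**" }],
    },
  },
};

console.log(extractReasoningSummary(mockMsg)); // "**Calculating a simple sum**"
```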

I have tried it and I’ve been getting this error:

BadRequestError: 400 Your organization must be verified to generate reasoning summaries. Please go to: https://platform.openai.com/settings/organization/general and click on Verify Organization. If you just verified, it can take up to 15 minutes for access to propagate.

Since I am not verified :wink:

Yes, my user is verified. I get reasoning summaries when I make requests using the Responses API or Vercel’s AI SDK.

Hey @pawel-twardziak, thanks for the quick response. I have my OpenAI account verified, and I’m able to extract reasoning summaries when making requests to GPT-5 using OpenAI’s official Responses API or Vercel’s AI SDK.

hi @Nikfury

ok, thanks for your reply. However, did passing the arguments at the invoke call instead of in the model constructor help? I can’t verify it now.

This worked perfectly, thanks @pawel-twardziak :folded_hands:

Hi @Nikfury

Happy to help :orange_heart:
Could you mark the right answer as solving the issue? Then the thread will be marked as “solved”. That would also be a clear sign for the others searching for a solution. Thanks in advance :slight_smile:

Done. This really solved an important blocker for us.

@pawel-twardziak Thanks again!

Do you have a “buy me a coffee” page? I would love to contribute for the help.

Hi @Nikfury

LangChain and LangGraph have become my passion, so I’m always extremely happy whenever I can help :slight_smile: When someone appreciates it, I’m on cloud nine. :orange_heart:

Btw, I am addicted to coffee - more of it would kill me :smiley:

This post has been very helpful, but is there a way to pass this to a chain? The invoke operation errors, saying it does not expect reasoning as a key for RunnableConfig.

hi @AMuresan

any code examples? :slight_smile:

Hi, sorry about that, I should have been more specific:

# This is the function that defines the LLM to use

private getLLM(): ChatOpenAIWithDefaults {
  return new ChatOpenAIWithDefaults({
    model: "gpt-5",
    apiKey: this.openaiApiKey,
    reasoning: {
      effort: "high",
      summary: "auto"
    },
  });
}

# This is the initializer function that returns a chain of the prompt, structured LLM, and response handler

initialize() {
  const llm = this.getLLM();
  const prompt = this.getPrompt();
  const output = this.getStructuredOutput();
  const structuredLlm = llm.withStructuredOutput(output, {
    name: "func_name_here",
    method: "jsonSchema",
    strict: true,
  });

  let chain = prompt.pipe(structuredLlm)

  if (debug) {
    chain = chain.withConfig({
      callbacks: [consoleLoggingCallback()],
    });
  }
  
  chain = chain.pipe(async (response) => {
    const formatted_response = JSON.parse(
      response.filter((item) => item["type"] === "text")[0]["text"],
    ) as Record<string, boolean>;
    return formatted_response;
  });
  return chain.withRetry({ stopAfterAttempt: LANGCHAIN_RUNNABLE_RETRIES });
}

# This is a snippet of code that shows how we use the agent

...some code here...
  const agent = new Agent(apiKey, state, inquiryEvent);
  const chain = agent.initialize();
  const input = agent.prepareInput();
  const result = await chain.invoke(input);
... some more code here...

with this configuration the returned response looks something like:

[
  {
    "type": "text",
    "text": "{\"<<key_here>>\":true}",
    "annotations": []
  }
]

Note that this format only appeared when I added the summary: "auto" argument to that config dictionary; before, the output was just:

{<<key_here>>:true}

Both outputs are fine in that I can parse them and read the result correctly, but I still cannot see the reasoning summary.

When I tried to move the reasoning argument from the getLLM() function to the invoke step, I got the following error:

hi @AMuresan

try this please

// 1) Ensure you get the raw AIMessage back
const structuredLlm = llm.withStructuredOutput(output, {
  name: "func_name_here",
  method: "jsonSchema",
  strict: true,
  includeRaw: true,
});

// 2) Extract parsed result and reasoning summary
chain = chain.pipe(async ({ raw, parsed }) => {
  const reasoning = raw?.additional_kwargs?.reasoning; // summary array lives here
  const reasoningText =
    reasoning?.summary?.map((s: { text: string }) => s.text).join("") ?? "";

  // parsed already matches your schema; keep your original shape if desired
  const formatted_response = parsed as Record<string, boolean>;
  return { formatted_response, reasoning, reasoningText };
});

Print out the raw value and look for the reasoning.

If you can’t enable includeRaw, you’d need to pass the AIMessage through instead of mapping to the text item; the reasoning is not inside that text item - it’s on AIMessage.additional_kwargs.reasoning.
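For illustration, a hypothetical sketch of that last point (the types are simplified stand-ins for AIMessage, and key_here is a placeholder key): map the whole message instead of only its text item, so the reasoning survives alongside the parsed payload:

```typescript
// Simplified stand-ins for the real AIMessage shape from @langchain/core.
type SummaryPart = { type: string; text: string };
type AIMessageLike = {
  content: { type: string; text?: string }[];
  additional_kwargs?: { reasoning?: { summary?: SummaryPart[] } };
};

// A chain step that keeps the whole message, so additional_kwargs.reasoning
// is still available after parsing the text item.
function handleMessage(message: AIMessageLike) {
  const textItem = message.content.find((item) => item.type === "text");
  const parsed = JSON.parse(textItem?.text ?? "{}") as Record<string, boolean>;
  const reasoning = message.additional_kwargs?.reasoning;
  return { parsed, reasoning };
}

// Mocked message to illustrate the shapes involved:
const msg: AIMessageLike = {
  content: [{ type: "text", text: '{"key_here":true}' }],
  additional_kwargs: {
    reasoning: { summary: [{ type: "summary_text", text: "Reasoning..." }] },
  },
};

console.log(handleMessage(msg).parsed.key_here); // true
```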

Thank you so much for the help. Sadly, it looks like the includeRaw argument is hard-coded to false:

Do you have an example of how to pass the AIMessage through?

hi @AMuresan

in your case, when using chains, follow this:

import { ChatOpenAI } from "@langchain/openai";
import * as dotenv from "dotenv";
import { PromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";

dotenv.config();

const Result = z
  .object({
    result: z.string().describe("Healthy lifestyle"),
  })
  .describe("Healthy lifestyle coach");

class MyLLM {
  // private LLM: ReturnType<(ChatOpenAI['withConfig'])>;
  private LLM: ChatOpenAI;

  private getLLM() {
    if (!this.LLM) {
      this.LLM = new ChatOpenAI({
        model: "gpt-5",
        useResponsesApi: true,
        /**
         * this won't work because the reasoning config is not set on the LLM instance
         *
         * @see BaseChatOpenAI
         *
         * this.reasoning =
         *   fields?.reasoning ?? fields?.reasoningEffort
         *     ? { effort: fields.reasoningEffort }
         *     : undefined;
         */
        // reasoning: { effort: "low", summary: "detailed" },
        // reasoningEffort: "medium",
      });
    }
    return this.LLM;
  }

  initialize() {
    const llm = this.getLLM();
    const prompt = PromptTemplate.fromTemplate(
      "Is sport necessary for healthy life?",
    );
    const structuredLlm = llm.withStructuredOutput(Result, {
      name: "result",
      method: "jsonSchema",
      strict: true,
      includeRaw: true,
    });

    let chain = prompt.pipe(structuredLlm);

    // chain = chain.pipe(async (response) => {
    //   const formatted_response = JSON.parse(
    //     response.filter((item) => item["type"] === "text")[0]["text"],
    //   ) as Record<string, boolean>;
    //   return formatted_response;
    // });

    return chain.withRetry({ stopAfterAttempt: 5 }).withConfig({
      // @ts-ignore
      reasoning: {
        effort: "high",
        summary: "auto" /* summary config if desired */,
      },
    });
  }
}

(async () => {
  const myLLM = new MyLLM();
  const chain = myLLM.initialize();
  const answer = await chain.invoke(
    {},
    {
      /**
       * Or pass reasoning config as a parameter to the invoke method
       */
      // @ts-ignore
      // reasoning: {
      //   effort: "high",
      //   summary: "auto" /* summary config if desired */,
      // },
    },
  );

  console.log(JSON.stringify(answer, null, 2));
})();

Which means either:

  • call withConfig and use // @ts-ignore, or
  • pass the configuration to the invoke call and use // @ts-ignore

It is a known issue and will be fixed in the next release.

In version 0.6.16:

        this.reasoning =
            fields?.reasoning ?? fields?.reasoningEffort
                ? { effort: fields.reasoningEffort }
                : undefined;

In the next version it will be fixed:

this.reasoning = fields?.reasoning;
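The effect of the 0.6.16 expression can be seen in a small standalone demo (hypothetical field values): `a ?? b ? x : y` parses as `(a ?? b) ? x : y`, so a user-supplied reasoning object makes the condition truthy and the whole assignment falls into the ternary branch, discarding the summary config:

```typescript
// Hypothetical constructor fields, mirroring the shape used by ChatOpenAI.
const fields: {
  reasoning?: { effort?: string; summary?: string };
  reasoningEffort?: string;
} = {
  reasoning: { effort: "high", summary: "auto" },
};

// The 0.6.16 behavior: `(reasoning ?? reasoningEffort)` is truthy,
// so the result is `{ effort: fields.reasoningEffort }`, i.e.
// `{ effort: undefined }` - the summary config is silently dropped.
const buggy =
  fields?.reasoning ?? fields?.reasoningEffort
    ? { effort: fields.reasoningEffort }
    : undefined;

// The fixed behavior: the reasoning config is passed through as-is.
const fixed = fields?.reasoning;

console.log(buggy); // { effort: undefined }
console.log(fixed); // { effort: 'high', summary: 'auto' }
```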