@langchain/openai for AzureChatOpenAI isn't returning reasoning content or tokens

When using `import { AzureChatOpenAI } from '@langchain/openai';` with gpt-oss-120B (or even other GPT models), the reasoning content and reasoning tokens are not returned, either in the response or in LangSmith.

If I set `__includeRawResponse: true`, I can see that Azure is returning `reasoning_content`, but the LangChain `AIMessageChunk` content blocks don't pick it up and don't include it.
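For reference, here's a minimal sketch of how I'm reading `reasoning_content` out of a raw delta. The field names are assumptions based on what I observe in the raw Azure payload, not a documented LangChain API, and `extractReasoningContent` is a hypothetical helper written just for illustration:

```typescript
// Sketch: pull `reasoning_content` out of a raw Azure chat-completion
// delta. The shape below matches what I observe with
// `__includeRawResponse: true`; it is an observation, not a contract.
interface RawDelta {
  choices?: Array<{
    delta?: {
      content?: string | null;
      reasoning_content?: string | null; // present for gpt-oss models
    };
  }>;
}

function extractReasoningContent(raw: RawDelta): string | undefined {
  // Optional chaining guards against chunks with no choices/delta.
  return raw.choices?.[0]?.delta?.reasoning_content ?? undefined;
}

// Mocked raw chunk, so the sketch runs without an Azure connection:
const mockChunk: RawDelta = {
  choices: [{ delta: { content: null, reasoning_content: 'Thinking...' } }],
};
console.log(extractReasoningContent(mockChunk)); // 'Thinking...'
```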

Using `"@langchain/openai": "1.1.3"`.

Thanks!

Hi @hesamzkr

Could you share your code? I can debug what's going on inside the source code.

Hi, thank you. Yep, here's how I'm initializing the model:

import { AzureChatOpenAI } from '@langchain/openai';

const model = new AzureChatOpenAI({
  maxRetries: 2,
  azureOpenAIApiKey: azureCredentials.apiKey,
  azureOpenAIApiInstanceName: azureCredentials.apiInstanceName,
  azureOpenAIApiVersion: '2025-01-01-preview',
  streaming: true,
  metadata: {
    ls_provider: 'azure',
    ls_model_name: 'gpt-oss-120b'
  },
  reasoning: {
    effort: 'high'
  },
  model: 'gpt-oss-120b',
  temperature: 0,
  maxTokens: 1000,
  azureOpenAIApiDeploymentName: azureCredentials.apiDeploymentName
});

const response = await model.invoke(messages);
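As a stopgap I've been stitching the reasoning text back together from the raw chunks myself. A minimal sketch below, assuming the raw payload lands on `additional_kwargs.__raw_response` when `__includeRawResponse: true` is set (that's what I see in practice, but it's not a documented contract); the chunk shape here is a simplified mock, not the real `AIMessageChunk` type:

```typescript
// Workaround sketch: accumulate reasoning text across streamed chunks.
// Assumes each chunk carries the raw Azure payload on
// `additional_kwargs.__raw_response`, as observed with
// `__includeRawResponse: true` — field names are assumptions.
interface ChunkLike {
  additional_kwargs?: {
    __raw_response?: {
      choices?: Array<{ delta?: { reasoning_content?: string | null } }>;
    };
  };
}

function accumulateReasoning(chunks: ChunkLike[]): string {
  let reasoning = '';
  for (const chunk of chunks) {
    const piece =
      chunk.additional_kwargs?.__raw_response?.choices?.[0]?.delta
        ?.reasoning_content;
    if (piece) reasoning += piece;
  }
  return reasoning;
}

// In practice the chunks come from
// `for await (const chunk of await model.stream(messages))`;
// mocked here so the sketch runs without credentials:
const mockChunks: ChunkLike[] = [
  { additional_kwargs: { __raw_response: { choices: [{ delta: { reasoning_content: 'First, ' } }] } } },
  { additional_kwargs: { __raw_response: { choices: [{ delta: { reasoning_content: 'check the schema.' } }] } } },
  { additional_kwargs: {} }, // e.g. a final content-only chunk
];
console.log(accumulateReasoning(mockChunks)); // 'First, check the schema.'
```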

Hi @hesamzkr,

I’m running into the exact same issue you described — reasoning content and reasoning tokens are not being returned when using AzureChatOpenAI. In my case, this seems to happen only with OSS models (e.g. gpt-oss-120b) or models like Kimi-K2-Thinking, while other models behave as expected.

Have you been able to find a solution or found any updates on this?
I’d really appreciate it if you could share any findings or workarounds.

Thanks in advance!