Does langchain-openai support passing Gemini 3’s thought-signature metadata via OpenRouter?

We currently call various LLMs through the chain langchain-openai → OpenRouter → model.
The newly released Gemini 3 Pro requires the thoughtSignatures parameter to be passed back during tool calls, but this parameter is only supported in langchain-google-genai, not in langchain-openai.

Is there any way to work around or solve this issue? Thank you.
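In case it helps anyone else, here is the stopgap we are experimenting with: carrying the signature through manually in plain dicts instead of relying on the framework's message converters. This is only a sketch; the `reasoning_details` and `thought_signature` field names are assumptions on my part, so inspect your raw OpenRouter response to see where Gemini 3 actually puts the signature.

```python
def preserve_thought_signature(raw_assistant_msg: dict, history: list[dict]) -> list[dict]:
    """Append the assistant turn to the conversation history, keeping any
    provider-specific reasoning fields so they are echoed back on the next call.

    NOTE: "reasoning_details" / "thought_signature" are assumed field names;
    check the raw OpenRouter response for the real location of the signature.
    """
    msg = {
        "role": "assistant",
        "content": raw_assistant_msg.get("content"),
    }
    if "tool_calls" in raw_assistant_msg:
        msg["tool_calls"] = raw_assistant_msg["tool_calls"]
    # Copy opaque reasoning metadata through unchanged.
    for key in ("reasoning_details", "thought_signature"):
        if key in raw_assistant_msg:
            msg[key] = raw_assistant_msg[key]
    return history + [msg]
```

The idea is to build the follow-up request body yourself (e.g. via the raw OpenAI client) so nothing between you and OpenRouter strips the opaque fields.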

I have the same issue. It would be great to have support for this, or at least a workaround.

Yeah, I have the same issue too. I would appreciate any help. Thanks!

Not currently possible. See this issue, which tracks a dedicated OpenRouter integration: OpenRouter · Issue #34328 · langchain-ai/langchain · GitHub

I’m having the same issue when using langchain/Google. It was handled in langchain/vertex, so this is something that was known and resolved at one point. Any chance the new TypeScript langchain package can be updated to handle thought signatures?

Hey @Godrules500, could you elaborate on what you’re seeing regarding thought signatures not being supported in @langchain/google? We do have support for this in that new package.

Will also call out that we have a first party openrouter integration now: https://www.npmjs.com/package/@langchain/openrouter

Interesting question.

One thing this seems to highlight is that reasoning metadata (like Gemini’s thought-signatures) does not yet have a consistent place in the typical agent runtime stack.

Most frameworks pass:

• model inputs
• tool calls
• outputs

but structured reasoning artifacts often get lost between layers (model → router → framework).

In practice this makes it difficult to preserve:

• reasoning signatures
• execution traces
• decision context

across different runtime components.

I suspect we may eventually need a thin “execution metadata” layer in agent stacks that can carry these artifacts consistently across model providers, routers, and frameworks.

Curious if others have run into similar issues when integrating reasoning metadata across different model backends.