We currently call various LLMs through the chain langchain-openai → OpenRouter → various models.
The newly released Gemini 3 Pro requires passing back the thoughtSignatures parameter on subsequent requests during tool calls, but this parameter is only supported in langchain-google-genai, not in langchain-openai.
Is there any way to work around or solve this issue? Thank you.
I’m having the same issue when using @langchain/google. It was handled in @langchain/vertex, so this is something that was known and resolved at one point. Any chance the new TypeScript langchain package can be updated to handle thought signatures?
Hey @Godrules500, could you elaborate on what you’re seeing regarding thought signatures not being supported in @langchain/google? We do have support for this in that new package.
One thing this seems to highlight is that reasoning metadata (like Gemini’s thought-signatures) does not yet have a consistent place in the typical agent runtime stack.
Most frameworks pass:

- model inputs
- tool calls
- outputs

but structured reasoning artifacts often get lost between layers (model → router → framework).
I suspect we may eventually need a thin “execution metadata” layer in agent stacks that can carry these artifacts consistently across model providers, routers, and frameworks.
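To make the idea concrete, here is a minimal sketch of what such a layer could look like: each turn pairs the portable message with an opaque bag of provider-specific artifacts that intermediaries pass through unchanged. Everything here (the `ExecutionEnvelope` name, the namespaced artifact keys) is a hypothetical design, not an existing API.

```python
# Hypothetical "execution metadata" layer: pair each message with opaque,
# provider-namespaced artifacts so routers and frameworks can carry them
# through instead of silently dropping them.
from dataclasses import dataclass, field
from typing import Any


@dataclass
class ExecutionEnvelope:
    message: dict[str, Any]                       # the portable chat message
    artifacts: dict[str, Any] = field(default_factory=dict)  # opaque extras


def collect_artifacts(turns: list[ExecutionEnvelope]) -> dict[str, Any]:
    """Gather all artifacts from prior turns so they can be replayed
    on the next request to the same provider."""
    merged: dict[str, Any] = {}
    for turn in turns:
        merged.update(turn.artifacts)
    return merged


# Example: a Gemini-style thought signature rides alongside the assistant
# turn, and a tool turn carries nothing extra.
turns = [
    ExecutionEnvelope(
        {"role": "assistant", "tool_calls": [{"id": "call_1"}]},
        {"gemini.thought_signature": "sig-xyz"},
    ),
    ExecutionEnvelope({"role": "tool", "content": "result"}),
]
replay = collect_artifacts(turns)
```

The key property is that the middle layers treat `artifacts` as opaque: they never need to understand a given provider's reasoning metadata, only to not lose it.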
Curious if others have run into similar issues when integrating reasoning metadata across different model backends.