When a tool call fails in `ToolNode`, the error `ToolMessage` contains only the exception text. The model never sees its own output metadata (`response_metadata`): stop reason, token counts, content filter results, and so on.
This means the model has no way to diagnose why its tool call was malformed. It just sees a cryptic error like `content: Field required` and retries the same thing.
Real example: an autonomous agent (using deepagents) called `write_file` with large content. The model hit `max_tokens=16384`, which truncated the tool-call JSON. The resulting validation error gave no hint about truncation, and the agent retried 249 times before we killed the run.
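To make the failure mode concrete, here's a small self-contained illustration (not the actual agent code; the file path and sizes are made up) of how hitting `max_tokens` mid-output turns into an opaque parse error:

```python
import json

# Hypothetical illustration: the model's tool-call JSON gets cut off
# mid-string when it hits max_tokens, so argument parsing fails with
# an error that says nothing about truncation.
full_args = '{"file_path": "notes.md", "content": "' + "x" * 100 + '"}'
truncated = full_args[:60]  # simulate max_tokens cutting the output short

try:
    json.loads(truncated)
except json.JSONDecodeError as e:
    # This parse/validation error is all the model ever sees — no hint
    # that its own output was truncated by the token limit.
    print(f"Error: {e}")
```

With `stop_reason: max_tokens` attached to that error, the model could instead conclude "I should write less content per call."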
The `AIMessage.response_metadata` already contains `stop_reason`, token counts, etc. It's just never surfaced in the error the model sees.
I filed langchain-ai/langgraph#7138 (*ToolNode: surface model output metadata in tool error messages to enable self-correction*) and have a working PR, langchain-ai/langgraph#7139 (*feat(prebuilt): surface model output metadata in tool error messages*), which is ~30 lines in `tool_node.py`. The fix is fully generalized: on any tool error, if `response_metadata` exists, append it to the error message. No hardcoded provider logic, no new config, and it's backwards compatible.
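The shape of the change can be sketched roughly like this (a minimal sketch, not the PR's actual code; `format_tool_error` and its signature are illustrative, not a real LangGraph API):

```python
import json

def format_tool_error(exc: Exception, ai_message) -> str:
    """Hypothetical sketch of the idea: append the calling AIMessage's
    response_metadata to the error text returned for a failed tool call."""
    error_text = f"Error: {exc!r}\n Please fix your mistakes."
    metadata = getattr(ai_message, "response_metadata", None)
    if metadata:
        # e.g. {"stop_reason": "max_tokens", ...} tells the model its
        # tool-call JSON was truncated rather than semantically wrong.
        error_text += f"\nModel output metadata: {json.dumps(metadata, default=str)}"
    return error_text
```

If there's no metadata, the message is unchanged, which is what keeps it backwards compatible.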
Would love feedback from @sydney-runkle @wfh on whether this is the right approach before the PR is reviewed. Is ToolNode the right layer for this, or should it live elsewhere?