I’m using create_agent from LangChain with gpt-4.1-2025-04-14 (via ChatOpenAI) and ToolStrategy for structured output. I’ve noticed that every AIMessage that contains tool_calls always has content='' - there is no natural language reasoning or explanation alongside the tool call. Is this expected behaviour for gpt-4.1, or is there a way to get the model to include a brief explanation in the content field even when it is making a tool call? Would a system prompt instruction be the recommended approach, or is there a framework-level way to handle this?
Yes, this is expected behavior, and it operates at two distinct levels: the OpenAI API itself and the LangChain adapter layer. Understanding both helps you work around it effectively.
### OpenAI API returns null content with tool calls
When a Chat Completions model like gpt-4.1 decides to call a tool, it sets the content field of the assistant message to null in the raw API response. The OpenAI API spec explicitly allows this.
This is a model-level decision, not a bug. The model is saying: “I have no natural language output for this turn; I’m calling a tool instead.” This is the standard behavior across OpenAI chat-completion models (GPT-4o, GPT-4.1, GPT-4-turbo, etc.) when they decide to use tools.
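For illustration, here is roughly what the raw assistant message looks like in that case (a sketch following the Chat Completions payload shape; the tool name and arguments are made up):

```python
# Sketch of a raw Chat Completions assistant message that calls a tool.
# Field names follow the OpenAI API; the values are illustrative only.
raw_assistant_message = {
    "role": "assistant",
    "content": None,  # no natural-language text for this turn
    "tool_calls": [
        {
            "id": "call_abc123",
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "arguments": '{"city": "Berlin"}',
            },
        }
    ],
}

assert raw_assistant_message["content"] is None
```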
### LangChain converts null → "" (empty string)
In langchain-openai, the adapter function _convert_dict_to_message converts the raw OpenAI API dictionary into a LangChain AIMessage. Here is the relevant code from langchain_openai/chat_models/base.py:
```python
if role == "assistant":
    content = _dict.get("content", "") or ""
```
Source: langchain-openai/langchain_openai/chat_models/base.py
So what you observe - AIMessage(content='', tool_calls=[...]) - is the correct and intentional representation of an OpenAI tool-calling response in LangChain.
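You can reproduce that normalization with a plain dict - a sketch of the adapter idiom, not LangChain's actual code path:

```python
def normalize_content(raw: dict) -> str:
    # Mirrors the `_dict.get("content", "") or ""` idiom: None,
    # a missing key, and "" all collapse to the empty string.
    return raw.get("content", "") or ""

assert normalize_content({"content": None}) == ""
assert normalize_content({}) == ""
assert normalize_content({"content": "hello"}) == "hello"
```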
### Getting reasoning text alongside tool calls
There are a few approaches:
- Prompt engineering (simplest, but unreliable)
You can instruct the model to produce a short explanation before calling tools. For example, add to your system prompt:
```
Before making any tool call, always write a short sentence explaining what you're about to do and why.
```
However, this is not guaranteed to work - GPT-4.1 may still omit the text depending on the task. Even if the model does produce text, it will appear in a separate AIMessage before the tool-calling AIMessage in multi-turn flows (since LangChain/LangGraph may split turns differently).
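As a quick sanity check, you can scan the message history and flag tool-calling turns that came back without any text, i.e. turns where the prompt instruction was ignored (a plain-dict sketch; in practice you would inspect AIMessage objects):

```python
def tool_calls_without_text(messages: list[dict]) -> list[dict]:
    # Return tool-calling assistant turns whose content is empty.
    return [
        m for m in messages
        if m.get("tool_calls") and not (m.get("content") or "").strip()
    ]

# Hypothetical history: one silent tool call, one with an explanation.
history = [
    {"role": "assistant", "content": "",
     "tool_calls": [{"name": "search"}]},
    {"role": "assistant", "content": "Looking that up now.",
     "tool_calls": [{"name": "search"}]},
]
assert len(tool_calls_without_text(history)) == 1
```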
- Switch to OpenAI’s Responses API with a reasoning model
If you need structured reasoning, use one of OpenAI’s reasoning models (o3, o1, etc.) via the Responses API. These models expose internal reasoning summaries via a dedicated reasoning field.
With langchain-openai and the Responses API:
```python
from langchain.chat_models import init_chat_model

model = init_chat_model(
    "openai:o3",
    use_responses_api=True,
    reasoning={"effort": "medium", "summary": "auto"},
    output_version="responses/v1",
)
```
As of LangChain 1.0.0, "responses/v1" is the default output version.
When output_version="responses/v1" is set, reasoning blocks appear as content blocks of type "reasoning" inside AIMessage.content:
```python
# AIMessage.content may look like:
[
    {"type": "reasoning", "summary": [{"type": "summary_text", "text": "I need to..."}]},
    {"type": "tool_call", "name": "my_tool", "args": {...}, "id": "call_abc"},
]
```
You can access reasoning via message.content_blocks:
```python
for block in message.content_blocks:
    if block["type"] == "reasoning":
        print(block["summary"])
```
This is documented in the BaseChatOpenAI class and the reasoning parameter docstring in langchain_openai.
Reasoning models with the Responses API are the only OpenAI path to getting reasoning content alongside tool calls. The Chat Completions API (used by gpt-4.1) does not expose reasoning tokens.
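Given content in the responses/v1 shape shown above, pulling the reasoning summaries out alongside the tool calls is straightforward (a sketch over plain dicts, not a live API response):

```python
# Content blocks in the responses/v1 shape; values are illustrative.
content = [
    {"type": "reasoning",
     "summary": [{"type": "summary_text", "text": "I need to call my_tool first."}]},
    {"type": "tool_call", "name": "my_tool", "args": {"x": 1}, "id": "call_abc"},
]

# Collect every summary_text string from reasoning blocks.
reasoning_texts = [
    part["text"]
    for block in content if block["type"] == "reasoning"
    for part in block["summary"] if part["type"] == "summary_text"
]
tool_names = [b["name"] for b in content if b["type"] == "tool_call"]

assert reasoning_texts == ["I need to call my_tool first."]
assert tool_names == ["my_tool"]
```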
- Use Claude (Anthropic) models
Claude models (e.g., claude-sonnet-4-6, the default for Deep Agents) do reliably produce text explanations alongside tool calls. The LangChain Anthropic integration supports this natively - AIMessage.content will be a list of blocks that may include both {"type": "text", "text": "..."} and {"type": "tool_use", ...} blocks.
```python
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model_name="claude-sonnet-4-6")
```
This is why create_deep_agent defaults to Anthropic - see deepagents/graph.py:
```python
def get_default_model() -> ChatAnthropic:
    return ChatAnthropic(model_name="claude-sonnet-4-6")
```
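With Anthropic, a single assistant message can mix text and tool_use blocks, so splitting explanation from tool calls looks like this (a sketch over the raw block shapes, not a live API call):

```python
# Claude-style mixed content: explanation text plus a tool call.
# The tool name and input are hypothetical.
claude_content = [
    {"type": "text", "text": "I'll look up the weather for you."},
    {"type": "tool_use", "id": "toolu_01", "name": "get_weather",
     "input": {"city": "Berlin"}},
]

explanations = [b["text"] for b in claude_content if b["type"] == "text"]
tool_uses = [b for b in claude_content if b["type"] == "tool_use"]

assert explanations == ["I'll look up the weather for you."]
assert tool_uses[0]["name"] == "get_weather"
```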
- Inspecting what ToolStrategy does with the AIMessage
ToolStrategy uses tool calls as a structured output mechanism - it adds an artificial tool to the model and parses the tool arguments as the agent’s response. The AIMessage.content field is not involved in this process; only AIMessage.tool_calls is inspected. This means an empty content does not break ToolStrategy - it is working as designed.
From langchain_v1/langchain/agents/structured_output.py, you can see that ToolStrategy parses tool arguments directly:
```python
structured_response = structured_tool_binding.parse(tool_call["args"])
```
And from langchain_v1/langchain/agents/factory.py, the framework only checks output.tool_calls, not output.content, when deciding whether to run tools.
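Conceptually, that parsing step boils down to validating the tool-call args against your output schema. A minimal stand-in using a dataclass (the WeatherReport schema is hypothetical; this is not LangChain's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class WeatherReport:
    city: str
    temperature_c: float

def parse_structured_response(tool_call: dict) -> WeatherReport:
    # Stand-in for structured_tool_binding.parse(tool_call["args"]):
    # the args of the artificial tool ARE the structured output, and
    # the message's content field never enters the picture.
    return WeatherReport(**tool_call["args"])

tool_call = {"name": "WeatherReport",
             "args": {"city": "Berlin", "temperature_c": 21.5}}
result = parse_structured_response(tool_call)
assert result.city == "Berlin"
```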
Then what is the solution I need to look into?
@pawel-twardziak
sorry, I unintentionally pressed ENTER before I finished my message
now it’s complete - see again @chakka-guna-sekhar
Hey @pawel-twardziak, thanks for the solution. I tried it with model gpt-5.2 with low reasoning, where it worked very well.