I’m reading the docs for LangChain standard content blocks. The examples cover two providers, Anthropic and OpenAI.
Why can’t the OpenAI reasoning summary be extracted into a standard reasoning block? As it stands this is hard to use: even serializing the summary list as JSON would be better than dropping it. Right now, switching to standard content blocks means losing sight of any reasoning that was present. A rough workaround sketch follows the examples below.
Example from docs
Anthropic
import { AIMessage } from "@langchain/core/messages";

const message = new AIMessage({
  content: [
    {
      type: "thinking",
      thinking: "...",
      signature: "WaUjzkyp...",
    },
    {
      type: "text",
      text: "...",
      id: "msg_abc123",
    },
  ],
  response_metadata: { model_provider: "anthropic" },
});
console.log(message.contentBlocks);
output (it includes reasoning)
[
  {
    type: "reasoning",
    reasoning: "...",
    signature: "WaUjzkyp...",
  },
  {
    type: "text",
    id: "msg_abc123",
    text: "...",
  },
]
OpenAI
import { AIMessage } from "@langchain/core/messages";

const message = new AIMessage({
  content: [
    {
      type: "reasoning",
      id: "rs_abc123",
      summary: [
        { type: "summary_text", text: "summary 1" },
        { type: "summary_text", text: "summary 2" },
      ],
    },
    { type: "text", text: "..." },
  ],
  response_metadata: { model_provider: "openai" },
});
console.log(message.contentBlocks);
output (it doesn’t include reasoning)
[
  {
    type: "text",
    text: "...",
  },
]
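Workaround (for now)
The only way I’ve found to keep the reasoning is to pull it out of the raw OpenAI content myself and splice it in front of the standard blocks. This is just a minimal sketch: the helper name extractOpenAIReasoning is mine, and it assumes the raw reasoning block keeps its summary list on message.content exactly as in the example above.

import { AIMessage } from "@langchain/core/messages";

// Sketch of a workaround, not library API: recover the OpenAI reasoning
// summaries from the raw content and express them as standard reasoning
// blocks, since contentBlocks currently drops them.
function extractOpenAIReasoning(message: AIMessage) {
  const raw = Array.isArray(message.content) ? message.content : [];
  const reasoningBlocks = raw
    .filter((block: any) => block.type === "reasoning" && Array.isArray(block.summary))
    .flatMap((block: any) =>
      block.summary
        .filter((item: any) => item.type === "summary_text")
        .map((item: any) => ({ type: "reasoning", reasoning: item.text }))
    );
  // Prepend the recovered reasoning to whatever LangChain already emits.
  return [...reasoningBlocks, ...message.contentBlocks];
}

console.log(extractOpenAIReasoning(message));
// -> [
//      { type: "reasoning", reasoning: "summary 1" },
//      { type: "reasoning", reasoning: "summary 2" },
//      { type: "text", text: "..." },
//    ]

Obviously I’d expect the provider mapping to do this for us, which is the point of the question.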