Trim_messages throwing "Unrecognized content block type"

The code below is consistently failing:

model = ChatOpenAI(
    model="gpt-5.1",
    use_responses_api=True,
    reasoning={"effort": effort, "summary": "auto"},
).bind_tools(all_tools, strict=True)

all_messages = [{"role": "system", "content": system_prompt}, *state.messages]

trimmed_messages = trim_messages(
    all_messages,
    strategy="last",
    token_counter=ChatOpenAI(model="gpt-5.1", reasoning={"effort": "low"}),
    max_tokens=400000,
    start_on="human",
    end_on=("human", "tool"),
    include_system=True,
    allow_partial=False,
)

await model.ainvoke(trimmed_messages, runnable_config)

Error trace:

Traceback (most recent call last):
  File "graph.py", line 57, in _invoke_model
    trimmed_messages = trim_messages(
                       ^^^^^^^^^^^^^^
  File "...langchain_core/messages/utils.py", line 409, in wrapped
    return func(messages, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "...langchain_core/messages/utils.py", line 1006, in trim_messages
    return _last_max_tokens(
           ^^^^^^^^^^^^^^^^^
  File "...langchain_core/messages/utils.py", line 1552, in _last_max_tokens
    reversed_result = _first_max_tokens(
                      ^^^^^^^^^^^^^^^^^^
  File "...langchain_core/messages/utils.py", line 1412, in _first_max_tokens
    if token_counter(messages) <= max_tokens:
       ^^^^^^^^^^^^^^^^^^^^^^^
  File "...langchain_openai/chat_models/base.py", line 1754, in get_num_tokens_from_messages
    raise ValueError(msg)
ValueError: Unrecognized content block type

{'id': 'rs_026075ce08c0201e006922928f9d60819cbe4399423e3e6f8b', 'summary': [{'index': 0, 'type': 'summary_text', 'text': '...'}], 'type': 'reasoning', 'index': 0}

Using:
langchain = "1.0.8"
langchain-openai = "1.0.3"

hi @sjay

Based on https://platform.openai.com/docs/guides/reasoning?api-mode=responses and https://platform.openai.com/docs/models, GPT-5 and GPT-5.1 emit reasoning blocks in AI responses.
Unfortunately, trim_messages (more precisely get_num_tokens_from_messages, which trim_messages calls as its token counter) does not support them yet:

                if isinstance(value, list):
                    # content or tool calls
                    for val in value:
                        if isinstance(val, str) or val["type"] == "text":
                            text = val["text"] if isinstance(val, dict) else val
                            num_tokens += len(encoding.encode(text))
                        elif val["type"] == "image_url":
                            ...
                        elif val["type"] == "function":
                            ...
                        elif val["type"] == "file":
                            ...
                        else:
                            msg = f"Unrecognized content block type\n\n{val}"
                            raise ValueError(msg)

Workarounds / fixes

  1. Use the fast, model-agnostic approximate counter

This bypasses strict block handling and won’t error on “reasoning”:

from langchain_core.messages import trim_messages
from langchain_core.messages.utils import count_tokens_approximately

trimmed_messages = trim_messages(
    all_messages,
    strategy="last",
    token_counter=count_tokens_approximately,  # <= approximate and robust
    max_tokens=400000,
    start_on="human",
    end_on=("human", "tool"),
    include_system=True,
    allow_partial=False,
)
  2. Sanitize history before trimming (drop or rewrite non-text blocks)

If you want to keep using exact OpenAI token counting, strip unsupported blocks (like "reasoning") from AIMessage.content before calling trim_messages:

from typing import List
from langchain_core.messages import BaseMessage

def strip_responses_only_blocks(messages: List[BaseMessage]) -> List[BaseMessage]:
    keep = {"text", "image_url", "function", "file"}
    cleaned: List[BaseMessage] = []
    for m in messages:
        if isinstance(m.content, list):
            new_blocks = [
                b for b in m.content
                if isinstance(b, dict) and b.get("type") in keep
            ]
            cleaned.append(m.model_copy(update={"content": new_blocks}))
        else:
            cleaned.append(m)
    return cleaned

sanitized = strip_responses_only_blocks(all_messages)
trimmed_messages = trim_messages(
    sanitized,
    strategy="last",
    token_counter=ChatOpenAI(model="gpt-5.1"),
    max_tokens=400000,
    start_on="human",
    end_on=("human", "tool"),
    include_system=True,
    allow_partial=False,
)
  3. If you must have exact counts with Responses API blocks

Implement a custom token counter that converts "reasoning" blocks into text (e.g., concatenate their summaries) before delegating to ChatOpenAI(...).get_num_tokens_from_messages. Then pass that function as token_counter.
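A minimal sketch of that conversion, assuming the block shape shown in the error output (a `"reasoning"` block with a `summary` list of `summary_text` entries); the block-rewriting helpers are hypothetical names, and the wrapper at the bottom shows one way to wire them into a token counter:

```python
def reasoning_block_to_text(block: dict) -> dict:
    """Rewrite a Responses-API "reasoning" block into a plain "text" block
    by concatenating its summary texts (shape taken from the error output)."""
    text = "\n".join(s.get("text", "") for s in block.get("summary", []))
    return {"type": "text", "text": text}

def convert_reasoning_blocks(content):
    """Pass string content through unchanged; in list content, rewrite any
    reasoning blocks and keep all other blocks as-is."""
    if not isinstance(content, list):
        return content
    return [
        reasoning_block_to_text(b)
        if isinstance(b, dict) and b.get("type") == "reasoning"
        else b
        for b in content
    ]

# Hypothetical wiring into trim_messages (untested against every block shape):
#
# counter_llm = ChatOpenAI(model="gpt-5.1")
#
# def count_tokens_with_reasoning(messages):
#     converted = [
#         m.model_copy(update={"content": convert_reasoning_blocks(m.content)})
#         for m in messages
#     ]
#     return counter_llm.get_num_tokens_from_messages(converted)
#
# trim_messages(..., token_counter=count_tokens_with_reasoning, ...)
```

This keeps the exact tiktoken-based counting for everything the counter already understands, and only rewrites the one block type it rejects.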


Always try the latest langchain and langchain-openai first, since Responses API support has evolved rapidly. However, as of the source referenced above, the OpenAI token counter still does not treat "reasoning" as a recognized content block. Until upstream adds native handling, or skips non-Chat-Completions blocks when counting, use (1) or (2).