Does metadata in AIMessage affect token usage in LangGraph?

Hi everyone,

I’m using LangGraph with a message state like this:

```python
from typing import Annotated, List

from typing_extensions import TypedDict

from langchain_core.messages import BaseMessage
from langgraph.graph.message import add_messages


class AgentState(TypedDict):
    messages: Annotated[List[BaseMessage], add_messages]
```

Each AIMessage returned by the model includes fields such as:

  • response_metadata
  • usage_metadata
  • id
  • etc.

When LangGraph builds the next prompt and sends the conversation history back to the model, does it include any of the metadata fields (like response_metadata or usage_metadata) in the serialized prompt?

Or are only the message content fields used when constructing the next model input?

I’m trying to understand whether keeping metadata inside the state increases token consumption, or if it only affects checkpoint storage size.

hi @meharaz733

No, metadata fields on AIMessage do not affect token consumption. They are stripped out during the message-to-provider-API conversion and are never sent to the LLM; only the message content (and tool calls) is serialized into the request. Metadata only affects checkpoint storage size.
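To illustrate the idea, here is a simplified sketch of that conversion step (not LangChain's actual converter, just a plain-Python stand-in): only role/content-style fields survive serialization into the provider request, so metadata never contributes tokens.

```python
# Simplified sketch -- NOT LangChain's real converter. It shows the
# principle: only the fields the model actually sees are serialized;
# metadata is dropped before the request is built.

def to_provider_payload(messages):
    """Keep only role/content (and tool calls); drop all metadata."""
    payload = []
    for msg in messages:
        entry = {"role": msg["role"], "content": msg["content"]}
        if msg.get("tool_calls"):
            entry["tool_calls"] = msg["tool_calls"]
        payload.append(entry)
    return payload


history = [
    {"role": "user", "content": "Hi!"},
    {
        "role": "assistant",
        "content": "Hello!",
        # These fields stay local -- they never reach the provider:
        "id": "run-abc123",
        "usage_metadata": {"input_tokens": 5, "output_tokens": 2, "total_tokens": 7},
        "response_metadata": {"model_name": "some-model", "finish_reason": "stop"},
    },
]

print(to_provider_payload(history))
# Only role and content remain in the serialized request.
```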

These metadata fields serve important local/observability purposes within LangChain/LangGraph:

  • usage_metadata: tracks token counts (input, output, total) per message. Used for monitoring, cost tracking, and by summarization middleware to decide when to trigger compression
  • response_metadata: stores provider-specific response info (model name, finish reason, logprobs, headers). Used for debugging, logging, and LangSmith tracing
  • id: used by add_messages to identify and replace messages (e.g. updating an existing message in state rather than appending a duplicate)
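To make the id-based replacement behavior concrete, here is a simplified re-implementation of the upsert logic (not the real add_messages reducer, which also handles deserialization, RemoveMessage, and id generation): a message in the update whose id matches an existing message replaces it in place instead of being appended.

```python
# Simplified sketch of add_messages' id-based upsert semantics,
# using plain dicts instead of BaseMessage objects.

def add_messages_sketch(left, right):
    """Append new messages; replace existing ones with a matching id."""
    merged = list(left)
    index_by_id = {m["id"]: i for i, m in enumerate(merged) if "id" in m}
    for msg in right:
        i = index_by_id.get(msg.get("id"))
        if i is None:
            merged.append(msg)   # unseen id (or no id): append
        else:
            merged[i] = msg      # known id: replace in place
    return merged


state = [{"id": "msg-1", "content": "draft answer"}]
update = [{"id": "msg-1", "content": "final answer"}]
print(add_messages_sketch(state, update))
# The draft is replaced in place rather than appended as a duplicate.
```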