Understanding langgraph usage_metadata

I am trying to track LLM token usage in my application using usage_metadata, currently with Gemini models. I am seeing the snippet below and I cannot make sense of it: the input and output tokens do not add up to the total.

"usage_metadata": {
    "input_tokens": 7235,
    "output_tokens": 44,
    "total_tokens": 7368,
    "input_token_details": {
        "cache_read": 0
    },
    "output_token_details": {
        "reasoning": 267
    }
}
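To make the mismatch concrete, here is a minimal sketch that just mirrors the numbers from the snippet above and sums the components both ways (with and without the reported reasoning tokens); neither sum matches `total_tokens`:

```python
# Mirrors the usage_metadata snippet reported above
usage_metadata = {
    "input_tokens": 7235,
    "output_tokens": 44,
    "total_tokens": 7368,
    "input_token_details": {"cache_read": 0},
    "output_token_details": {"reasoning": 267},
}

reasoning = usage_metadata["output_token_details"].get("reasoning", 0)
visible = usage_metadata["input_tokens"] + usage_metadata["output_tokens"]

# input + output alone falls short of the total: 7279 vs 7368
print(visible, usage_metadata["total_tokens"])

# adding reasoning tokens overshoots instead: 7546 vs 7368
print(visible + reasoning, usage_metadata["total_tokens"])
```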

Hey Josh,

Thanks for flagging! We're taking a look at this, and it should be resolved soon.