Howdy! I’m having trouble getting token usage to show up correctly in LangSmith when using Gemini models. Since there’s no Gemini wrapper, I can’t get the token metadata nested under usage_metadata, so it doesn’t roll up into the top-level LangGraph run summary. Here’s what OpenAI’s auto-formatted usage looks like vs. what I can produce with Gemini. Notice how mine isn’t under usage_metadata, so it’s not being summed. Could anyone share a pattern or example for shaping Gemini traces so token usage lands in usage_metadata and aggregates at the run level?
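For reference, here’s a minimal sketch of the reshaping I’m attempting. It assumes Gemini’s usage fields (`prompt_token_count`, `candidates_token_count`, `total_token_count`) and that LangSmith will aggregate a `usage_metadata` dict with `input_tokens`/`output_tokens`/`total_tokens` keys when it appears in a traced function’s outputs; the function and dict shapes here are my guesses, not a confirmed API.

```python
# Hypothetical helper: reshape Gemini's usage fields into the
# input/output/total-token shape I believe LangSmith aggregates.
def to_langsmith_usage(gemini_usage: dict) -> dict:
    return {
        "input_tokens": gemini_usage.get("prompt_token_count", 0),
        "output_tokens": gemini_usage.get("candidates_token_count", 0),
        "total_tokens": gemini_usage.get("total_token_count", 0),
    }

# In the traced node (e.g. a function decorated with langsmith's
# @traceable(run_type="llm")), I return the model text alongside a
# usage_metadata key, hoping LangSmith sums it at the run level:
def shape_gemini_output(response: dict) -> dict:
    return {
        "output": response["text"],
        "usage_metadata": to_langsmith_usage(response["usage_metadata"]),
    }
```

Is this roughly the right shape, or is there a different key/structure LangSmith expects for non-OpenAI models?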

