Is there a way to get the same message format in Python and TypeScript?

As you can see in the code below, the same piece of code in Python and TypeScript produces different results, and I need both responses to look the same.
I know I could convert the messages manually, but I wonder whether something in the LangChain ecosystem already does this.



TypeScript:

import { initChatModel } from "langchain/chat_models/universal";
const model = await initChatModel("ollama:granite4:micro", {
  temperature: 0.25
});

const response = await model.invoke("What is the capital of France?");
console.log(response.toJSON());

{
  lc: 1,
  type: 'constructor',
  id: [ 'langchain_core', 'messages', 'AIMessage' ],
  kwargs: {
    id: undefined,
    content: "The capital of France is Paris. It's known for its historical landmarks such as the Eiffel Tower, Louvre Museum, and Notre-Dame Cathedral.",
    tool_calls: [],
    response_metadata: {
      model: 'granite4:micro',
      created_at: '2025-11-10T10:35:50.78034Z',
      done: true,
      done_reason: 'stop',
      total_duration: 666412750,
      load_duration: 61871500,
      prompt_eval_count: 37,
      prompt_eval_duration: 62975917,
      eval_count: 33,
      eval_duration: 408072999
    },
    usage_metadata: {
      input_tokens: 37,
      output_tokens: 33,
      total_tokens: 70,
      input_token_details: {},
      output_token_details: {}
    },
    invalid_tool_calls: [],
    additional_kwargs: {}
  }
}

Python:

from langchain.chat_models import init_chat_model
model = init_chat_model("ollama:granite4:micro", temperature=0.25)

response = model.invoke("what is the capital of France?")

print("Response:", response.model_dump())

{
    "content": "The capital of France is Paris. It's located in the northern part of the country and serves as its political, cultural, and economic center.",
    "additional_kwargs": {},
    "response_metadata": {
        "model": "granite4:micro",
        "created_at": "2025-11-10T10:34:26.124632Z",
        "done": True,
        "done_reason": "stop",
        "total_duration": 608179417,
        "load_duration": 58786959,
        "prompt_eval_count": 37,
        "prompt_eval_duration": 58739791,
        "eval_count": 30,
        "eval_duration": 368132503,
        "model_name": "granite4:micro",
        "model_provider": "ollama",
    },
    "type": "ai",
    "name": None,
    "id": "lc_run--3e00c734-d3d2-4bde-b0dd-9d493c2fe50c-0",
    "tool_calls": [],
    "invalid_tool_calls": [],
    "usage_metadata": {"input_tokens": 37, "output_tokens": 30, "total_tokens": 67},
}

Hey! We don’t have an exported utility like .model_dump() in the JS version today, but we do use something like this to get a raw data structure for a message:

function modelDump(message: BaseMessage) {
  const { type, data } = message.toDict();
  return { ...data, type };
}

console.log(modelDump(message));
/**
 * {
 *   type: 'human',
 *   content: 'Hello, how are you?',
 *   response_metadata: {},
 * }
 */