Checkpointer with init_chat_model Function

I’m looking for some help with a graph checkpointer. I’m using init_chat_model so that users can call any provider and model they choose, and I’m passing provider specifics such as api_key or base_url through the RunnableConfig. The problem comes when I add {"thread_id": "1"} to the config when invoking the compiled graph. Specifically, when I end up calling an Anthropic model I get:

File "venv/lib/python3.12/site-packages/langchain_anthropic/chat_models.py", line 1316, in _create
    return self._client.messages.create(**payload)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "venv/lib/python3.12/site-packages/anthropic/_utils/_utils.py", line 283, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
TypeError: Messages.create() got an unexpected keyword argument 'thread_id'

It seems like the thread_id is being passed down to ChatAnthropic, which rejects it. Apparently the config passed when invoking the graph gets forwarded to my “assistant” node, which invokes the LLM.

I thought about passing things like api_key or base_url as **kwargs to init_chat_model() instead of in the RunnableConfig, but that doesn’t resolve the issue.

Please share code.

I’m guessing you’re doing something like

    model.invoke(messages, thread_id=foo)

rather than

    model.invoke(messages, config={"configurable": {"thread_id": foo}})

I’m using the second method:

    async def invoke2(self, prompt: str):
        # Build the ReAct-style graph: assistant node plus tool-executing node.
        self.graph.add_node("assistant", self._assistant_node)
        self.graph.add_node("tools", ToolNode(self.tool_manager.get_all_tools()))
        self.graph.add_edge(START, "assistant")
        self.graph.add_conditional_edges("assistant", tools_condition)
        self.graph.add_edge("tools", "assistant")
        react_graph = self.graph.compile(checkpointer=self.memory)

        # Make sure a thread_id is set under "configurable" for the checkpointer.
        if self.config is not None and self.config.get("configurable"):
            self.config["configurable"]["thread_id"] = "1"
        else:
            self.config = {"configurable": {"thread_id": "1"}}

        resp = react_graph.invoke({"messages": [HumanMessage(content=prompt)]}, config=self.config)

Hmm, I’m wondering: if I replace configurable_fields="any" in the init_chat_model() call with the subset of fields the models actually use, will that work? I lose some flexibility because I’d have to know all the available config options for every model provider.
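For reference, a minimal sketch of that idea; the field names listed here are assumptions, not an exhaustive set for any provider. With configurable_fields="any", every key in config["configurable"] (including thread_id) is forwarded to the model, whereas an explicit subset keeps graph-level keys from reaching it:

```python
from langchain.chat_models import init_chat_model

# Only these configurable keys get forwarded to the underlying model;
# anything else in config["configurable"] (e.g. thread_id) is ignored.
# The exact subset you need depends on which providers you support.
model = init_chat_model(
    configurable_fields=("model", "model_provider", "api_key", "base_url"),
)
```

The same API also takes a config_prefix, which makes the model read only keys carrying that prefix; that is another way to keep model config and graph config from colliding.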

I think I got it worked out. I stripped thread_id out of the config before invoking the model (not the graph) inside _assistant_node.