I’m using LangGraph with an AzureChatOpenAI chat model and persistence via Azure Redis Cache Premium and a custom RedisSaver checkpointer.
My state has:

```python
messages: Annotated[list[BaseMessage], add_messages]
```

so `add_messages` should merge turns automatically between runs.
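Spelled out, the state definition is roughly this (a minimal sketch; the class name `AgentState` is mine for illustration):

```python
from typing import Annotated
from typing_extensions import TypedDict
from langchain_core.messages import BaseMessage
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    # add_messages is a reducer: each node's update is appended/merged
    # into the existing list instead of replacing it, across checkpointed runs
    messages: Annotated[list[BaseMessage], add_messages]
```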
```python
from redis import Redis
from langgraph.graph import StateGraph

redis_client = Redis(host=..., port=6380, ssl=True, password=...)
checkpointer = RedisSaver(redis_client=redis_client)  # my custom checkpointer
graph = StateGraph(...).compile(checkpointer=checkpointer)
```
In my node, I call the chat model like this:

```python
tool_llm = llm.bind_tools(tools)
response = tool_llm.invoke(state["messages"])
```
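For context, the node around that call follows the usual `add_messages` pattern: it returns only the new `AIMessage`, and the reducer appends it to the stored history. A sketch of what I mean (the function name `agent_node` is mine; `llm` and `tools` are defined elsewhere):

```python
def agent_node(state: AgentState):
    # state["messages"] already contains the full restored history here
    tool_llm = llm.bind_tools(tools)
    response = tool_llm.invoke(state["messages"])
    # Return only the new AIMessage; add_messages merges it into state
    return {"messages": [response]}
```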
I only send the new `HumanMessage` on each `.invoke()`, and the checkpointer restores the full conversation from Redis. I can see the whole dialogue in `state["messages"]` before calling the model.
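Concretely, each turn is invoked with a `thread_id` so RedisSaver can restore the prior checkpoint; this is the standard LangGraph config shape, and the thread id value below is just an example:

```python
from langchain_core.messages import HumanMessage

# Pass only the new user turn; add_messages merges it with the
# history restored from Redis for this thread_id.
config = {"configurable": {"thread_id": "user-123"}}  # example id
result = graph.invoke(
    {"messages": [HumanMessage(content="What was my last question?")]},
    config=config,
)
```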
The problem:
Even though `state["messages"]` contains the complete `HumanMessage` / `AIMessage` sequence from all previous turns, the debug log shows AzureChatOpenAI being passed a flattened single string with `"Human: ... AI: ... Human: ..."`, instead of a proper `messages=[{"role": "user", "content": ...}, {"role": "assistant", "content": ...}]` array.
This means the chat model sees it as one user message, not separate turns — so queries like “What was my last question?” fail with “I don’t have access to previous interactions”, even though the history is in state.
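To illustrate the mismatch, here is the kind of diagnostic I can add inside the node (the print loop is mine, for clarity): the state side holds typed message objects, yet the payload side is one string.

```python
# Inside the node, the state clearly holds typed, role-separated messages:
for m in state["messages"]:
    print(type(m).__name__, str(m.content)[:60])
# prints HumanMessage / AIMessage / HumanMessage ... as expected,
# yet the request logged against Azure OpenAI contains a single
# concatenated "Human: ... AI: ..." string instead of separate turns.
```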
Question:

- How can I make `AzureChatOpenAI` (with LangGraph and RedisSaver) send my restored `state["messages"]` as role-separated chat turns in the API payload, not as one concatenated block?
- Is there a specific call pattern or option needed in `.invoke()` to preserve the roles when passing a list of message objects?
Environment:

- LangGraph: 0.6.3
- LangChain Core: 0.3.72 (for `BaseMessage`, `HumanMessage`, etc.)
- Python: 3.x
- Azure chat model: gpt-4o-2024-08-06
- Checkpointer: `RedisSaver` with Azure Redis Cache Premium (non-Enterprise)