Hi, I’m migrating from standard LangGraph to Deep Agents and hitting a state management issue.
━━━ Working pattern (standard LangGraph) ━━━
from langgraph.checkpoint.postgres import PostgresSaver
from langgraph.graph import StateGraph, MessagesState

class State(MessagesState):
    request_id: str
    user_data: dict

workflow = StateGraph(State)
compiled = workflow.compile(checkpointer=PostgresSaver(…))  # Persists state across runs

result = compiled.invoke({
    "messages": […],
    "request_id": "123",
    "user_data": {…},
})
Tools access state via runtime.state.get("request_id").
State persists across runs via the Postgres checkpointer.
━━━ Not working (Deep Agents) ━━━
class State(MessagesState):
    request_id: str
    user_data: dict

agent = create_deep_agent(
    tools=[…],
    context_schema=State,   # ← Tried this
    store=PostgresSaver(…), # Using the same Postgres checkpointer
)

result = agent.invoke({
    "messages": […],
    "request_id": "123",
    "user_data": {…},
})
runtime.state only has "messages".
request_id and user_data are missing, even with the checkpointer.
━━━ Questions ━━━
- Is context_schema meant to add fields to runtime.state, or is it typing-only?
- How do I pass custom context to Deep Agent tools (per-invocation data, not cross-thread)?
- Should I use store instead, or custom middleware like FilesystemMiddleware?
I saw in the file system discussion [1] that the files field works via middleware. Is that the only way to add custom state fields to Deep Agents?
Thanks!
[1] Deep Agent – Persistent File System - #9 by pawel-twardziak
context_schema in create_agent() defines static context passed in at runtime; it is not a custom state definition. Currently you need to handle this with middleware, as you noted. However, we are looking at supporting state_schema again, as in pre-v1.
Here’s some sample code showing how you can accomplish it with middleware:
from typing import Any

from langchain.agents import AgentState
from langchain.agents.middleware import AgentMiddleware
from deepagents import create_deep_agent

class CustomState(AgentState):
    user_name: str
    model_call_count: int

class CustomMiddleware(AgentMiddleware[CustomState]):
    state_schema = CustomState  # State lives here

    def before_model(self, state: CustomState, runtime) -> dict[str, Any] | None:
        # Access and modify state before the LLM call
        print(f"User: {state['user_name']}")
        return None

    def after_model(self, state: CustomState, runtime) -> dict[str, Any] | None:
        # Update state after the LLM call
        return {"model_call_count": state.get("model_call_count", 0) + 1}

agent = create_deep_agent(
    model="gpt-4o",
    tools=[…],
    middleware=[CustomMiddleware()],  # Pass middleware here
)
@xuro-langchain Thanks a lot for your reply — that really helped!
Later I realized your suggestion was already in the documentation (my bad): Middleware - Docs by LangChain
Now I’m running into a related issue: INVALID_CONCURRENT_GRAPH_UPDATE - Docs by LangChain
My hypothesis: since all my SubAgents share the same CustomMiddleware and CustomState (AgentState) schema, even for values that don't change, there may be a concurrency conflict when the subgraphs "return" and try to rewrite the same values in the shared main-graph state.
Here’s an example of the error I’m seeing:
InvalidUpdateError("At key 'load_id': Can receive only one value per step. Use an Annotated key to handle multiple values.\nFor troubleshooting, visit: https://docs.langchain.com/oss/python/langgraph/errors/INVALID_CONCURRENT_GRAPH_UPDATE")

Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/langgraph/pregel/main.py", line 2689, in stream
loop.after_tick()
File "/usr/local/lib/python3.12/site-packages/langgraph/pregel/_loop.py", line 545, in after_tick
self.updated_channels = apply_writes(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langgraph/pregel/_algo.py", line 294, in apply_writes
if channels[chan].update(vals) and next_version is not None:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langgraph/channels/last_value.py", line 64, in update
raise InvalidUpdateError(msg)
langgraph.errors.InvalidUpdateError: At key 'load_id': Can receive only one value per step. Use an Annotated key to handle multiple values.
For troubleshooting, visit: https://docs.langchain.com/oss/python/langgraph/errors/INVALID_CONCURRENT_GRAPH_UPDATE
Note: The load_id attribute (from my custom state) never changes.
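To make the hypothesis concrete, here is a toy model of last-value channel semantics (a pure-Python sketch, not LangGraph's actual implementation): two subagents writing load_id in the same step conflict, even when they write the identical value.

```python
class ToyLastValueChannel:
    """Sketch of last-value channel semantics: a key may receive at most
    one write per step, regardless of whether the values are equal."""

    def __init__(self):
        self.value = None

    def apply_writes(self, writes):
        if len(writes) > 1:
            # LangGraph raises InvalidUpdateError here; we mimic it.
            raise ValueError(
                "Can receive only one value per step. "
                "Use an Annotated key to handle multiple values."
            )
        if writes:
            self.value = writes[0]


channel = ToyLastValueChannel()
channel.apply_writes(["load-123"])  # one writer per step: fine

try:
    # Two subagents both return {"load_id": "load-123"} in the same step.
    channel.apply_writes(["load-123", "load-123"])
except ValueError as e:
    print(e)  # identical values still conflict without a reducer
```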
Does that theory make sense?
If so:
- Is the recommended fix (as mentioned in the docs) to create a reducer, perhaps one that safely replaces or merges the state values?
- In this case, order isn't really an issue because the value doesn't change, but for other state fields it might be. What are the best practices for handling this kind of concurrency in shared state?
That theory makes sense: if your graph has parallelism and tries to update the same state key in parallel, it needs a reducer function to handle the concurrent updates. Judging by your error, I'd say this is the case.
Our messages reducer appends to a list; in your case, because the value doesn't change, your reducer can simply keep the first value it receives. It may also be useful to validate that your assumption holds: in the reducer, you can check that all incoming values are the same.
The cleanest solution, however, if load_id never changes, is to exclude it from the state update entirely, if possible.
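Sketched out, that just means each subagent's update dict contains only the keys it actually changed; keys absent from the update are left untouched in the parent graph's state (after_model_update below is a hypothetical hook body, shaped like the middleware example earlier in the thread):

```python
def after_model_update(state: dict) -> dict:
    """Hypothetical hook body: build the state update WITHOUT load_id,
    so parallel subagents never write that key concurrently."""
    return {
        # Include only the fields this subagent actually changed;
        # "load_id" is deliberately omitted.
        "model_call_count": state.get("model_call_count", 0) + 1,
    }


update = after_model_update({"load_id": "load-123", "model_call_count": 2})
assert "load_id" not in update  # the shared key is never re-written
```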