State Loss in Hierarchical Multi-Agent System with Deep Agents and Custom AgentState


Problem Summary

I’m experiencing state loss when using nested deep agents (create_deep_agent) with a regular agent (create_agent) that has a custom state schema (subagentstate). The custom state updates made via ToolRuntime and Command.update within tools are lost when control returns from the agent to the parent deep agent.

Architecture

1_DeepAgent(State) → 2_DeepAgent(State) → 3_Agent(subagentstate) → Tool(ToolRuntime)

State Schemas

Top-Level State

class State(TypedDict):
    subagentstate: Optional[subagentstate]
    call_trace: Annotated[List[str], operator.add]

Custom Subagent State

class subagentstate(TypedDict):
    user_id: str

Implementation

Top-Level Supervisor (Deep Agent 1)

top_level_supervisor = create_deep_agent(
    model=llm,
    system_prompt=supervisor_prompt,
    subagents=[Subagent1, Subagent2],
    context_schema=State,
    checkpointer=SHORT_TERM_MEMORY,
    store=LONG_TERM_MEMORY,
)

Second-Level Supervisor (Deep Agent 2)

second_level_supervisor = create_deep_agent(
    model=llm,
    system_prompt=second_supervisor_prompt,
    subagents=[agent1, agent2],
    context_schema=State
)

Agent with Custom State Schema (Regular Agent)

agent1 = create_agent(
    model=llm,
    system_prompt=agent_prompt,
    tools=[
        tool1,
        tool2
    ],
    middleware=[
        TodoListMiddleware(
            system_prompt=custom_todo_prompt,
            tool_description=custom_todo_description,
        ),
        SummarizationMiddleware(model=llm, summary_prompt=custom_summary_prompt)
    ],
    response_format=customAgentResponse,
    state_schema=subagentstate,
    name="agent1",
)

Tool Implementation

@tool(
    "tool1",
    description="manipulates the user id",
    args_schema=ToolInputschema
)
def tool1(
    runtime: ToolRuntime,
) -> Command:
    user_id = runtime.state.get("user_id")
    tool_call_id = runtime.tool_call_id
    
    return Command(
        update={
            "messages": [
                ToolMessage(
                    content="user id updated",
                    tool_call_id=tool_call_id,
                )
            ],
            "user_id": "new_user_123"  
        },
    )

Observed Behavior

  1. Tool executes and returns Command with state updates
  2. Updates are applied within the agent’s execution context
  3. When control returns from agent1 to second_level_supervisor, the custom state updates are lost
  4. The user_id field in subagentstate reverts to its previous value or becomes None

Expected Behavior

Custom state updates made within tools should propagate back through the agent hierarchy and be accessible in the parent deep agent’s state.

Question

Is this expected behavior when nesting deep agents with a regular agent that uses a custom state schema? What’s the recommended approach for maintaining custom state across this hierarchy?

Hi @hwchase17, could you help me out here? This question has been unanswered for two months.

Hi @Pradeep,

1) Yes, IMHO this is expected with the current state shapes + Deep Agents’ subagent boundary behavior.

There are two separate issues mixed together:

  • State shape mismatch across the hierarchy:

    • Parent expects nested subagentstate.user_id
    • Child updates top-level user_id
    • There is no automatic mapping between user_id and subagentstate.user_id.
  • Deep Agents’ subagent handoff/return is top-level-key based (and filters keys):

    • Deep Agents’ task tool prepares subagent input state by taking runtime.state minus excluded keys and then overwriting messages with a fresh [HumanMessage(description)].
    • On return, it takes the subagent result dict and builds a Command(update=...) that:
      • extracts the final subagent message into a ToolMessage for the parent, and
      • merges other returned keys as top-level updates (again excluding specific keys).

Upstream source (Deep Agents):

  • _EXCLUDED_STATE_KEYS = {"messages", "todos", "structured_response", "skills_metadata", "memory_contents"}
  • Parent → child: subagent_state = {k: v for k, v in runtime.state.items() if k not in _EXCLUDED_STATE_KEYS} then subagent_state["messages"] = [HumanMessage(content=description)]
  • Child → parent: state_update = {k: v for k, v in result.items() if k not in _EXCLUDED_STATE_KEYS} then Command(update={**state_update, "messages": [ToolMessage(...)]})

See: deepagents.middleware.subagents in langchain-ai/deepagents (links in Sources).
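The boundary behavior above can be reproduced as a toy sketch in plain Python (the key set is copied from the thread; the function names are mine, not the library's):

```python
# Toy reproduction of Deep Agents' subagent boundary filtering, as described above.
_EXCLUDED_STATE_KEYS = {"messages", "todos", "structured_response",
                        "skills_metadata", "memory_contents"}

def parent_to_child(parent_state: dict, description: str) -> dict:
    # Parent -> child: copy everything except the excluded keys,
    # then start the child with a fresh conversation.
    sub = {k: v for k, v in parent_state.items() if k not in _EXCLUDED_STATE_KEYS}
    sub["messages"] = [("human", description)]
    return sub

def child_to_parent(result: dict) -> dict:
    # Child -> parent: only non-excluded top-level keys survive the return trip.
    return {k: v for k, v in result.items() if k not in _EXCLUDED_STATE_KEYS}
```

Note what this implies: a top-level `user_id` survives both directions, but an update the child writes under a different top-level key than the parent expects (e.g. flat `user_id` vs. nested `subagentstate.user_id`) is never mapped back, which is exactly the loss you observed.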

2) Also: make sure the child agent state schema really extends AgentState (must include messages)

Deep Agents’ CompiledSubAgent contract explicitly states the runnable’s state must include a messages key, because the parent extracts the final message from the subagent’s messages list to create the ToolMessage it returns upstream.

In LangChain v1 docs, custom state is expected to extend AgentState (which includes messages) when you pass state_schema to create_agent.

If your real subagentstate omits messages, you can get confusing behavior (or errors) even before you hit the state-shape mismatch.

Docs: “Customizing agent memory” / state_schema examples show inheriting from AgentState.

3) Don’t use context_schema as “mutable shared state”

Runtime context is designed for static per-run dependencies (e.g., user id, db connection, API key). The docs explicitly describe static runtime context as immutable during execution; mutable “dynamic context” is handled via LangGraph state instead.

So: if user_id is identity/config, prefer runtime context; if it’s workflow data that changes during the run, keep it in state (but then you must align schemas across graphs).

Docs: “Static runtime context” (/concepts/context) + “Runtime Context” (/langchain/context-engineering).

Fixes/patterns (choose based on intent)

Option A (simplest): Make the shared mutable field top-level everywhere

If you want user_id to propagate through the hierarchy as shared mutable state, define it consistently as a top-level key in every involved state schema (parents + child), and update user_id.

class State(AgentState):
    user_id: str | None
    call_trace: Annotated[list[str], operator.add]

Then tools update Command(update={"user_id": "..."}) and it round-trips cleanly.

Option B: Keep a nested subagentstate, but update subagentstate as a top-level key

If the parent’s contract is “everything is under subagentstate”, then the update has to write that exact top-level key:

return Command(update={
    "subagentstate": {"user_id": "new_user_123"},
    "messages": [ToolMessage(content="user id updated", tool_call_id=runtime.tool_call_id)],
})

This generally means the child agent also needs to carry subagentstate in its own state schema (so its tools/middleware can read/write it), or you need an adapter (Option C).

Option C: Add an explicit adapter at the subagent boundary (nested ↔ flat mapping)

If you prefer the child to work with user_id but the parent wants subagentstate.user_id, insert a tiny wrapper node/tool around the subagent invocation:

  • Before invoke: child_input["user_id"] = parent_state["subagentstate"]["user_id"]

  • After invoke: return Command(update={"subagentstate": {"user_id": child_state["user_id"]}, ...})

This makes the mapping explicit and prevents “it looked updated locally but didn’t persist upstream” surprises.
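A minimal adapter sketch in plain Python (names and the `invoke_child` callable are hypothetical stand-ins for however you run the subagent):

```python
def run_child_with_adapter(parent_state: dict, invoke_child) -> dict:
    """Map the parent's nested subagentstate.user_id to the child's flat
    user_id before invoking, and re-nest the result on the way back."""
    child_input = dict(parent_state)
    child_input["user_id"] = parent_state["subagentstate"]["user_id"]
    child_state = invoke_child(child_input)
    # Re-nest the (possibly updated) value into the parent's schema.
    return {"subagentstate": {"user_id": child_state["user_id"]}}
```

The adapter is trivial, but it turns an implicit schema mismatch into an explicit, testable mapping at the boundary.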

Option D (often best for identity): Put user_id in runtime context (and/or store), not shared mutable state

If user_id is identity, treat it as configuration:

  • Runtime context for per-run identity/config (static): tools read runtime.context.user_id
  • Store for durable memory across runs/threads (preferences, profiles, etc.)
  • State for transient workflow data and intermediate results

This avoids coupling state schemas across multiple nested graphs.
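As a rough illustration of the split (plain Python, not the LangChain API; all names here are mine): static identity travels in an immutable per-run context object, while only mutable workflow data lives in state.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RunContext:
    """Static per-run configuration: set once at invoke time, never mutated."""
    user_id: str
    api_key: str

def tool_like(ctx: RunContext, state: dict) -> dict:
    # Reads identity from the immutable context, returns only *state* updates.
    return {"call_trace": state["call_trace"] + [f"acted as {ctx.user_id}"]}

ctx = RunContext(user_id="user_42", api_key="...")
state = {"call_trace": []}
state.update(tool_like(ctx, state))
```

Because the context is frozen, nothing in the hierarchy can "lose" it; there is simply nothing to propagate back.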

Option E: If you truly need “hierarchical shared state”, consider LangGraph subgraphs (not task)

Deep Agents’ task is optimized for delegation with an isolated context window and a clean “return value” (ToolMessage + a filtered set of state updates).

If you need strict parent/child state coupling, build a LangGraph StateGraph using a single shared state schema (and reducers where needed), and compose with subgraphs so state ownership is explicit.
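To make "single shared schema with reducers" concrete, here is a toy merge in plain Python that mimics how a child's update is applied to shared state (this is an illustration of the channel/reducer idea, not the LangGraph API itself):

```python
import operator
from typing import Annotated, TypedDict

# One schema shared by parent and subgraph: user_id is last-write-wins,
# call_trace is accumulated via a reducer.
class SharedState(TypedDict):
    user_id: str
    call_trace: Annotated[list, operator.add]

def merge(state: dict, update: dict) -> dict:
    """Apply a child update to shared state, honoring per-key reducers."""
    reducers = {"call_trace": operator.add}
    out = dict(state)
    for k, v in update.items():
        out[k] = reducers[k](out[k], v) if k in reducers and k in out else v
    return out
```

With a shared schema like this, the child's updates land in the same channels the parent reads, so nothing is filtered away at a boundary.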

Sources

Hello @pawel-twardziak,

Thank you very much for the detailed explanation — it clarified a lot.

I followed Option A (making the shared fields top-level in the state schema across all agents). After doing that, I can see the updated state being returned correctly to the second-level supervisor.

However, I noticed the following:

  • When the value was defined in the context schema, updates were effectively ignored (except for messages) at the top-level and second-level supervisors.
  • Since context schema is immutable, any updated parameters in the child agent did not propagate upward.
  • To resolve this, I changed those fields from context_schema to state_schema when creating agents inside the Deep Agent (graph.py).
  • After this change, the state persisted correctly across all agents in the hierarchy.

My question is:

Is there a way to achieve this state persistence without modifying how the Deep Agent library handles state/context schemas?

I’d prefer not to adjust the Deep Agent internals if there’s a recommended pattern for handling shared mutable state across nested deep agents.

Thanks again for your guidance — it’s been extremely helpful.

Hi @Pradeep, I ran into the same problem.

Custom Middleware to Surface State Schema

By default, create_deep_agent only exposes context_schema, which, as mentioned above, is immutable. I discovered a workaround by creating a lightweight middleware that extends AgentMiddleware and sets state_schema on itself. When create_agent (called internally by create_deep_agent) builds the final schema, it merges state_schema from all middleware so your custom fields appear as top-level state channels.

Here’s a minimal example:

class StateSchemaMiddleware(AgentMiddleware):
    tools = ()

    def __init__(self, schema):
        super().__init__()
        self.state_schema = schema

my_agent = create_deep_agent(
    model=llm,
    system_prompt=prompt,
    context_schema=MyState,
    middleware=[StateSchemaMiddleware(MyState)],
    tools=[...],
)

I have found this makes the additional fields accessible as part of the agent’s state, even when using create_deep_agent.

Hi @syd ,

Thank you so much, that helped. 🙂

Found a similar solution in this post as well: Deep Agents: Custom state fields not accessible in ToolRuntime.state - OSS Product Help / LangGraph - LangChain Forum.

In addition, if you want to access a variable across multiple agents and tools, you can also store it in runtime.store and retrieve it later. In my opinion, this slightly reduces latency and complexity.