Command.update Not Applied When goto=[Send(), Send()] with graph=Command.PARENT

When using `Command` with a list of `Send()` objects and `graph=Command.PARENT` from within a subgraph, the `update` dict doesn’t seem to be reliably applied to the parent graph’s state. This causes state loss when the originating node needs to resume after all parallel branches complete.

## Environment

- **LangGraph version**: 10.4

- **Python version**: 3.11

## Architecture

```
Parent Graph (Network)
├── Supervisor Subgraph (compiled, added as node)
│   └── Returns Command(goto=[Send(...), Send(...)], graph=PARENT, update={...})
├── Expert1 Subgraph (compiled, added as node)
└── Expert2 Subgraph (compiled, added as node)
```

All subgraphs are compiled separately and added as nodes to the parent graph.

## The Issue

When a supervisor subgraph delegates to multiple expert subgraphs in parallel using:

```python
return Command(
    goto=[
        Send("expert1", expert1_state),
        Send("expert2", expert2_state),
    ],
    graph=Command.PARENT,
    update={
        "messages": [ai_message_with_tool_calls, tool_message_1, tool_message_2],
        "supervisor_context": "important_data",
    },
)
```

The `update` dict is **not reliably applied** to the parent graph’s state. When the experts complete and control returns to the supervisor, the supervisor’s previous state (which should have been preserved via the `update`) is missing.

## Expected vs Actual Behavior

### Expected Flow

1. Supervisor runs, has messages: `[SystemMessage, HumanMessage, AIMessage]`

2. Supervisor returns `Command` with `update={"messages": [...]}`

3. **Expected**: Parent graph state should have these messages checkpointed

4. Experts run in parallel (each with their own state via `Send`)

5. Experts complete and return control to supervisor

6. **Expected**: Supervisor should see the messages from step 2 + expert results

### Actual Behavior

At step 6, the supervisor’s original messages (including `SystemMessage`) are **missing**. Only the expert results are present in the state.

### Contrast: Single Send Works

When using a single `Send` (not a list), the `update` IS applied correctly:

```python
# This works - update is applied to parent state
return Command(
    goto=Send("expert1", expert1_state),  # Single Send, not a list
    graph=Command.PARENT,
    update={"messages": [ai_message, tool_message]},
)
```


The discrepancy between single `Send` and list of `Send` objects is the core issue.

## Reproduction Steps

1. Create a parent graph with 3 subgraphs: supervisor, expert1, expert2

2. Have the supervisor return `Command(goto=[Send(...), Send(...)], graph=Command.PARENT, update={...})`

3. Have experts perform work and return `Command(goto="supervisor", graph=Command.PARENT)`

4. Observe that the supervisor’s original state (from `update`) is not present when it resumes

## Questions

1. **Is this a bug?** Should the `update` be applied regardless of whether `goto` is a single `Send` or a list of `Send` objects?

2. **Is this intended?** Do `Send()` operations with lists have fundamentally different state semantics where `update` is ignored?

3. **Is this an architectural limitation?** Does the combination of:

- Subgraph returning `Command`

- `graph=Command.PARENT`

- `goto=[Send(), Send()]` (list)

…create a scenario where `update` cannot be reliably applied?

4. **Subgraph-to-subgraph routing question**: In my architecture, the supervisor is itself a subgraph on the parent graph (just like the experts). When I specify `graph=Command.PARENT`, is the `update` being applied to the **parent graph’s state**, but then when the Sends complete and route back to the supervisor node, there’s no mechanism to inject that parent state back into the supervisor subgraph’s internal state?

In other words:

```
Supervisor Subgraph (has its own internal state)
    │
    ├── Returns Command(update={...}, graph=PARENT)
    │       └── This updates PARENT graph state, not supervisor's internal state?
    │
    └── When control returns, supervisor reads its OWN internal state
            └── Which was never updated?
```
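The suspicion in the diagram above can be sketched with plain dicts. This is a toy model, not LangGraph's actual channel machinery: it only illustrates that an update applied to the parent's state is invisible to the subgraph's internal state unless the relevant keys are shared and re-seeded when the subgraph is entered again.

```python
# Simplified two-level model of the diagram above (NOT LangGraph internals).
parent_state = {"messages": []}
supervisor_state = {"messages": []}  # the subgraph's own internal state

# Supervisor returns Command(update=..., graph=Command.PARENT):
# the update lands on the parent state only.
parent_state["messages"] = parent_state["messages"] + ["ai_message_with_tool_calls"]

# When control routes back, the supervisor reads its OWN state,
# which the parent-level update never touched...
assert supervisor_state["messages"] == []

# ...unless shared keys are re-seeded from the parent on (re)entry,
# which is what shared state keys between the two schemas would provide.
shared_keys = parent_state.keys() & supervisor_state.keys()
supervisor_state.update({k: parent_state[k] for k in shared_keys})
assert supervisor_state["messages"] == ["ai_message_with_tool_calls"]
```

If this model is right, the fix on our side would be making the relevant keys part of both the parent and supervisor schemas.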


## Possibly Related Documentation

The docs mention:

> “Updates from a parallel superstep may not be ordered consistently. If you need a consistent, predetermined ordering of updates from a parallel superstep, you should write the outputs to a separate field in the state.”

Is this the same underlying issue? Is `Command.update` with a list of `Send()` treated as a “parallel superstep” where the update itself gets lost or overwritten?
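A toy model of that superstep merge may sharpen the question (plain Python again, not LangGraph's implementation): a key with a reducer folds in every update from the superstep regardless of arrival order, while a plain key keeps only the last write, so a lone `update` competing with parallel writes to the same key could be silently overwritten.

```python
import operator

# Toy model of merging all updates from one "parallel superstep".
# Keys listed in `reducers` accumulate; all other keys are last-write-wins.
def apply_superstep(state, updates, reducers):
    new_state = dict(state)
    for update in updates:  # updates arrive in arbitrary order
        for key, value in update.items():
            if key in reducers:
                new_state[key] = reducers[key](new_state.get(key, []), value)
            else:
                new_state[key] = value  # earlier writes are overwritten
    return new_state

state = {"messages": [], "supervisor_context": ""}
updates = [
    {"messages": ["supervisor_msg"], "supervisor_context": "important_data"},
    {"messages": ["expert1_result"]},
    {"messages": ["expert2_result"]},
]
merged = apply_superstep(state, updates, {"messages": operator.add})

# The reducer key kept every update (though ordering is not guaranteed);
# a plain key in the same position would have kept only the last write.
assert sorted(merged["messages"]) == ["expert1_result", "expert2_result", "supervisor_msg"]
```

If the supervisor's `update` really is folded into the same superstep as the expert branches, this would explain why a reducer-backed field (the "separate field" the docs suggest) survives while a plain field does not.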

## Current Workaround

We’re working around this by storing the supervisor’s full state in a dedicated field that gets passed through the expert state and survives the `Send` operations:

```python
# When delegating (first parallel Send)
if is_first_tool_call:
    supervisor_messages = state.get("messages", [])
    update_dict["supervisor_messages_snapshot"] = serialize(supervisor_messages)

# Pass through expert state
expert_state = {
    "messages": [...],
    "supervisor_messages_snapshot": supervisor_state.get("supervisor_messages_snapshot"),
    ...
}

# When expert returns to supervisor, restore from snapshot
def restore_supervisor_messages(state):
    snapshot = state.get("supervisor_messages_snapshot")
    if snapshot:
        return [deserialize(msg) for msg in snapshot]
    return []
```

This works, but it feels like we’re fighting the framework rather than using it as intended.
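For completeness, the `serialize` / `deserialize` helpers referenced above are elided in the snippet; a dependency-free sketch of what we use looks roughly like this (the `Message` namedtuple is a stand-in for real message objects, which could instead be round-tripped with `langchain_core.messages.message_to_dict` / `messages_from_dict`):

```python
from collections import namedtuple

# Stand-in message type for illustration; real code uses LangChain messages.
Message = namedtuple("Message", ["type", "content"])

def serialize(messages):
    # Convert messages to plain dicts so the snapshot survives checkpointing.
    return [m._asdict() for m in messages]

def deserialize(msg):
    # Rebuild a message from its dict form.
    return Message(**msg)

def restore_supervisor_messages(state):
    snapshot = state.get("supervisor_messages_snapshot")
    if snapshot:
        return [deserialize(msg) for msg in snapshot]
    return []

# Usage:
snap = serialize([Message("system", "You are a supervisor."),
                  Message("human", "Route this task.")])
restored = restore_supervisor_messages({"supervisor_messages_snapshot": snap})
assert restored[0].content == "You are a supervisor."
```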

## What Would Help

1. Clarification on whether `Command.update` is **supposed** to work with `goto=[Send(), Send()]`

2. If not supported, documentation update to clarify this limitation

3. If it’s a bug, a fix or recommended pattern for this use case

4. Guidance on the recommended architecture for “supervisor delegates to multiple experts in parallel, then resumes with full context”

Thank you!

Hi @jriedel199715,

Could you share your code, or at least a minimal reproducible example? I could debug it.