Hi all — I’m implementing a custom BaseStore backend (Aerospike) for LangGraph and I’m looking for guidance on recommended semantics for store.batch(ops).
I’ve seen two plausible approaches:
Option A: Sequential semantics (apply writes immediately)
Process ops in order:
- `PutOp` → write immediately, return `None`
- subsequent `GetOp`/`SearchOp` see the newly written data
Pros:
- Intuitive: later reads in the same batch reflect earlier writes
- Matches "transaction-like" ordering (even if not atomic)
Cons:
- More network calls (one write per `PutOp`)
- Less opportunity for write dedup/batching optimizations
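To make the trade-off concrete, here is a minimal sketch of Option A's sequential semantics against a plain in-memory dict. The `PutOp`/`GetOp` dataclasses and `batch_sequential` function are simplified stand-ins I made up for illustration, not LangGraph's actual types:

```python
from dataclasses import dataclass

# Simplified stand-ins for the store op types (illustrative only).
@dataclass
class PutOp:
    namespace: tuple
    key: str
    value: dict

@dataclass
class GetOp:
    namespace: tuple
    key: str

def batch_sequential(store: dict, ops: list) -> list:
    """Option A: apply each op in order, so later reads see earlier writes."""
    results = []
    for op in ops:
        if isinstance(op, PutOp):
            store[(op.namespace, op.key)] = op.value  # one write per PutOp
            results.append(None)
        elif isinstance(op, GetOp):
            results.append(store.get((op.namespace, op.key)))
    return results

# A GetOp after a PutOp in the same batch sees the new value:
store = {}
out = batch_sequential(store, [
    PutOp(("users",), "u1", {"name": "Ada"}),
    GetOp(("users",), "u1"),
])
# out == [None, {"name": "Ada"}]
```

The cost shows up in the write path: against Aerospike, each `PutOp` would be its own network round-trip.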
Option B: Deferred/deduped writes (apply writes after reads)
Process all reads (`GetOp`, `SearchOp`, `ListNamespacesOp`) first, buffer `PutOp`s in a dict (deduped by `(namespace, key)`), then apply writes at the end (or in a bulk operation if supported).
Pros:
- Can reduce write load (dedupe) and improve throughput
- Allows "read snapshot" semantics (all reads reflect the pre-batch state)
Cons:
- Surprising if a `GetOp` after a `PutOp` in the same ops list doesn't see the put
- Requires documenting snapshot semantics clearly
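And the corresponding sketch of Option B, showing both the snapshot reads and the last-write-wins dedupe. Again, the op classes and `batch_deferred` are hypothetical stand-ins, not LangGraph's API:

```python
from dataclasses import dataclass

# Simplified stand-ins for the store op types (illustrative only).
@dataclass
class PutOp:
    namespace: tuple
    key: str
    value: dict

@dataclass
class GetOp:
    namespace: tuple
    key: str

def batch_deferred(store: dict, ops: list) -> list:
    """Option B: reads run against the pre-batch snapshot; writes are
    deduped by (namespace, key) and applied at the end."""
    results: list = [None] * len(ops)
    pending: dict = {}
    for i, op in enumerate(ops):
        if isinstance(op, PutOp):
            pending[(op.namespace, op.key)] = op.value  # last write wins
        elif isinstance(op, GetOp):
            results[i] = store.get((op.namespace, op.key))  # snapshot read
    store.update(pending)  # could be a single bulk write against the backend
    return results

store = {}
out = batch_deferred(store, [
    PutOp(("users",), "u1", {"name": "Ada"}),
    GetOp(("users",), "u1"),                      # does NOT see the put
    PutOp(("users",), "u1", {"name": "Grace"}),   # dedupes the first put
])
# out == [None, None, None]; only the last put reaches the store
```

This is exactly the surprising case from the cons list: the `GetOp` returns `None` even though a `PutOp` for the same key precedes it in the ops list, and the two puts collapse into one stored write.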
Question: What does LangGraph expect batch() to do — should it guarantee that reads later in the ops list see earlier writes, or is it acceptable/recommended to treat reads as happening on a consistent “pre-write snapshot” and apply writes afterward?
Also: I'm currently using sequential semantics for the sync `batch()` and deferred writes for the async `batch()`.
Thanks!