Deep Agent Filesystem vs Store Backend

I am using Deep Agents with the filesystem backend, but I realized that cross-conversation memory between multiple agents doesn't work, and it doesn't integrate with a normal LangChain agent (which uses the LangGraph store). Do I have to use the LangGraph store for the deep agent as well? If so, don't I lose the performance and other benefits of the filesystem?

Thanks,

Ishveen Kaur

Yes, you can use the traditional LangGraph Store with a deep agent without losing the performance benefits.

I’ve been looking into this same issue. Managing a full VM for simple filesystem tasks is overkill and adds a lot of latency.

I actually ended up building a lightweight, serverless Bash API using a simulated TypeScript environment to solve this for my own agents. It boots in ~5ms and is completely sandboxed.

If you’re interested in the architecture or want to test the endpoint, let me know. I’m looking for feedback from other LangChain devs.

Hi @ishveen-ai,

If you want your Deep Agents to share state and “talk” to other LangGraph/LangChain agents via the LangGraph store, the data that should be shared does need to go through a LangGraph Store (via StoreBackend). However, you do not have to give up the filesystem: you can use a hybrid backend where most heavy file work stays on the local filesystem, and only the small pieces of information that need to be shared go into the store.

Why your current setup doesn’t cross-talk

  • The filesystem backend (FilesystemBackend) writes to the real OS filesystem under a root directory. It is fast and great for working with code, logs, and other large files, but:
    • Those files live outside LangGraph’s Store.
    • A “normal” LangGraph agent that only looks at the Store will not see those files unless you explicitly give it tools to read the same directory.
  • Deep Agents features that talk about long-term memory and cross-conversation are implemented on top of LangGraph’s BaseStore via StoreBackend, not the raw filesystem.
  • So when you use only FilesystemBackend, your agents may share a disk directory, but they are not sharing the LangGraph store state that other agents rely on for memory.
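To make the mismatch concrete, here is a stdlib-only Python sketch. The dict standing in for the LangGraph Store and the variable names are simplifications for illustration, not the real BaseStore API: files written to the OS filesystem never appear in the store, so a store-only agent cannot see them.

```python
import os
import tempfile

# Stand-in for a LangGraph-style store: namespaced key/value entries.
# (Real stores use (namespace, key) addressing; a dict keyed by tuples
# is enough to show the point.)
shared_store: dict[tuple, dict] = {}

workspace = tempfile.mkdtemp()

# Deep Agent with FilesystemBackend: writes a real file on disk.
with open(os.path.join(workspace, "notes.txt"), "w") as f:
    f.write("scratch work")

# A "normal" agent that only consults the store sees nothing,
# because the file write never touched the store.
visible_to_store_agent = [k for k in shared_store if k[0] == "memories"]
print(visible_to_store_agent)  # -> []

# Only data explicitly written into the store is shared:
shared_store[("memories", "user_prefs")] = {"theme": "dark"}
print(shared_store[("memories", "user_prefs")])  # -> {'theme': 'dark'}
```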

What you need for cross-conversation with other LangGraph agents

To get real cross-conversation between:

  • multiple Deep Agents, and
  • a “normal” LangGraph agent that is already using BaseStore

you need all of them to share a common Store instance (or at least a shared namespace within one).

For Deep Agents, that means:

  • Configure a StoreBackend (or CompositeBackend that includes a StoreBackend) for the paths that represent shared memory
  • Pass the same store object you use for your other LangGraph agent into create_deep_agent

Conceptually in Python (the API may vary slightly by version, so check the current Deep Agents docs):

from deepagents import create_deep_agent
from deepagents.backends import CompositeBackend, FilesystemBackend, StoreBackend
from langgraph.store.memory import InMemoryStore  # or SQLite/Postgres/etc.

# 1. Create a Store that both your "normal" agent and Deep Agent will share
shared_store = InMemoryStore()

# 2. Define a hybrid backend for the Deep Agent
def make_backend(runtime):
    return CompositeBackend(
        # Most file activity stays on the fast local filesystem
        default=FilesystemBackend(root_dir="/srv/agents/workspace"),
        # Anything under /memories/ (or another prefix you choose)
        # is mapped into the LangGraph Store via StoreBackend
        routes={
            "/memories/": StoreBackend(store=shared_store),
        },
    )

# 3. Create the Deep Agent wired to the same store
agent = create_deep_agent(
    backend=make_backend,
    store=shared_store,
)

With CompositeBackend you can:

  • Make FilesystemBackend the default for speed
  • Route just one or two prefixes (e.g. /memories/, /shared/) to StoreBackend so they are visible to other agents
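The prefix-routing idea behind this can be sketched in plain Python. The classes below are simplified, hypothetical stand-ins for illustration only, not the actual deepagents backend classes:

```python
class StoreRoute:
    """Writes land in a shared dict (playing the role of the LangGraph Store)."""
    def __init__(self, store: dict):
        self.store = store

    def write(self, path: str, data: str) -> None:
        self.store[path] = data


class FilesystemRoute:
    """Default route: records paths that would go to the local filesystem."""
    def __init__(self):
        self.written: dict[str, str] = {}

    def write(self, path: str, data: str) -> None:
        self.written[path] = data


class CompositeRouter:
    """Dispatch by path prefix, falling back to the default backend."""
    def __init__(self, default, routes: dict):
        self.default = default
        self.routes = routes  # prefix -> backend

    def write(self, path: str, data: str) -> None:
        for prefix, backend in self.routes.items():
            if path.startswith(prefix):
                backend.write(path, data)
                return
        self.default.write(path, data)


shared = {}
router = CompositeRouter(
    default=FilesystemRoute(),
    routes={"/memories/": StoreRoute(shared)},
)
router.write("/src/main.py", "print('hi')")      # stays "on disk"
router.write("/memories/prefs.json", '{"a": 1}')  # goes to the shared store
print(sorted(shared))  # -> ['/memories/prefs.json']
```

The design point is that only writes under the routed prefix ever pay the cost of a store round-trip; everything else hits the default backend.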

Performance‑wise:

  • Local InMemoryStore or SQLite‑based stores are typically fast enough for small memory objects.
  • Because you are only putting small, structured data in the store, the overhead is low compared to shoving entire project trees into it.

So the pattern could be:

  1. Use FilesystemBackend (or a sandboxed version of it) for almost all file I/O
  2. Layer a StoreBackend behind a CompositeBackend for the “memory” paths that should be shared with other agents
  3. Point your non‑Deep LangGraph agents at the same store so they can read/write those shared memory entries

This gives you:

  • Cross‑conversation, cross‑agent memory via LangGraph Store, and
  • The performance and ergonomics of a real filesystem for everything else.

Thank you so much, @pawel-twardziak! This makes a lot more sense now. After I posted this question, I decided to have a main agent that uses my deep agent and other agents as tools (using the subagent architecture) so that I could share the backends via StoreBackend, but I will try your solution so I can include the filesystem as well.
