Help deploying createDeepAgent based agent on LangGraph Cloud

Hi,

TLDR: Any docs related to deploying an agent created with createDeepAgent on LangGraph Cloud?

Context: I already have a conventional LangGraph workflow agent deployed on LangGraph Cloud. Now I'm building a new graph and want to use createDeepAgent, because I want a more sophisticated agent with subagents, memory, etc.

Is there a way to host such an agent on LangGraph Cloud, and if so, where are the docs for doing so? I'm using the JS/TS version of LangChain/LangGraph.

Thanks!

Hiya! A deep agent is a compiled Pregel object, the same as any LangGraph workflow, so you can deploy it directly: point to the exported agent variable in your config, push that to a repo, and connect the repo for automatic deployment.

A longer response from Chat LangChain that happens to be mostly correct is below:

Example Project Structure

my-deep-agent/
├── src/
│   └── agent.ts          # Your createDeepAgent code
├── package.json
└── langgraph.json

1. Create Your Deep Agent

// src/agent.ts
import { createDeepAgent, CompositeBackend, StateBackend, StoreBackend } from "deepagents";
import { ChatAnthropic } from "@langchain/anthropic";

// Define subagents for your sophisticated agent
const researchSubagent = {
  name: "research-agent",
  description: "Conducts in-depth research using web search",
  systemPrompt: "You are a thorough researcher...",
  tools: [internetSearch],  // internetSearch must be defined or imported in this file
};

// Create and export the deep agent with memory
export const agent = createDeepAgent({
  model: new ChatAnthropic({ model: "claude-sonnet-4-5-20250929" }),
  subagents: [researchSubagent],
  // CompositeBackend gives you both ephemeral and persistent memory
  backend: (rt) => new CompositeBackend(
    new StateBackend(rt),  // Ephemeral (single thread)
    { "/memories/": new StoreBackend(rt) }  // Persistent (across threads)
  ),
  systemPrompt: "You are a sophisticated agent with memory and subagents...",
});

2. Configure langgraph.json

{
  "node_version": "20",
  "dependencies": ["."],
  "graphs": {
    "deep_agent": "./src/agent.ts:agent"
  },
  "env": ".env"
}

The key is the graphs entry: it references your exported agent variable in the format ./path/to/file.ts:exportedVariableName.

3. Deploy via LangSmith Deployment

From your project directory:

# Test locally first
npx @langchain/langgraph-cli dev

# Then deploy to LangSmith
# (Connect your GitHub repo via the LangSmith UI)



If I want to deploy a deep agent on LangGraph Cloud that uses a sandbox backend, how do I manage the sandbox lifetime?

Deep agent samples like this one advise calling shutdown(), which is simple enough for a command-line app, but I'm not sure how to ensure sandboxes get shut down at the right time in the context of a LangGraph Cloud deployment.

Thanks in advance! –Andrew B.

Hey @szabrown — great question :waving_hand: this is an important nuance when moving from CLI → Cloud.

You’re right: in local/CLI examples, you often manually call:

await backend.shutdown()

But in LangGraph Cloud, you don’t control process lifetime directly — the runtime does.


Key Concept: Treat Sandboxes as Per-Run Resources

In LangGraph Cloud:

  • Each invocation = isolated graph run

  • You should not rely on process-level teardown

  • You shouldn’t depend on global shutdown hooks

So instead of managing sandbox lifetime at the app level, scope it to the graph run or thread lifecycle.


Recommended Patterns

:one: Bind Sandbox Lifetime to Runtime Context (Best Practice)

Since your backend factory receives rt (the runtime), create the sandbox inside that factory and tie cleanup to run completion.

Instead of:

const sandbox = new Sandbox()

Use a pattern like:

backend: (rt) => {
  const sandbox = new Sandbox()

  rt.onShutdown(async () => {
    await sandbox.shutdown()
  })

  return new CompositeBackend(...)
}

This ensures cleanup happens when the runtime tears down the execution context.
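
I'm not certain the LangGraph JS runtime actually exposes an onShutdown hook, so treat the snippet above as illustrative. If your runtime doesn't provide one, you can implement the same registration idea yourself with a small disposer list, a self-contained sketch:

```typescript
type Disposer = () => Promise<void>;

// Minimal disposer registry standing in for a runtime-provided
// shutdown hook: register cleanup when a resource is created,
// then tear everything down when the run ends.
class RunScope {
  private disposers: Disposer[] = [];

  onShutdown(fn: Disposer): void {
    this.disposers.push(fn);
  }

  // Dispose in reverse registration order, like stacked defers,
  // so later resources (which may depend on earlier ones) go first.
  async teardown(): Promise<void> {
    while (this.disposers.length > 0) {
      await this.disposers.pop()!();
    }
  }
}
```

Wrap each graph run in a RunScope, register sandbox.shutdown() at creation time, and call teardown() in a finally block around the run.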


:two: Make Sandboxes Ephemeral Per Invocation

If the sandbox is execution-scoped (e.g., code execution tool):

  • Create it lazily

  • Dispose after tool execution

  • Avoid long-lived sandbox objects

Example pattern:

async function runInSandbox(code: string) {
  const sandbox = new Sandbox()
  try {
    return await sandbox.run(code)
  } finally {
    await sandbox.shutdown()
  }
}

This is often cleaner in cloud deployments.
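
One property worth calling out: the finally block fires even when execution throws, so a crashed tool call still releases its sandbox. A self-contained sketch with a stub sandbox class that records lifecycle events (a real SDK's method names will differ):

```typescript
// Stub standing in for a real sandbox SDK (E2B, a Docker wrapper, etc.).
class StubSandbox {
  constructor(private log: string[]) {
    this.log.push("created");
  }
  async run(code: string): Promise<string> {
    if (code === "boom") throw new Error("execution failed");
    return `result of: ${code}`;
  }
  async shutdown(): Promise<void> {
    this.log.push("shutdown");
  }
}

// Same shape as runInSandbox above: shutdown always runs,
// whether run() succeeds or throws.
async function runOnce(log: string[], code: string): Promise<string> {
  const sandbox = new StubSandbox(log);
  try {
    return await sandbox.run(code);
  } finally {
    await sandbox.shutdown();
  }
}
```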


:three: Avoid Global Singletons in Cloud

Don’t do:

let sandbox = new Sandbox()

Because:

  • Multiple concurrent runs

  • No guarantee when container is recycled

  • Risk of leaked resources

Cloud ≠ long-running CLI process.


What NOT to Rely On

  • Process exit handlers

  • process.on("SIGTERM")

  • Manual shutdown scripts

LangGraph Cloud manages lifecycle for you.


Architectural Guideline

Ask:

Is this sandbox:

  • Per tool call?

  • Per thread?

  • Per deployment?

Most use cases → per tool call or per thread is safest.


Practical Rule of Thumb

If it’s:

  • CPU-heavy

  • Memory-heavy

  • Security-sensitive (like code execution)

Make it short-lived and explicitly closed in the same scope it was created.


If you share which sandbox backend you’re using (e.g., E2B, custom Docker, VM-based, etc.), I can give a more concrete recommendation.