Call multiple Deep Agents from a parent LangGraph/Deep Agent

Hi Team,

I have multiple Deep Agents, each with its own set of tools. I need to create an orchestrator Deep Agent that has access to these sub-agents.

How can I do this?

Note: I don't want plain subagents, because I already have dedicated Deep Agents with dedicated MD instruction files.

hi @Somanath

In Deep Agents (Python), the “orchestrator that can delegate” pattern is built-in: you create one parent Deep Agent and give it a set of named delegates via create_deep_agent(..., subagents=[...]). The key detail for your constraint (“my delegates are themselves Deep Agents with dedicated Markdown instructions”) is to use compiled subagents: pass each specialist Deep Agent graph as a runnable in a CompiledSubAgent.

This way, your “subagents” are not a different lightweight concept - they can be full Deep Agents that you configure independently (tools, memory/AGENTS.md, backend, etc.), and the parent just calls them through the built-in task tool.

Pattern A: Parent Deep Agent + compiled subagents (each is a Deep Agent)

  • Create each specialist as its own Deep Agent with its own toolset and its own instruction Markdown loaded via memory=[...] (Deep Agents’ MemoryMiddleware loads “AGENTS.md-style” Markdown into the system prompt).
  • Create an orchestrator Deep Agent and pass those specialists as {"name", "description", "runnable"} entries in subagents=[...].
from langchain_core.messages import HumanMessage

from deepagents import create_deep_agent
from deepagents.backends import FilesystemBackend

# 1) Backend so Deep Agents can load memory files from disk.
#    (Pick a root_dir that can see your instruction files.)
backend = FilesystemBackend(root_dir="/")

# 2) Specialist Deep Agents (each with its own tools + Markdown instructions)
sql_agent = create_deep_agent(
    tools=[...],  # SQL tools only
    system_prompt="You are a SQL specialist. Keep answers precise.",
    memory=["/path/to/sql_agent/AGENTS.md"],  # your dedicated MD instructions
    backend=backend,
    name="sql-agent",
)

retrieval_agent = create_deep_agent(
    tools=[...],  # retrieval/RAG tools only
    system_prompt="You are a retrieval specialist. Cite sources you used.",
    memory=["/path/to/retrieval_agent/AGENTS.md"],
    backend=backend,
    name="retrieval-agent",
)

# 3) Orchestrator Deep Agent that can delegate via `task`
orchestrator = create_deep_agent(
    tools=[...],  # orchestrator can have zero or minimal tools if you want
    system_prompt=(
        "You are an orchestrator. Decide which specialist should handle each part. "
        "Delegate with the `task` tool and then synthesize a final answer."
    ),
    subagents=[
        {
            "name": "sql",
            "description": "Handles SQL querying, schema reasoning, and query optimization.",
            "runnable": sql_agent,  # <-- a Deep Agent graph
        },
        {
            "name": "retrieval",
            "description": "Handles document retrieval and grounded answers from sources.",
            "runnable": retrieval_agent,  # <-- a Deep Agent graph
        },
    ],
    backend=backend,
    name="orchestrator",
)

# 4) Run it
result = orchestrator.invoke(
    {"messages": [HumanMessage(content="Answer this question; query SQL if needed; cite docs if needed.")]},
)
print(result["messages"][-1].content)

Why this matches your “dedicated MD files” requirement

Each specialist still loads its own Markdown instructions via memory=[...], so your dedicated MD files stay attached to their own agents; the orchestrator only adds a delegation layer on top.

Important behavioral detail (so you’re not surprised)

  • The parent delegates work via a built-in task(description, subagent_type=...) tool (implemented by SubAgentMiddleware). The subagent is invoked with the provided description as a new HumanMessage, rather than automatically receiving the entire conversation history. If your specialists need context, include it explicitly in the delegated description, or build a custom LangGraph supervisor that forwards whatever state you want. (This behavior is visible in the Deep Agents SubAgentMiddleware implementation in the repo.)
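Since only the description string crosses the delegation boundary, a common trick is to serialize the relevant parent turns into that string. A minimal sketch (the `PARENT_HISTORY`/`TASK` layout is just a convention for the prompt, not a Deep Agents API, and `build_task_description` is an illustrative helper):

```python
def build_task_description(task: str, history: list[tuple[str, str]]) -> str:
    """Pack relevant parent context into the delegated description,
    because the subagent will NOT see the parent's message list."""
    context = "\n".join(f"[{role}] {text}" for role, text in history)
    return f"PARENT_HISTORY:\n{context}\n\nTASK:\n{task}"

# The parent (guided by its system prompt) would then delegate with
# something like: task(description=build_task_description(...), subagent_type="sql")
```
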

Pattern B (if you really want “no Deep Agents subagent mechanism”): wrap agents as tools

If your main objection is the mechanism (the task tool + subagents=) rather than the architecture, LangChain’s multi-agent pattern is to wrap each specialist agent as a tool and let a parent agent call them. This is described in LangChain’s “subagents” docs (Subagents - Docs by LangChain).

Conceptually:

  • build specialist runnables with create_agent(...) or create_deep_agent(...)
  • expose each as a @tool function that calls .invoke(...) and returns the final text
  • give those tools to your orchestrator create_agent(...)

This gives you explicit control over what gets forwarded (messages/state), at the cost of writing a little glue.
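A hedged sketch of that glue, assuming specialist agents built as in Pattern A. `as_tool` is an illustrative helper, not a LangChain API; in real code you would decorate the returned function with LangChain's `tool(...)` before handing it to the orchestrator:

```python
def as_tool(agent, name: str):
    """Wrap a specialist agent as a simple callable: forward one question,
    return only the final answer text. Wrap the result with LangChain's
    `tool(...)` before passing it to create_agent(...)."""
    def ask(question: str) -> str:
        result = agent.invoke({"messages": [{"role": "user", "content": question}]})
        last = result["messages"][-1]
        # Support both dict-style and BaseMessage-style message objects.
        return last["content"] if isinstance(last, dict) else last.content
    ask.__name__ = f"ask_{name}"
    ask.__doc__ = f"Delegate a question to the {name} specialist agent."
    return ask
```

Because you author this wrapper yourself, you control exactly what gets forwarded - for example, you could prepend serialized parent history to `question` before invoking the specialist.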

Hi, my requirement is that we are planning to build autonomous agents for various use cases.

1. Say, a task-1 specialist Deep Agent (works nicely)

2. Say, a task-2 specialist Deep Agent (works nicely)

We have individual Deep Agents which achieve this perfectly. However, I am looking for an option where we host these individual agents as API endpoints, pass context, and redirect to a specific agent, so that history and context are maintained separately.

A2A protocol is something we are eyeing for.

You can do what you are describing using LangGraph/LangChain - history and context can be maintained separately within each subagent.

With A2A, agents communicate with each other via HTTP endpoints, which might be overhead/over-engineering in some cases.

Practical rule of thumb

  • Same process / same app → direct calls (Deep Agents subagents / tool wrappers).
  • Different processes / different frameworks / remote services → A2A (endpoints), optionally served by one shared server that hosts multiple agents.

Hi, thanks!!

Can you please share some code snippets for sending history from a parent to a child agent, if you have an example somewhere in the LangGraph or Deep Agents docs? Thanks a lot for answering patiently!!


Also, is there any way we can get the child agent's history?

yes, let me prepare something for you

I’ve prepared this (follow up in case of further questions):

"""
Demo: Parent -> Child Deep Agent delegation with history + Postgres persistence.

What this script showcases (answers the forum follow-up):
1) **How to send parent history to a child agent**
   - Approach A (works with Deep Agents built-in `task` tool):
     The parent includes (serializes) relevant history inside the `description` it sends.
     This is necessary because Deep Agents' SubAgentMiddleware invokes subagents with a
     *new* HumanMessage(description), not the full message list by default.
   - Approach B (direct "message list" priming):
     You can explicitly "prime" a child thread by writing the parent's messages into the
     child's thread once (so the child has native message history in its own state).

2) **How to get the child agent history**
   - Use `child_agent.get_state(child_config).values["messages"]` (persisted via PostgresSaver).

Prereqs:
- `OPENAI_API_KEY` in your environment (loaded via dotenv)
- `POSTGRES_URI` pointing to a Postgres DB (tables will be created on first run)

Example .env:
  OPENAI_API_KEY=...
  POSTGRES_URI=postgres://postgres:postgres@localhost:5432/postgres?sslmode=disable
"""

from __future__ import annotations

import os
import textwrap
import uuid
from contextlib import ExitStack
from typing import Iterable

try:
    # Requested by the author: load env vars from a local `.env`.
    from dotenv import load_dotenv
except ModuleNotFoundError:  # pragma: no cover
    load_dotenv = None  # type: ignore[assignment]
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage
from langchain_core.tools import tool
from langgraph.checkpoint.postgres import PostgresSaver

from deepagents import create_deep_agent


def _must_getenv(name: str) -> str:
    value = os.getenv(name, "").strip()
    if not value:
        raise RuntimeError(f"Missing required env var: {name}")
    return value


def _role(msg: BaseMessage) -> str:
    # BaseMessage has .type ("human", "ai", "tool", ...) in LC.
    t = getattr(msg, "type", None)
    if t == "human":
        return "user"
    if t == "ai":
        return "assistant"
    return t or "unknown"


def _safe_text(msg: BaseMessage) -> str:
    # Most message types expose `.content`. For AIMessage, `.content` may be list blocks.
    c = getattr(msg, "content", "")
    if isinstance(c, str):
        return c
    return str(c)


def _strip_embedded_parent_history(text: str) -> str:
    """If a delegated message embeds PARENT_HISTORY + TASK, return only TASK."""
    s = text.strip()
    if not s.startswith("PARENT_HISTORY:"):
        return text
    # Our delegated format is:
    # PARENT_HISTORY:\n...\n\nTASK:\n<task>
    marker = "\nTASK:\n"
    idx = s.find(marker)
    if idx == -1:
        # Try a more forgiving marker
        idx = s.find("\n\nTASK:\n")
        if idx == -1:
            return text
        marker = "\n\nTASK:\n"
    task = s[idx + len(marker) :].strip()
    return task or text


def _format_history_for_prompt(messages: Iterable[BaseMessage], max_chars: int = 6000) -> str:
    """Serialize messages into a prompt-friendly string (last max_chars)."""
    chunks: list[str] = []
    for m in messages:
        chunks.append(f"[{_role(m)}]\n{_safe_text(m)}")
    joined = "\n\n".join(chunks).strip()
    if len(joined) <= max_chars:
        return joined
    return joined[-max_chars:]


def _tail_messages(messages: list[BaseMessage], n: int) -> list[BaseMessage]:
    if n <= 0:
        return []
    return messages[-n:]


def _print_messages(
    title: str, messages: list[BaseMessage], n: int = 12, *, strip_parent_history: bool = False
) -> None:
    print(f"\n--- {title} (last {n}) ---")
    for m in _tail_messages(messages, n):
        text = _safe_text(m).strip()
        if strip_parent_history and isinstance(m, HumanMessage):
            text = _strip_embedded_parent_history(text).strip()
        text = textwrap.shorten(text.replace("\n", " "), width=180, placeholder=" …")
        print(f"{_role(m):>10}: {text}")
    print("---")


@tool
def word_count(text: str) -> int:
    """Count words in a text (toy research tool)."""
    return len([t for t in text.split() if t.strip()])


@tool
def uppercase(text: str) -> str:
    """Uppercase a string (toy coding tool)."""
    return text.upper()


def main() -> None:
    if load_dotenv is None:
        raise RuntimeError(
            "python-dotenv is not installed. Install it (e.g. `pip install python-dotenv`) "
            "or remove dotenv usage."
        )
    load_dotenv()

    # OpenAI provider is selected by model string "openai:..."
    _must_getenv("OPENAI_API_KEY")
    db_url = _must_getenv("POSTGRES_URI")

    session_id = os.getenv("SESSION_ID", "").strip() or uuid.uuid4().hex

    # IMPORTANT:
    # `checkpoint_ns` is used by LangGraph for *subgraph namespaces* when calling `get_state()`.
    # If you set it on a top-level graph and then call `.get_state()`, LangGraph will try to
    # route into a subgraph namespace and you can get: "ValueError: Subgraph X not found".
    #
    # So, for "separate parent vs child histories", we instead use distinct `thread_id`s.
    parent_config = {"configurable": {"thread_id": f"{session_id}:parent"}}
    research_config = {"configurable": {"thread_id": f"{session_id}:child:research"}}
    coding_config = {"configurable": {"thread_id": f"{session_id}:child:coding"}}

    with ExitStack() as stack:
        # Separate saver instances (one per agent) as requested; same DB underneath.
        parent_saver = stack.enter_context(PostgresSaver.from_conn_string(db_url))
        research_saver = stack.enter_context(PostgresSaver.from_conn_string(db_url))
        coding_saver = stack.enter_context(PostgresSaver.from_conn_string(db_url))

        # IMPORTANT: must be called first time (safe to call repeatedly).
        parent_saver.setup()
        # These share the same tables; calling setup() again is harmless.
        research_saver.setup()
        coding_saver.setup()

        # Child agents (each persisted separately)
        research_agent = create_deep_agent(
            model="openai:gpt-4o-mini",
            system_prompt=(
                "You are a Research specialist.\n"
                "- Be factual and cite assumptions.\n"
                "- If the parent provides a 'PARENT_HISTORY' section, treat it as authoritative context.\n"
            ),
            tools=[word_count],
            checkpointer=research_saver,
            name="research-agent",
        )

        coding_agent = create_deep_agent(
            model="openai:gpt-4o-mini",
            system_prompt=(
                "You are a Coding specialist.\n"
                "- Provide runnable Python snippets when asked.\n"
                "- If the parent provides a 'PARENT_HISTORY' section, use it as context.\n"
            ),
            tools=[uppercase],
            checkpointer=coding_saver,
            name="coding-agent",
        )

        # Parent agent. Note: its built-in `task` tool passes only a new HumanMessage(description)
        # into the subagent. So we instruct the parent to *embed history* into that description.
        parent_agent = create_deep_agent(
            model="openai:gpt-4o-mini",
            system_prompt=(
                "You are an orchestrator agent.\n"
                "You can delegate to specialized subagents via the `task` tool:\n"
                "- subagent_type='research'\n"
                "- subagent_type='coding'\n\n"
                "CRITICAL: When you delegate, include the *relevant conversation history* inside the\n"
                "task `description` you send, in this format:\n"
                "PARENT_HISTORY:\\n<last N messages summarized or quoted>\\n\\nTASK:\\n<what you need>\n"
                "Then, after the subagent returns, synthesize the final answer to the user.\n"
            ),
            tools=[],
            subagents=[
                {
                    "name": "research",
                    "description": "Research questions, fact gathering, summarization.",
                    "runnable": research_agent,
                },
                {
                    "name": "coding",
                    "description": "Python implementation details and code snippets.",
                    "runnable": coding_agent,
                },
            ],
            checkpointer=parent_saver,
            name="parent-orchestrator",
        )

        print(
            textwrap.dedent(
                f"""
                Deep Agents parent/child history demo
                Session: {session_id}

                Commands:
                  /q
                    Quit

                  /parent <message>
                    Send a message to the parent orchestrator (it may delegate using `task`)

                  /delegate <research|coding> <task>
                    Manual delegation that INCLUDES parent history in the child input (Approach A)

                  /prime <research|coding>
                    One-time "message list" priming: copy parent's *actual messages* into the child thread (Approach B)

                  /child-history <research|coding> [n]
                    Print the child's persisted message history (from Postgres checkpointer)

                  /parent-history [n]
                    Print the parent's persisted message history (from Postgres checkpointer)

                Tip: You can just type without a prefix to send to /parent.
                """
            ).strip()
        )

        primed: set[str] = set()

        while True:
            raw = input("\n> ").strip()
            if not raw:
                continue
            if raw in {"/q", "quit", "exit"}:
                break

            if raw.startswith("/parent "):
                user_text = raw[len("/parent ") :].strip()
            elif (
                raw.startswith("/delegate ")
                or raw.startswith("/prime ")
                or raw.startswith("/child-history")
                or raw.startswith("/parent-history")
            ):
                user_text = ""
            else:
                user_text = raw

            # Helpers to fetch current persisted messages
            def get_parent_messages() -> list[BaseMessage]:
                snap = parent_agent.get_state(parent_config)
                values = snap.values if isinstance(snap.values, dict) else {}
                msgs = values.get("messages", [])
                return list(msgs) if isinstance(msgs, list) else []

            def get_child(agent_name: str):
                if agent_name == "research":
                    return research_agent, research_config
                if agent_name == "coding":
                    return coding_agent, coding_config
                return None, None

            def _invalid_agent_name_message(agent_name: str) -> str:
                return (
                    f"Unknown agent '{agent_name}'.\n"
                    "Correct usage examples:\n"
                    "  /delegate research where is France located?\n"
                    "  /delegate coding write a python function to reverse a string\n"
                    "  /child-history research 20\n"
                    "  /prime coding\n"
                )

            if raw.startswith("/parent-history"):
                parts = raw.split()
                n = int(parts[1]) if len(parts) > 1 else 12
                _print_messages("PARENT HISTORY", get_parent_messages(), n=n)
                continue

            if raw.startswith("/child-history"):
                parts = raw.split()
                if len(parts) < 2:
                    print("Usage: /child-history <research|coding> [n]")
                    continue
                agent_name = parts[1].strip()
                n = int(parts[2]) if len(parts) > 2 else 12
                agent, cfg = get_child(agent_name)
                if agent is None or cfg is None:
                    print(_invalid_agent_name_message(agent_name))
                    continue
                snap = agent.get_state(cfg)
                values = snap.values if isinstance(snap.values, dict) else {}
                msgs = list(values.get("messages", []))
                # The manual `/delegate` command embeds the parent's history into each child task
                # (because the built-in `task` tool passes only a new HumanMessage description).
                # When *displaying* child history, we strip that embedded block so the child
                # history reads naturally as: "task -> reply -> task -> reply".
                _print_messages(
                    f"CHILD '{agent_name}' HISTORY",
                    msgs,
                    n=n,
                    strip_parent_history=True,
                )
                continue

            if raw.startswith("/prime "):
                parts = raw.split(maxsplit=1)
                if len(parts) != 2:
                    print("Usage: /prime <research|coding>")
                    continue
                agent_name = parts[1].strip()
                agent, cfg = get_child(agent_name)
                if agent is None or cfg is None:
                    print(_invalid_agent_name_message(agent_name))
                    continue
                if agent_name in primed:
                    print(f"Child '{agent_name}' already primed for this session.")
                    continue

                parent_msgs = get_parent_messages()
                # Keep only user/assistant messages; drop tool/system-ish messages if present.
                parent_msgs = [m for m in parent_msgs if isinstance(m, (HumanMessage, AIMessage))]
                if not parent_msgs:
                    print("No parent messages yet to prime with. Talk to /parent first.")
                    continue

                # Prime child by writing these messages into the child's state (persisted).
                agent.invoke({"messages": parent_msgs}, config=cfg)
                primed.add(agent_name)
                print(f"Primed child '{agent_name}' with {len(parent_msgs)} parent messages.")
                continue

            if raw.startswith("/delegate "):
                parts = raw.split(maxsplit=2)
                if len(parts) < 3:
                    print(
                        "Incorrect command.\n"
                        "Correct usage:\n"
                        "  /delegate <research|coding> <task>\n"
                        "Example:\n"
                        "  /delegate research where is France located?\n"
                    )
                    continue
                agent_name = parts[1].strip()
                task = parts[2].strip()
                agent, cfg = get_child(agent_name)
                if agent is None or cfg is None:
                    print(_invalid_agent_name_message(agent_name))
                    continue

                parent_msgs = get_parent_messages()
                parent_msgs = [m for m in parent_msgs if isinstance(m, (HumanMessage, AIMessage))]

                history_block = _format_history_for_prompt(_tail_messages(parent_msgs, 12))
                child_input = textwrap.dedent(
                    f"""
                    PARENT_HISTORY:
                    {history_block}

                    TASK:
                    {task}
                    """
                ).strip()

                result = agent.invoke({"messages": [HumanMessage(content=child_input)]}, config=cfg)
                reply = result["messages"][-1].content
                print(f"\n[{agent_name} child reply]\n{reply}")
                continue

            # Default: send to parent and let it decide whether to call `task`
            result = parent_agent.invoke({"messages": [HumanMessage(content=user_text)]}, config=parent_config)
            reply = result["messages"][-1].content
            print(f"\n[parent reply]\n{reply}")


if __name__ == "__main__":
    main()

I will go through it and get back to you.



import os
import textwrap
import uuid
from typing import Iterable, List, Optional
from IPython.display import display, Markdown
from deepagents.backends import FilesystemBackend

from langchain_core.messages import AIMessage, BaseMessage, HumanMessage
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from deepagents import create_deep_agent

# NOTE: `model2` below was defined elsewhere in the original notebook;
# define it here so the snippet is self-contained.
model2 = "openai:gpt-4o-mini"

# --- Helper Functions (Preserved from original) ---

def _role(msg: BaseMessage) -> str:
    t = getattr(msg, "type", None)
    if t == "human": return "user"
    if t == "ai": return "assistant"
    return t or "unknown"

def _safe_text(msg: BaseMessage) -> str:
    c = getattr(msg, "content", "")
    return c if isinstance(c, str) else str(c)

def _format_history_for_prompt(messages: Iterable[BaseMessage], max_chars: int = 6000) -> str:
    chunks = [f"[{_role(m)}]\n{_safe_text(m)}" for m in messages]
    joined = "\n\n".join(chunks).strip()
    return joined if len(joined) <= max_chars else joined[-max_chars:]

# --- Tool Definitions ---

@tool
def word_count(text: str) -> int:
    """Count words in a text."""
    return len([t for t in text.split() if t.strip()])

@tool
def uppercase(text: str) -> str:
    """Uppercase a string."""
    return text.upper()

# --- Notebook-Friendly Session Manager ---

class DeepAgentSession:
    def __init__(self, db_url: str=None, session_id: Optional[str] = None):
        self.session_id = session_id or uuid.uuid4().hex
        self.db_url = db_url
        self.primed = set()
        
        # Configurations
        self.parent_config = {"configurable": {"thread_id": f"{self.session_id}:parent"}}
        self.child_configs = {
             "research": {"configurable": {"thread_id": f"{self.session_id}:child:research"}},
             "coding": {"configurable": {"thread_id": f"{self.session_id}:child:coding"}}
         }

        # # Initialize Saver
        # self.saver = PostgresSaver.from_conn_string(self.db_url)
        # self.saver.setup()

        # # Initialize Agents
        self._init_agents()
        print(f"✅ Session initialized: {self.session_id}")

    def _init_agents(self):
        # Child Agents
        self.research_agent = create_deep_agent(
            model= model2, #"openai:gpt-4o-mini",
            system_prompt="You are a Research specialist. Use PARENT_HISTORY if provided.",
            tools=[word_count],
            checkpointer=MemorySaver(),#self.saver,
            name="research-agent",
        )

        self.coding_agent = create_deep_agent(
            model=model2 ,#"openai:gpt-4o-mini",
            system_prompt="You are a Coding specialist. Use PARENT_HISTORY if provided.",
            tools=[uppercase],
            checkpointer=MemorySaver(),# self.saver,
            name="coding-agent",
        )

        # Parent Agent
        self.parent_agent = create_deep_agent(
            model=model2, #"openai:gpt-4o-mini",
            subagents=[
                {"name": "research", "description": "Research specialist", "runnable": self.research_agent},
                {"name": "coding", "description": "Coding specialist", "runnable": self.coding_agent},
            ],
            checkpointer=MemorySaver(),
            name="parent-orchestrator",
            memory=["/content/parent_orchestrator.md"],
            backend=FilesystemBackend(root_dir="/content")
        )

    def chat(self, message: str):
        """Primary method to interact with the parent agent."""
        result = self.parent_agent.invoke(
            {"messages": [HumanMessage(content=message)]}, 
            config=self.parent_config
        )
        reply = result["messages"][-1].content
        display(Markdown(f"**Parent Orchestrator:**\n\n{reply}"))

    def prime_child(self, agent_name: str):
        """Explicitly copy parent history to a child thread (Approach B)."""
        if agent_name not in self.child_configs:
            print(f"❌ Invalid agent: {agent_name}")
            return

        parent_msgs = self.get_history("parent")
        clean_msgs = [m for m in parent_msgs if isinstance(m, (HumanMessage, AIMessage))]
        
        agent = self.research_agent if agent_name == "research" else self.coding_agent
        agent.invoke({"messages": clean_msgs}, config=self.child_configs[agent_name])
        self.primed.add(agent_name)
        print(f"✅ Primed {agent_name} with {len(clean_msgs)} messages.")

    def get_history(self, target: str = "parent", n: int = 5):
        """Fetch and display history for parent or children."""
        if target == "parent":
            config = self.parent_config
            agent = self.parent_agent
        else:
            config = self.child_configs.get(target)
            agent = self.research_agent if target == "research" else self.coding_agent

        snap = agent.get_state(config)
        msgs = snap.values.get("messages", []) if snap.values else []
        
        print(f"\n--- {target.upper()} HISTORY (Last {n}) ---")
        for m in msgs[-n:]:
            print(f"{_role(m):>10}: {textwrap.shorten(_safe_text(m), width=100)}")
        return msgs

done @pawel-twardziak

session = DeepAgentSession(session_id="123")

session.chat("hi")
session.chat("create a python module for executing factorial and write to /output/factorial.py")

session.get_history("parent")

Please try this.

ok, thanks. Will try it soon :slight_smile:

hi @pawel-twardziak any luck

hi @Somanath

I actually don't know what your question is now.

Oh sorry, the question is basically: I can see that the child agent is called,

but I am not able to pass the child config - or am I missing something?

Will a dedicated checkpointer for a subagent work only with Postgres?

Because if I call get_history with the coding agent as the param, I see no results.