Hi @OZOOOOOH,
I think there are a few approaches for that.
### 1: Native Deep Agents routing (the `task` tool + orchestrator pattern)

Deep Agents has built-in routing through the `task` tool and the subagent system. It’s effectively the “Orchestrator Pattern” you described as option 3, but it’s natively supported.
When you pass `subagents` to `create_deep_agent()`:
- the library automatically registers each subagent with the built-in `task` tool
- the orchestrator LLM sees all subagent names and descriptions
- the LLM decides which subagent to invoke based on the user prompt
- each subagent runs in an isolated context and returns its result as a `ToolMessage`
Each subagent’s `structured_response` is isolated: it is explicitly excluded from the parent state via `_EXCLUDED_STATE_KEYS` in the subagent middleware (source: `subagents.py` L126):

```python
_EXCLUDED_STATE_KEYS = {
    "messages",
    "todos",
    "structured_response",
    "skills_metadata",
    "memory_contents",
}
```
This means different subagents can each have their own Pydantic schema without schema conflicts. The parent (orchestrator) agent receives each subagent’s result as a plain `ToolMessage` containing the string representation of the structured response.
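The exclusion mechanism is essentially a key filter on the subagent’s final state. Here is a minimal stdlib sketch of the idea (illustrative only; the real filtering lives in deepagents’ subagent middleware and does more than this):

```python
# Sketch: how excluded keys keep a subagent's structured_response and
# private transcript out of the parent (orchestrator) state.
_EXCLUDED_STATE_KEYS = {
    "messages", "todos", "structured_response",
    "skills_metadata", "memory_contents",
}

def merge_subagent_state(parent_state: dict, subagent_state: dict) -> dict:
    """Copy back only the keys that are not excluded."""
    updates = {k: v for k, v in subagent_state.items() if k not in _EXCLUDED_STATE_KEYS}
    return {**parent_state, **updates}

subagent_final = {
    "messages": ["...subagent transcript..."],
    "structured_response": {"risk_assessment": "low"},
    "files": {"notes.txt": "shared artifact"},
}
merged = merge_subagent_state({"messages": ["...parent transcript..."]}, subagent_final)
# "files" survives; "structured_response" and the subagent's "messages" do not
```

Because each specialist’s `structured_response` never merges into the shared state, two specialists with incompatible Pydantic schemas never collide.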
### Implementation Using `CompiledSubAgent`

To give each specialized agent its own structured output schema, use `CompiledSubAgent`. This lets you pre-compile each specialist with its own `response_format`:
```python
from pydantic import BaseModel, Field
from langchain.agents import create_agent
from langchain.agents.structured_output import ToolStrategy
from deepagents.graph import create_deep_agent
from deepagents.middleware.subagents import CompiledSubAgent


class LegalAnalysis(BaseModel):
    """Structured output for the legal domain."""
    case_summary: str = Field(description="Brief summary of the legal case")
    applicable_laws: list[str] = Field(description="List of relevant laws")
    risk_assessment: str = Field(description="Risk level: low, medium, high")
    recommendation: str = Field(description="Recommended course of action")


class FinancialReport(BaseModel):
    """Structured output for the finance domain."""
    revenue: float = Field(description="Total revenue")
    expenses: float = Field(description="Total expenses")
    net_income: float = Field(description="Net income")
    key_insights: list[str] = Field(description="Key financial insights")


class MedicalDiagnosis(BaseModel):
    """Structured output for the medical domain."""
    symptoms: list[str] = Field(description="Reported symptoms")
    possible_conditions: list[str] = Field(description="Possible conditions")
    recommended_tests: list[str] = Field(description="Recommended diagnostic tests")
    urgency: str = Field(description="Urgency level: routine, urgent, emergency")


# The *_tool objects below are placeholders for your own tools.
legal_agent = create_agent(
    model="anthropic:claude-sonnet-4-6",
    tools=[legal_search_tool, case_law_tool],
    response_format=ToolStrategy(schema=LegalAnalysis),
)

finance_agent = create_agent(
    model="openai:gpt-4.1",
    tools=[financial_data_tool, market_tool],
    response_format=ToolStrategy(schema=FinancialReport),
)

medical_agent = create_agent(
    model="anthropic:claude-sonnet-4-6",
    tools=[medical_db_tool, symptom_checker_tool],
    response_format=ToolStrategy(schema=MedicalDiagnosis),
)

orchestrator = create_deep_agent(
    model="anthropic:claude-sonnet-4-6",
    system_prompt=(
        "You are a routing orchestrator. Analyze the user's request "
        "and delegate to the appropriate specialist agent. "
        "After receiving results, synthesize a clear response."
    ),
    subagents=[
        CompiledSubAgent(
            name="legal-specialist",
            description="Handles legal questions, case analysis, and compliance reviews.",
            runnable=legal_agent,
        ),
        CompiledSubAgent(
            name="finance-specialist",
            description="Handles financial analysis, reporting, and market insights.",
            runnable=finance_agent,
        ),
        CompiledSubAgent(
            name="medical-specialist",
            description="Handles medical symptom analysis and diagnostic recommendations.",
            runnable=medical_agent,
        ),
    ],
)

result = orchestrator.invoke(
    {"messages": [{"role": "user", "content": "What are the legal risks of our new product launch?"}]}
)
```
When you run this:
- the orchestrator LLM sees the descriptions of all three specialists
- it recognizes this is a legal question and calls `task(description="...", subagent_type="legal-specialist")`
- the legal specialist runs with its own `LegalAnalysis` schema and produces a structured response
- the result comes back to the orchestrator as a `ToolMessage` (string representation)
- the orchestrator synthesizes and presents the result to the user
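The string-representation step above is worth making concrete: the orchestrator’s LLM never sees the Pydantic object itself, only its string form inside the tool message. A tiny stand-in sketch (the class here fakes a Pydantic model’s repr; names are illustrative):

```python
# Illustrative only: how a structured response ends up as the string
# content of the ToolMessage the orchestrator reads.
class FakeLegalAnalysis:
    def __init__(self, risk_assessment: str, recommendation: str):
        self.risk_assessment = risk_assessment
        self.recommendation = recommendation

    def __repr__(self) -> str:
        # Mimics Pydantic's default repr shape
        return (f"LegalAnalysis(risk_assessment={self.risk_assessment!r}, "
                f"recommendation={self.recommendation!r})")

specialist_result = FakeLegalAnalysis("medium", "add indemnification clauses")
tool_message_content = str(specialist_result)  # what the orchestrator's LLM reads
```

If you need the typed object on the orchestrator side (not just its string form), you would have to re-parse the tool message content back into the schema yourself.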
### Can You Also Use Declarative SubAgent Dicts?

Yes, but with a caveat: the `SubAgent` TypedDict does not have a `response_format` field. If you use declarative subagents, you cannot directly set a structured output schema on them. You have two options:
- use `CompiledSubAgent`: pre-compile each with its own `response_format` as shown above
- use `create_deep_agent` for each subagent: build each as a full Deep Agent with `response_format`, then wrap it as a `CompiledSubAgent`:
```python
legal_deep_agent = create_deep_agent(
    model="anthropic:claude-sonnet-4-6",
    tools=[legal_search_tool],
    system_prompt="You are a legal analysis specialist.",
    response_format=LegalAnalysis,
)

CompiledSubAgent(
    name="legal-specialist",
    description="Handles legal questions.",
    runnable=legal_deep_agent,
)
```
### 2: LangGraph StateGraph with conditional edges
This approach gives you maximum control over routing logic but requires more manual wiring. It’s best when you need deterministic routing rules (not just LLM-based routing) or complex graph topologies.
```python
from typing import Annotated, Literal, TypedDict
import operator

from langchain_core.messages import BaseMessage
from langgraph.graph import END, START, StateGraph


class RouterState(TypedDict):
    messages: Annotated[list[BaseMessage], operator.add]
    domain: str
    result: str


def classify_domain(state: RouterState) -> dict:
    """Use an LLM to classify which domain the query belongs to."""
    messages = state["messages"]
    # classifier_llm is a lightweight chat model you instantiate yourself
    classification = classifier_llm.invoke(
        f"Classify this query into one of: legal, finance, medical.\n"
        f"Query: {messages[-1].content}\n"
        f"Domain:"
    )
    return {"domain": classification.content.strip().lower()}


def route_to_specialist(
    state: RouterState,
) -> Literal["legal_agent", "finance_agent", "medical_agent", "fallback"]:
    domain = state["domain"]
    if domain == "legal":
        return "legal_agent"
    elif domain == "finance":
        return "finance_agent"
    elif domain == "medical":
        return "medical_agent"
    return "fallback"


def legal_node(state: RouterState) -> dict:
    result = legal_agent.invoke({"messages": state["messages"]})
    return {"result": result["structured_response"].model_dump_json()}


def finance_node(state: RouterState) -> dict:
    result = finance_agent.invoke({"messages": state["messages"]})
    return {"result": result["structured_response"].model_dump_json()}


def medical_node(state: RouterState) -> dict:
    result = medical_agent.invoke({"messages": state["messages"]})
    return {"result": result["structured_response"].model_dump_json()}


def fallback_node(state: RouterState) -> dict:
    return {"result": "Sorry, I could not classify that request."}


graph = StateGraph(RouterState)
graph.add_node("classifier", classify_domain)
graph.add_node("legal_agent", legal_node)
graph.add_node("finance_agent", finance_node)
graph.add_node("medical_agent", medical_node)
graph.add_node("fallback", fallback_node)

graph.add_edge(START, "classifier")
graph.add_conditional_edges("classifier", route_to_specialist)
graph.add_edge("legal_agent", END)
graph.add_edge("finance_agent", END)
graph.add_edge("medical_agent", END)
graph.add_edge("fallback", END)

router = graph.compile()
```

Note the `fallback` node: since `route_to_specialist` can return `"fallback"` for an unrecognized domain, that node must exist in the graph or routing will fail at runtime.
Pros: Full control over routing logic, can combine LLM + rule-based routing, explicit graph topology.
Cons: More boilerplate, you manage schema differences manually, no built-in parallelism.
For parallel routing to multiple specialists, use LangGraph’s `Send` primitive:

```python
from langgraph.types import Send


def route_to_multiple(state: RouterState):
    """Route to multiple specialists in parallel."""
    return [
        Send("legal_agent", state),
        Send("finance_agent", state),
    ]


graph.add_conditional_edges("classifier", route_to_multiple)
```
### 3: Hybrid - Deep Agents orchestrator + LangGraph routing
You can also combine both: use a LangGraph StateGraph for the overall routing topology but use Deep Agents for each specialist node. This is useful when you want deterministic routing logic (LangGraph) with powerful agent capabilities (Deep Agents) at each node.
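The hybrid shape can be sketched with stdlib stand-ins (illustrative only: the rule-based `classify` stands in for a LangGraph classifier node, and each entry in `specialists` stands in for a compiled Deep Agent invoked inside a graph node):

```python
# Sketch of the hybrid pattern: deterministic routing on the outside,
# agent-like callables on the inside. Real code would use a LangGraph
# StateGraph and deepagents-built specialists here.
def classify(query: str) -> str:
    """Stand-in for the classifier node (rule-based here, LLM in practice)."""
    if "contract" in query or "lawsuit" in query:
        return "legal"
    if "revenue" in query or "budget" in query:
        return "finance"
    return "medical"

specialists = {
    "legal": lambda q: {"domain": "legal", "answer": f"legal analysis of: {q}"},
    "finance": lambda q: {"domain": "finance", "answer": f"financial report for: {q}"},
    "medical": lambda q: {"domain": "medical", "answer": f"triage notes for: {q}"},
}

def route(query: str) -> dict:
    """Stand-in for the conditional edge: classify, then dispatch."""
    return specialists[classify(query)](query)

result = route("Review this contract for indemnification risk")
```

The payoff of this split: the routing decision stays testable and deterministic, while each specialist keeps the full Deep Agents toolbox (planning, filesystem, its own subagents) inside its node.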
### Which approach to use?

| Criteria | Native Deep Agents (Approach 1) | LangGraph StateGraph (Approach 2) |
|---|---|---|
| Ease of setup | Easiest - just define subagents | More manual wiring |
| Routing logic | LLM-driven (agent picks the specialist) | Explicit (conditional edges + classifier) |
| Structured output isolation | Automatic (`_EXCLUDED_STATE_KEYS`) | Manual (you handle schema differences) |
| Parallel subagent execution | Built-in (multiple `task` calls in one turn) | Via the `Send` primitive |
| Complex graph topologies | Limited to single-level delegation | Full graph expressiveness |
| Best for | Most use cases | Complex workflows needing explicit control |
For your use case (routing to specialized agents with different structured outputs), Approach 1 with `CompiledSubAgent` is the most straightforward: the structured output isolation is handled automatically, and you get parallel execution for free.