Hi,
Context:

- We have a parent agent (main graph) delegating complex work to a sub-agent (subgraph), built with `create_agent`.
- In certain scenarios the subgraph exceeds the `recursion_limit` and raises a `GraphRecursionError`.
- We want the parent agent to detect this error and do one of the following:
  - retry with adjusted parameters,
  - degrade to a safe path, or
  - serialize the error into the graph state and continue without crashing.
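Concretely, the behavior we're after looks like the sketch below. This is a pure-Python illustration only: `run_subagent` stands in for invoking the compiled subgraph (e.g. `subgraph.invoke(state, config={"recursion_limit": ...})`), and the `GraphRecursionError` class here is a local stand-in for `langgraph.errors.GraphRecursionError`. All names are hypothetical.

```python
class GraphRecursionError(RuntimeError):
    """Local stand-in for langgraph.errors.GraphRecursionError."""


def run_subagent(state: dict, recursion_limit: int) -> dict:
    # Stand-in for subgraph.invoke(state, config={"recursion_limit": ...}).
    # Here we just pretend the subgraph needs a limit of at least 50.
    if recursion_limit < 50:
        raise GraphRecursionError(f"Recursion limit of {recursion_limit} reached")
    return {**state, "result": "done"}


def call_subagent_node(state: dict) -> dict:
    """Parent-graph node: retry with a larger limit, then degrade gracefully."""
    for limit in (25, 50):  # bounded retries with adjusted parameters
        try:
            return {**run_subagent(state, limit), "subagent_error": None}
        except GraphRecursionError as exc:
            last_error = exc
    # Safe path: serialize the error into state and continue without crashing.
    return {**state, "result": None, "subagent_error": str(last_error)}
```

The point is that the parent graph only ever sees a normal state update; the exception never propagates past the node boundary.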
-
Questions:

- Is there an official/recommended pattern to catch `GraphRecursionError` at the node/subgraph boundary and return structured state instead of raising?
- Are there hooks/middlewares to centrally handle exceptions (like `GraphRecursionError`) during compile/run?
- Can different `recursion_limit` values be set per subgraph/node, or adjusted dynamically between retries?
- Is there a built-in retry mechanism, or a recommended way to implement bounded retries via conditional edges that avoids re-triggering recursion loops?
- Are diagnostics (e.g., last executed node, edge ID, partial call stack) attached to `GraphRecursionError` to support downstream decisions?
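On the bounded-retries question, the routing function we have in mind for a conditional edge would look roughly like this (pure Python; the function name, node names, and state shape are all hypothetical):

```python
MAX_RETRIES = 2  # hypothetical bound on re-invocations of the subgraph


def route_after_subagent(state: dict) -> str:
    """Conditional-edge router: pick the next node from serialized error state."""
    if state.get("subagent_error") is None:
        return "continue"        # subgraph finished cleanly
    if state.get("attempts", 0) < MAX_RETRIES:
        return "retry_subagent"  # bounded retry, e.g. with a larger recursion_limit
    return "degrade"             # give up and take the safe path
```

The retry node itself would be responsible for incrementing `attempts` and raising the per-call `recursion_limit`, so the router can never loop unboundedly.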
Environment:
[project]
name = "langchainlab"
version = "0.1.0"
description = "Add your description here"
requires-python = ">=3.13, <3.14"
dependencies = [
"black>=25.9.0",
"deepagents>=0.1.3",
"fastapi[standard]>=0.120.1",
"isort>=7.0.0",
"langchain>=1.0.2",
"langchain-deepseek>=1.0.0",
"langchain-openai>=1.0.1",
"langgraph>=1.0.1",
"langgraph-cli[inmem]>=0.4.4",
"python-dotenv>=1.1.1",
"tavily-python>=0.7.12",
]
Code:

from deepagents.middleware.subagents import SubAgentMiddleware
from langchain.agents import create_agent
from langchain.agents.middleware import TodoListMiddleware, SummarizationMiddleware
from src.llm.llm import get_model, get_small_model
from src.middleware import inject_project_context
from src.subagents import codeact_subagent, research_subagent
from src.utils import now_local, get_system_info, format_system_info

system_info = get_system_info()
formatted_system_info = format_system_info(system_info)

agent = create_agent(
    system_prompt="""You are an intelligent task coordinator focused on optimizing programming/code projects. Your job is to analyze user requirements and invoke the most suitable sub-agent to complete tasks.
""",
    model=get_model(),
    middleware=[
        inject_project_context,  # injects project context first (no explicit call required)
        TodoListMiddleware(),
        SummarizationMiddleware(model=get_model(), max_tokens_before_summary=10000),
        SubAgentMiddleware(
            default_model=get_small_model(),
            default_tools=[],
            subagents=[research_subagent, codeact_subagent],
        ),
    ],
)