My name is Raaghav, and I am a software engineer and an ex-founder.
I am interested in contributing to LangGraph. Before I start writing code, I wanted to check whether the effort would be well spent.
I’ve been studying the recent Recursive Language Models (RLM) work on inference-time scaling for arbitrarily long prompts, and I believe there’s an opportunity to introduce an opt-in “environment-mediated recursive executor” within LangGraph that aligns with its graph-based orchestration model.
The idea would not be to modify default agent behavior, but to provide a new executor pattern that:
• Treats large prompts or tool outputs as an external environment (chunked + indexed)
• Allows the model to iteratively retrieve, decompose, and solve over bounded snippets
• Enforces hard constraints (max depth, max tool calls, token/cost budgets, timeouts)
• Provides deterministic termination and full execution traces
• Benchmarks against flat-context and naive retrieval agents
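To make the first three bullets concrete, here is a minimal, dependency-free sketch of the environment mechanics: a large prompt becomes an indexed chunk store, and a retrieval loop works over bounded snippets under a hard tool-call cap. The names (`chunk_env`, `solve`) and the substring match standing in for model-driven retrieval are illustrative assumptions, not LangGraph APIs.

```python
# Hypothetical sketch of the environment-mediated loop; names are
# illustrative, not part of LangGraph's API.

def chunk_env(text: str, size: int = 200) -> dict[int, str]:
    """Chunk a large prompt into an indexed 'environment' to query."""
    return {i: text[start:start + size]
            for i, start in enumerate(range(0, len(text), size))}

def solve(env: dict[int, str], query: str, max_calls: int = 8) -> list[int]:
    """Iteratively retrieve bounded snippets matching `query`,
    terminating deterministically once the call budget is spent."""
    hits, calls = [], 0
    for idx, snippet in env.items():
        if calls >= max_calls:   # hard constraint: bounded tool calls
            break
        calls += 1
        if query in snippet:     # stand-in for model-driven retrieval
            hits.append(idx)
    return hits

log = "error: disk full\n" * 50 + "error: OOM kill\n"
env = chunk_env(log, size=40)
relevant = solve(env, "OOM", max_calls=100)
```

In the real executor each `solve` step would be a graph node, and the call counter would live in a shared budget object rather than a local variable.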
LangGraph already supports cyclical graphs and recursion limits, which makes it a natural substrate for this. What’s missing today is a standardized, budget-aware recursive execution pattern that scales beyond model context windows in a principled way.
I would propose building this as:
A new opt-in executor/graph template (e.g., RecursiveEnvironmentExecutor)
A structured memory layer (chunk graph with provenance)
A BudgetManager enforcing cost and depth constraints
A benchmark suite demonstrating improvements on long-log and tool-heavy workflows
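As a rough illustration of the BudgetManager component, here is a self-contained sketch that enforces max depth, tool-call, and token budgets and raises to guarantee termination. All class and field names are assumptions for discussion, not an existing LangGraph interface.

```python
# Hypothetical BudgetManager sketch; names are assumptions, not a
# LangGraph API.
from dataclasses import dataclass, field

class BudgetExceeded(RuntimeError):
    """Raised when a hard constraint is hit, guaranteeing termination."""

@dataclass
class BudgetManager:
    max_depth: int = 4
    max_tool_calls: int = 32
    max_tokens: int = 50_000
    tool_calls: int = field(default=0, init=False)
    tokens: int = field(default=0, init=False)

    def enter(self, depth: int) -> None:
        """Check recursion depth before descending into a subproblem."""
        if depth > self.max_depth:
            raise BudgetExceeded(f"depth {depth} exceeds {self.max_depth}")

    def charge(self, tokens: int) -> None:
        """Record one tool call and its token cost; fail fast on overrun."""
        self.tool_calls += 1
        self.tokens += tokens
        if self.tool_calls > self.max_tool_calls:
            raise BudgetExceeded("tool-call budget exhausted")
        if self.tokens > self.max_tokens:
            raise BudgetExceeded("token budget exhausted")
```

In the executor, every node entry would call `enter` and every model or tool invocation would call `charge`, so exhaustion surfaces as a single catchable exception with a full trace of what was spent.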
Would this contribution be useful? I'm happy to start building, but I wanted to get some feedback first.
This sounds like a strong, well-aligned contribution: LangGraph already supports cyclic graphs and recursion control but lacks a standard, budget-aware recursive executor pattern. I would suggest opening a design discussion issue first to confirm the scope with the maintainers, but this direction is definitely useful and not a waste of time.