> but… Are your sub-agents representations of the ReAct pattern? I doubt it, seeing AgentRunnable. Could you elaborate on that?
Trying to answer your questions, @soorajvarrma:
1. Is Runnable as model= supported in LangChain v1 create_agent()?
No - not directly. In LangChain v1, create_agent is typed and implemented to accept:
- a model string, which is immediately resolved via `init_chat_model(...)`, or
- a `BaseChatModel` instance
You can see this in the LangChain v1 source: `create_agent(model: str | BaseChatModel, ...)`, and it does `if isinstance(model, str): model = init_chat_model(model)` before building the agent loop (langchain/libs/langchain_v1/langchain/agents/factory.py).
This isn’t just a typing limitation: v1’s create_agent specifically creates an LLM/tool-calling loop (call model → if AIMessage.tool_calls then run tools → repeat), so the “model” is expected to behave like a chat model.
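For reference, a minimal sketch of the two accepted forms (the model string is an arbitrary example, and tools are omitted for brevity):

```python
from langchain.agents import create_agent
from langchain.chat_models import init_chat_model

# Accepted: a model string, resolved internally via init_chat_model(...).
agent_from_string = create_agent(model="openai:gpt-4o-mini")

# Accepted: a BaseChatModel instance you construct yourself.
agent_from_instance = create_agent(model=init_chat_model("openai:gpt-4o-mini"))

# Not accepted: an arbitrary Runnable in the model= slot.
```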
2. Recommended migration path for deterministic Runnable “sub-agents”
In LangChain v1 terms, what you describe (“deterministic worker agent that does routing and domain logic”) maps more cleanly to tools and/or graph nodes, not the model= slot.
You have three practical migration options, depending on what your Runnable actually returns:
Option A (most common): convert your Runnable worker into a tool
LangChain v1 create_agent(..., tools=...) explicitly accepts tools as BaseTool | Callable | dict (see tools: Sequence[BaseTool | Callable | dict[str, Any]] in the v1 source).
If your worker needs access to the conversation state/messages to make deterministic decisions, use v1’s runtime/state injection for tools:
```python
from langchain_core.tools import tool
from langchain.tools import ToolRuntime


@tool
def domain_worker(task: str, runtime: ToolRuntime) -> str:
    messages = runtime.state["messages"]
    # deterministic logic / routing using messages
    return "result for the supervisor"
```
This pattern is exercised in v1’s own tests (runtime injection into tools) and is part of the supported tool interface (langchain/libs/langchain_v1/tests/unit_tests/agents/test_injected_runtime_create_agent.py).
If you already have a Runnable object (not a “function that returns a Runnable”), langchain-core also supports converting a Runnable to a tool via the tool(...) helper (see Convert Python functions and Runnables to LangChain tools in langchain/libs/core/langchain_core/tools/convert.py).
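For illustration, here is a minimal sketch of that conversion using `Runnable.as_tool(...)` from langchain-core; the worker logic and names below are made-up stand-ins for your real Runnable:

```python
from langchain_core.runnables import RunnableLambda


# Hypothetical deterministic worker; stands in for your existing Runnable.
def route(inputs: dict) -> str:
    return f"routed: {inputs['task']}"


worker = RunnableLambda(route)

# as_tool() builds a BaseTool from the Runnable; for dict-input runnables,
# arg_types tells it how to derive the tool's input schema.
domain_worker = worker.as_tool(
    name="domain_worker",
    description="Deterministic routing/domain logic worker.",
    arg_types={"task": str},
)
```

The resulting tool can then be passed straight into `create_agent(..., tools=[domain_worker])`.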
Option B: wrap your deterministic policy as a BaseChatModel
If your old “runnable agent” truly behaves like “the model” (i.e., it returns an AIMessage, potentially with tool_calls), then the closest mechanical migration is to wrap that logic in a custom BaseChatModel implementation.
That keeps you within v1’s contract (model is a chat model), while still being deterministic/no-network. This is the closest equivalent to “a non-LLM model that still speaks AIMessage + tool calls”.
This is more work than Option A, but it preserves the “agent loop is driven by my policy” shape.
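A minimal sketch of that shape follows. Note the assumptions: the `domain_lookup` tool name and the call-once-then-finish routing are purely illustrative, and overriding `bind_tools` as a no-op is a shortcut that works only because the policy hard-codes its tool choice:

```python
from typing import Any, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_core.messages import AIMessage, BaseMessage
from langchain_core.outputs import ChatGeneration, ChatResult


class DeterministicPolicyModel(BaseChatModel):
    """No-network "model": replies come from a fixed, deterministic policy."""

    @property
    def _llm_type(self) -> str:
        return "deterministic-policy"

    def bind_tools(self, tools: Any, **kwargs: Any) -> "DeterministicPolicyModel":
        # create_agent binds tools onto the model; this policy already knows
        # which tool it wants, so binding is a no-op here.
        return self

    def _generate(
        self,
        messages: list[BaseMessage],
        stop: Optional[list[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        if any(m.type == "tool" for m in messages):
            # A tool result is already present: finish with a plain AIMessage.
            ai = AIMessage(content="done")
        else:
            # First pass: deterministically request one tool call.
            ai = AIMessage(
                content="",
                tool_calls=[{
                    "id": "call_1",
                    "name": "domain_lookup",
                    "args": {"query": str(messages[-1].content)},
                }],
            )
        return ChatResult(generations=[ChatGeneration(message=ai)])
```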
Option C: build your own LangGraph ReAct-style loop, using your existing AgentRunnable as the policy node
If what you really want is to preserve the graph architecture (nodes + routing) while keeping a deterministic “agent policy” that emits AIMessage/tool calls, you can build a minimal LangGraph loop yourself:
- policy node: calls your deterministic AgentRunnable and appends an `AIMessage`
- tools node: `ToolNode(...)` executes tool calls and appends `ToolMessage`s
- conditional edge: if the last `AIMessage` has tool calls, go to tools, else stop
This mirrors the same conceptual loop used by create_agent / ReAct, but the “model call” is replaced by your policy.
Concise example:
```python
from typing import Annotated, Sequence, TypedDict

from langchain_core.messages import AIMessage, BaseMessage
from langchain_core.tools import tool
from langgraph.graph import END, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


class AgentState(TypedDict):
    # IMPORTANT: the reducer merges message lists across node updates
    messages: Annotated[Sequence[BaseMessage], add_messages]


@tool
def domain_lookup(query: str) -> str:
    """Deterministic domain lookup."""
    return f"domain_result(query={query})"


tools = [domain_lookup]
tool_node = ToolNode(tools)


def agent_policy(state: AgentState) -> dict[str, list[BaseMessage]]:
    # Your deterministic policy. It should return an AIMessage:
    # - with tool_calls=[...] to request tool execution, OR
    # - with no tool_calls to finish (see the termination gotcha below).
    last = state["messages"][-1] if state["messages"] else None
    if last is not None and last.type == "tool":
        # A tool already answered: emit a final AIMessage so the loop stops.
        return {"messages": [AIMessage(content=f"Final answer: {last.content}")]}
    # Otherwise, call domain_lookup on the latest message content.
    ai = AIMessage(
        content="Calling tool",
        tool_calls=[
            {
                "id": "call_1",
                "name": "domain_lookup",
                "args": {"query": str(last.content) if last else ""},
            }
        ],
    )
    return {"messages": [ai]}


g = StateGraph(AgentState)
g.add_node("policy", agent_policy)
g.add_node("tools", tool_node)
g.set_entry_point("policy")
g.add_conditional_edges("policy", tools_condition, {"tools": "tools", END: END})
g.add_edge("tools", "policy")
graph = g.compile()

# Run:
# result = graph.invoke({"messages": [{"role": "user", "content": "foo"}]})
```
Gotchas / sharp edges:
- You must use a message reducer: your state should define `messages` with the `add_messages` reducer; otherwise returning `{"messages": [ai]}` from a node may overwrite/lose history instead of appending.
- Tool-call bookkeeping must be correct (shown concretely after this list):
  - If your policy emits `AIMessage.tool_calls`, every one must eventually get a matching `ToolMessage` with the same `tool_call_id`.
  - `ToolNode` handles this pairing, but only if your tool calls are well-formed (have `id`, `name`, `args`).
- Termination condition: your policy must eventually emit an `AIMessage` with no `tool_calls` (or an empty list), or your graph will loop forever.
- Keep the contract “node returns state updates”: nodes should return partial state updates like `{"messages": [ai]}` (a list), not a raw `AIMessage`, and not a Runnable object.
- Tool signatures and names must match: the tool `name` in a tool call must match the registered tool name (for `@tool` functions this is typically the function name unless overridden).
- If you were returning `RunnableLambda(...)` before: in this pattern your `agent_policy` should return the final `AIMessage` directly (wrapped in the `{"messages": [...]}` update). Don’t return another runnable from inside the node.
- Optional features: if you previously relied on checkpointers, interrupts, store, or streaming, you can add those when compiling/executing the graph — but keep the core loop simple first.
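To make the bookkeeping bullet concrete, this is the pairing contract in message form (the ids and values are illustrative):

```python
from langchain_core.messages import AIMessage, ToolMessage

# A well-formed tool call carries id, name, and args...
request = AIMessage(
    content="",
    tool_calls=[{"id": "call_1", "name": "domain_lookup", "args": {"query": "foo"}}],
)

# ...and must eventually be answered by a ToolMessage whose tool_call_id
# matches that id. ToolNode produces this message for you.
response = ToolMessage(content="domain_result(query=foo)", tool_call_id="call_1")
```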
This option is closest to “LangGraph nodes can be deterministic runnable/policy components”, while still giving you a ReAct-like tool loop.
3. Does LangChain v1 require only the supervisor to be an LLM?
No, but create_agent is shaped around the LLM/tool-calling loop, so anything you build with create_agent expects a chat-model-like object in its model= slot.
For multi-agent, v1’s create_agent returns a compiled graph and is explicitly designed to be embeddable as a subgraph node (“useful for building multi-agent systems” in the v1 docstring); see the composition sketch after the bullets below. That said:
- If a component is deterministic routing/business logic, the v1-native representation is usually a tool (Option A) or a graph node, not “a model”.
- If you want “nodes can be arbitrary runnables,” that’s fundamentally LangGraph’s sweet spot; LangChain v1 agents focus on the tool-calling loop driven by a chat model.
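To make the subgraph point concrete, here is a minimal composition sketch: an LLM-backed agent from create_agent plus a deterministic node, wired together in one parent LangGraph graph. The model string, tool, and the `audit` node are illustrative assumptions:

```python
from langchain.agents import create_agent
from langchain_core.messages import AIMessage
from langchain_core.tools import tool
from langgraph.graph import END, START, MessagesState, StateGraph


@tool
def domain_lookup(query: str) -> str:
    """Deterministic domain lookup."""
    return f"domain_result(query={query})"


# LLM-backed supervisor: create_agent returns a compiled graph.
supervisor = create_agent(model="openai:gpt-4o-mini", tools=[domain_lookup])


def audit(state: MessagesState) -> dict:
    # Deterministic business logic as a plain graph node; no model involved.
    return {"messages": [AIMessage(content=f"[audited] {state['messages'][-1].content}")]}


parent = StateGraph(MessagesState)
parent.add_node("supervisor", supervisor)  # compiled agent embedded as a subgraph node
parent.add_node("audit", audit)
parent.add_edge(START, "supervisor")
parent.add_edge("supervisor", "audit")
parent.add_edge("audit", END)
app = parent.compile()
```

This works because the parent graph and the create_agent subgraph share the `messages` state key, which is the standard LangGraph subgraph pattern.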