Prevent last LLM call after tool calls

Hi folks,

In a `create_agent` loop, is it possible to prevent the very last LLM call from happening?
For instance, say the model wants to call tool 1, then tool 2 to solve an issue, over 2 rounds of the agentic loop.
I'd like the agent to output the JSON of tool 2 but NOT feed it back to the LLM.
Most of my "final" tools output a comprehensive response that doesn't need a final LLM interpretation and could be sent to the user as is.
I feel like the problem is that the agent loop's termination is based on whether the LLM calls a tool or not, at least the last time I checked the LangGraph ReAct agent template.
Might it be more efficient to let the model output a termination boolean alongside the final tool call to stop the loop? Is that doable in LangChain without falling back to LangGraph?
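For context, the termination rule described above looks roughly like this. This is a plain-Python sketch, not the actual `create_agent` internals — all names here are illustrative: the loop only stops when the LLM emits no tool calls, so every tool result is fed back for one last LLM pass.

```python
# Toy sketch of a ReAct-style agent loop. Termination happens only when the
# model returns no tool calls, which forces a final LLM call after the last tool.

def run_agent_loop(model, tools):
    messages = []
    while True:
        ai_msg = model(messages)                # LLM call
        messages.append(ai_msg)
        tool_calls = ai_msg.get("tool_calls", [])
        if not tool_calls:                      # termination: no tool call
            return ai_msg["content"]            # output of the final LLM call
        for call in tool_calls:
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": result})

# Scripted stand-in for the LLM: one tool call, then a final answer.
def scripted_model(messages):
    if not any(m.get("role") == "tool" for m in messages):
        return {"role": "ai", "content": "",
                "tool_calls": [{"name": "lookup", "args": {"q": "x"}}]}
    return {"role": "ai", "content": "final summary", "tool_calls": []}

print(run_agent_loop(scripted_model, {"lookup": lambda q: "raw result"}))
# → final summary  (the extra LLM pass I'd like to skip)
```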

hi @eric-burel

Use the `return_direct` prop of the `tool` decorator or a `BaseTool` subclass: return_direct | langchain_core | LangChain Reference
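Rough sketch of what `return_direct` changes in the loop: when a tool flagged `return_direct` fires, its raw output is returned immediately and the final LLM pass is skipped. In real LangChain you flag the tool itself, e.g. `@tool(return_direct=True)`; the plain-Python loop below is an illustrative stand-in, not the library's implementation.

```python
# Toy loop where tools listed in direct_tools short-circuit the agent loop,
# mimicking return_direct: their raw output goes straight to the user.

def run_loop_return_direct(model, tools, direct_tools):
    messages = []
    while True:
        ai_msg = model(messages)
        messages.append(ai_msg)
        tool_calls = ai_msg.get("tool_calls", [])
        if not tool_calls:
            return ai_msg["content"]
        for call in tool_calls:
            result = tools[call["name"]](**call["args"])
            if call["name"] in direct_tools:     # return_direct: stop here,
                return result                    # no final LLM interpretation
            messages.append({"role": "tool", "content": result})

def always_calls_tool(messages):
    # Stand-in LLM that always asks for the "solve" tool.
    return {"role": "ai", "content": "",
            "tool_calls": [{"name": "solve", "args": {}}]}

print(run_loop_return_direct(always_calls_tool,
                             {"solve": lambda: '{"answer": 42}'},
                             direct_tools={"solve"}))
# → {"answer": 42}  (raw tool JSON, no last LLM call)
```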


Interesting, it's a step forward, but it means you need to define certain tools that are specifically treated as "end tools" that kill the agent loop. That works for some use cases, but I still feel the agent loop itself isn't 100% efficient; I wonder whether some model providers output this kind of "done" flag.

Do you build with a custom state graph, or with `create_agent`/`create_deep_agent`?

I am looking for a `create_agent` solution. I suspect it wouldn't be too hard to implement with a custom LangGraph graph (though there might be many gotchas), but I wonder about LangChain's `create_agent`, which is newer to me.

Then look at the middleware functionality: Custom middleware - Docs by LangChain - you can control everything with this feature.
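The middleware idea, very roughly: a hook runs around model/tool steps and can decide to end the loop, returning the tool output directly. The hook name and control-flow mechanism below are illustrative only, a plain-Python sketch — check the Custom middleware docs linked above for the actual `create_agent` hook signatures.

```python
# Toy loop with a middleware-style hook: after each tool execution the hook
# can return "end" to terminate the loop, skipping the last LLM call.

def run_loop_with_hook(model, tools, after_tool):
    messages = []
    while True:
        ai_msg = model(messages)
        messages.append(ai_msg)
        tool_calls = ai_msg.get("tool_calls", [])
        if not tool_calls:
            return ai_msg["content"]
        for call in tool_calls:
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": result})
            if after_tool(call["name"], result) == "end":  # middleware decision
                return result                              # skip the final LLM call

# A hook that ends the loop whenever the (hypothetical) "report" tool ran.
def stop_after_report(tool_name, result):
    return "end" if tool_name == "report" else "continue"

model = lambda msgs: {"role": "ai", "content": "",
                      "tool_calls": [{"name": "report", "args": {}}]}
print(run_loop_with_hook(model, {"report": lambda: "full report"}, stop_after_report))
# → full report
```

Unlike `return_direct`, the hook can look at the tool's output (not just its name) before deciding to stop, which is closer to the "done flag" behaviour discussed above.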