Hi, I am building a multi-agent architecture using LangGraph with the create_agent function.
I have an intent classifier node, a task generator node, and several nodes acting as subagents. The tasks produced by the task generator are distributed to each subagent via a state field.
I've found the token usage is quite high. Is that related to create_agent, or is it an issue with my architecture design?
If you are using the subagents as tools, one issue could be the tool descriptions. Another could be that you are passing the entire context from the main agent to the subagents; maybe you should send only what each one needs (context engineering).
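To illustrate the context-engineering point, here is a minimal sketch (all field and function names are made up, not from the original poster's project): instead of forwarding the whole main-agent state, hand each subagent only the one task it is responsible for.

```python
# Sketch: pass only the fields a subagent needs, not the whole main-agent state.
# Names here (full_state, subagent_input, scratchpad, ...) are hypothetical.

full_state = {
    "messages": ["...long conversation history..."],  # large, costs tokens on every call
    "intent": "travel_planning",
    "tasks": [
        {"id": 1, "description": "find flights", "constraints": {"budget": 500}},
    ],
    "scratchpad": "...intermediate reasoning...",
}

def subagent_input(state: dict, task_id: int) -> dict:
    """Extract just the task the subagent must solve, dropping history and scratchpad."""
    task = next(t for t in state["tasks"] if t["id"] == task_id)
    return {"intent": state["intent"], "task": task}

print(subagent_input(full_state, 1))
# only the intent plus one task reach the subagent's prompt
```

The same idea applies whether the subagent is wrapped as a tool or invoked as a graph node: the token savings come from what you choose to serialize into its prompt.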
Sorry, it's a company project, so I may not be able to share the code. There are multiple tool calls per subagent, as shown in the Langfuse tracing, and it seems the system prompt is loaded into the model multiple times. Is it possible to make the LLM run multiple tools at once instead of one tool at a time?
AFAIK it depends on the provider and on whether the tool calls depend on each other, i.e., whether the current tool call's input depends on the previous tool call's output.
In addition, it is usually better for LLMs to call tools one by one (step by step) in order to get reliable, accurate results.
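To make the dependency point concrete, here is a toy sketch (the tool names are invented): when the calls are independent, a provider that supports parallel tool calling can return several tool calls in one assistant turn, and your executor can run them together, so the system prompt is only sent once for that round-trip. If each call needs the previous call's output, you are forced into one-at-a-time turns.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical tools the model might request in a single assistant message.
def get_weather(city: str) -> str:
    return f"{city}: sunny"

def get_time(city: str) -> str:
    return f"{city}: 14:00"

TOOLS = {"get_weather": get_weather, "get_time": get_time}

# Independent calls (neither input depends on the other's output) can be
# emitted by the model in ONE turn and dispatched concurrently:
tool_calls = [
    {"name": "get_weather", "args": {"city": "Paris"}},
    {"name": "get_time", "args": {"city": "Tokyo"}},
]

with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda c: TOOLS[c["name"]](**c["args"]), tool_calls))

print(results)  # both tool results come back from a single model round-trip
```

Whether the model actually batches calls like this is provider-dependent (e.g., some chat APIs expose a parallel-tool-calls switch), so check your provider's docs before relying on it.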