LLM drops parts of user query (like dates) when Controller Agent forwards raw input to sub-agent

Hi everyone,
I’m building a multi-agent system using LangGraph with a Controller Agent that routes user queries to domain-specific sub-agents.

Architecture (simplified example)

Imagine a system for a railway ticket booking assistant:

  • GeneralInfoAgent – answers queries about trains, stations, timings

  • BookingAgent – handles ticket availability, booking, cancellations

  • SupportAgent – handles account/profile issues

  • ControllerAgent – receives user requests and routes them to the correct sub-agent
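The routing described above can be sketched in plain Python. This is a simplified stand-in for the actual LangGraph graph, not code from the post: the keyword-based routing is a hypothetical placeholder for the LLM-based controller decision, and the sub-agent functions are stubs.

```python
# Simplified stand-in for the LangGraph routing graph described above.
# Sub-agent names match the architecture; the keyword routing is a
# hypothetical placeholder for the controller's LLM-based decision.

def general_info_agent(query: str) -> str:
    return f"GeneralInfoAgent handling: {query}"

def booking_agent(query: str) -> str:
    return f"BookingAgent handling: {query}"

def support_agent(query: str) -> str:
    return f"SupportAgent handling: {query}"

def controller_agent(query: str) -> str:
    """Route the raw user query, verbatim, to one sub-agent."""
    lowered = query.lower()
    if any(word in lowered for word in ("book", "ticket", "availability", "cancel")):
        return booking_agent(query)
    if any(word in lowered for word in ("account", "profile", "login")):
        return support_agent(query)
    return general_info_agent(query)
```

Note that each sub-agent receives the query string unchanged; the truncation issue below arises only when an LLM regenerates that string as a tool-call argument.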

The Issue

The Controller agent is instructed to:

  • pass the raw user query verbatim into the tool call

  • not summarize or paraphrase

  • especially not remove dates like “today/tomorrow/day after tomorrow”
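One way such a hand-off tool might be declared is sketched below. This is a hypothetical schema, not the poster's actual definition; only the field name `raw_user_input` comes from the example in this post.

```python
# Hypothetical tool schema for the BookingAgent hand-off. A single
# string field plus an explicit description nudges the controller to
# forward the user's text verbatim.
FORWARD_TO_BOOKING_AGENT = {
    "name": "forward_to_booking_agent",
    "description": (
        "Forward the user's request to the BookingAgent. "
        "Pass the user's message verbatim: do not summarize, paraphrase, "
        "or drop relative dates such as 'today', 'tomorrow', "
        "or 'day after tomorrow'."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "raw_user_input": {
                "type": "string",
                "description": "The user's message, copied exactly.",
            }
        },
        "required": ["raw_user_input"],
    },
}
```

Even with instructions like these in both the system prompt and the parameter description, the model is still free-generating the argument token by token, which is why copies can come out lossy.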

But during execution, part of the user query gets dropped.

Example

User:
“Can you check seat availability for 3 tickets on Train 2105? I plan to travel day after tomorrow.”

Expected tool call:

{
  "raw_user_input": "Can you check seat availability for 3 tickets on Train 2105? I plan to travel day after tomorrow."
}

Actual tool call generated by LLM:

{
  "raw_user_input": "Can you check seat availability for 3 tickets on Train 2105?"
}

Why is the model still dropping parts of the input during tool-call generation?
Is there a recommended pattern to guarantee 100% verbatim forwarding of the user’s text into a tool call in LangGraph?

Any guidance would be appreciated!


Hi @bharathiselvan,

What does your tool definition look like? Could you also share some code?

Hi @bharathiselvan,

This is probably model-dependent. But rather than relying on the LLM to copy the text, do you plan to let tools access context directly? Tools - Docs by LangChain should help here.
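To illustrate the idea: instead of trusting the model to copy the message, you can keep the original user message in graph state and overwrite the generated argument before the tool executes. This is a plain-Python sketch of that pattern; the `state` and tool-call dict shapes are illustrative assumptions, not LangGraph's actual types (in LangGraph itself, injecting state into tools serves a similar purpose).

```python
# Sketch: guarantee verbatim forwarding by ignoring the model-generated
# argument and substituting the original user message kept in state.
# The `state` dict and tool-call shape here are illustrative assumptions.

def run_tool_call(tool_call: dict, state: dict) -> dict:
    """Override raw_user_input with the message stored in graph state."""
    args = dict(tool_call.get("args", {}))
    # The model may have dropped parts of the query; restore the original.
    args["raw_user_input"] = state["original_user_message"]
    return {"name": tool_call["name"], "args": args}

state = {
    "original_user_message": (
        "Can you check seat availability for 3 tickets on Train 2105? "
        "I plan to travel day after tomorrow."
    )
}
# A truncated call like the one in the original post:
truncated_call = {
    "name": "forward_to_booking_agent",
    "args": {
        "raw_user_input": "Can you check seat availability for 3 tickets on Train 2105?"
    },
}
fixed = run_tool_call(truncated_call, state)
# fixed["args"]["raw_user_input"] now matches the full original message.
```

The key point is that the forwarded text never passes through the LLM's token generation at all, so truncation becomes impossible by construction.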