How to view the actual LLM prompt that decided a tool call in LangSmith

I am having real difficulty understanding how to view the LLM's actual decision to use a tool in LangSmith.

I am using the out-of-the-box agent documented in the official docs: LangChain overview - Docs by LangChain

The issue is that the traces do not show the actual prompt the LLM uses to decide how to respond (whether to “use a tool” or “answer directly”). Seeing it would be very helpful for debugging.

I think I’ve discovered the issue: the agent uses the tool-calling feature of the LLM’s API (in this case Vertex AI). The reason the prompt isn’t shown is that the tool definitions are passed directly to the low-level LLM API as structured data, rather than being rendered into prompt text.
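To illustrate (this is a hypothetical, simplified payload, not the exact Vertex AI wire format): with tool-calling APIs, the tool schemas travel in a separate `tools` field alongside the messages, so there is no single "decision prompt" string for the trace to display — the tool-vs-direct-answer choice happens server-side from this structured request.

```python
import json

# Hypothetical request body in the general shape used by tool-calling
# LLM APIs. Note the tool definition sits next to the messages as
# structured JSON -- it is never interpolated into the prompt text,
# which is why a trace of the "prompt" alone looks incomplete.
request_body = {
    "messages": [
        {"role": "user", "content": "What is the weather in Paris?"}
    ],
    "tools": [
        {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
}

# The model decides between calling get_weather and answering directly
# based on this whole payload, not on a prompt string alone.
print(json.dumps(request_body, indent=2))
```

So when debugging in LangSmith, the closest thing to the "decision prompt" is the full model-invocation input, messages plus bound tool schemas, rather than a rendered text prompt.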

