Hello LangChain Team,
We are seeking guidance on reasoning transparency after migrating away from deprecated agent components in LangChain.
We are building a JSON-based navigation agent using LangChain v0.3.23 and LangGraph. The agent was previously implemented via create_json_agent, which internally relied on ZeroShotAgent. Since ZeroShotAgent has been deprecated, we are migrating to the newer create_agent framework.
Previous Behavior (ZeroShotAgent)
With ZeroShotAgent, the agent explicitly surfaced its step-by-step reasoning in a structured ReAct-style format:
- Thought – rationale for the current step
- Action – selected tool
- Action Input – input passed to the tool
- Observation – tool output
This provided clear visibility into:
- Why a specific tool was selected
- How intermediate decisions were made
- How the agent navigated the JSON structure step by step toward a final decision
Using LangGraph, we were able to stream and persist this reasoning reliably, even for medium to large JSON documents.
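For context, the trace we relied on could be recovered directly from the model's raw text. The sketch below is illustrative only, not LangChain's actual output parser: it shows how a single ReAct-style step decomposes into the fields listed above (json_spec_list_keys is one of the JSON toolkit's real tool names).

```python
import re

# Minimal sketch: parse one ReAct-style step into its parts, mirroring the
# Thought / Action / Action Input format ZeroShotAgent emitted.
# Illustrative only -- not the actual LangChain output parser.
REACT_STEP = re.compile(
    r"Thought:\s*(?P<thought>.*?)\s*"
    r"Action:\s*(?P<action>.*?)\s*"
    r"Action Input:\s*(?P<action_input>.*)",
    re.DOTALL,
)

def parse_react_step(text: str) -> dict:
    """Extract thought, action, and action input from one reasoning step."""
    match = REACT_STEP.search(text)
    if match is None:
        raise ValueError("Text does not follow the Thought/Action/Action Input format")
    return {key: value.strip() for key, value in match.groupdict().items()}

step = parse_react_step(
    "Thought: I need to list the top-level keys first.\n"
    "Action: json_spec_list_keys\n"
    "Action Input: data"
)
# step == {"thought": "I need to list the top-level keys first.",
#          "action": "json_spec_list_keys",
#          "action_input": "data"}
```

Because every step arrived in this fixed textual shape, streaming and persisting the trace was a matter of forwarding these parsed records.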
Current Behavior (create_agent)
After migrating to create_agent, we observe the following:
- For small JSON inputs, tool execution works correctly and produces the expected final output.
- Intermediate Thought / Action / Observation steps are no longer exposed by default.
- For medium and large JSON inputs, most intermediate reasoning steps are not surfaced.
- The cumulative output lacks step-level visibility into how decisions were made during JSON traversal.
At present, the final structured response and tool-level interactions (such as key access and tool inputs) are visible when streaming via astream; however, we have no insight into why specific keys were traversed, how conditions were evaluated, or how decisions were reached.
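As a workaround, we currently post-process the streamed output ourselves. The sketch below runs on plain dicts shaped like the update chunks we observe from astream; the chunk schema here is our own approximation, not a documented contract. It shows that Action, Action Input, and Observation can be reconstructed from tool calls and tool messages, but the Thought is nowhere in the stream.

```python
# Sketch: flatten streamed agent updates into a readable step trace.
# The dict shapes approximate what we observe from astream with updates-style
# streaming -- they are our assumption, not a stable LangChain schema.
def trace_from_updates(updates: list[dict]) -> list[str]:
    """Turn streamed node updates into Action / Action Input / Observation lines."""
    lines = []
    for update in updates:
        for node, payload in update.items():
            for msg in payload.get("messages", []):
                # AI messages carry tool calls (the "Action" / "Action Input")
                for call in msg.get("tool_calls", []):
                    lines.append(f"Action: {call['name']}")
                    lines.append(f"Action Input: {call['args']}")
                # Tool messages carry results (the "Observation")
                if msg.get("type") == "tool":
                    lines.append(f"Observation: {msg['content']}")
    return lines

updates = [
    {"agent": {"messages": [{"type": "ai", "tool_calls": [
        {"name": "json_spec_list_keys", "args": {"input": "data"}}]}]}},
    {"tools": {"messages": [{"type": "tool",
        "content": "['users', 'orders']"}]}},
]
print("\n".join(trace_from_updates(updates)))
```

Note what is missing: there is no field from which a Thought line can be recovered, which is exactly the gap this post is about.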
Requirement
We are looking to restore the same level of reasoning transparency that existed with ZeroShotAgent, specifically:
- Visibility into Thoughts before each step
- Explicit Action and Action Input selection
- Observable tool outputs (Observations)
- A clear step-wise reasoning trace compatible with LangGraph streaming and logging
This level of transparency is critical for understanding why specific JSON keys or values are selected, as well as for debugging, auditing, and validation, particularly in complex JSON navigation workflows.
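Concretely, the record we would like to populate per step looks like the following; the class and field names are ours, purely illustrative of the desired output.

```python
from dataclasses import dataclass, asdict

@dataclass
class ReasoningStep:
    """One step of the trace we want to stream and persist per agent turn.
    Hypothetical structure -- names are ours, not a LangChain type."""
    thought: str        # why the agent chose this step (currently missing)
    action: str         # tool selected
    action_input: str   # input passed to the tool
    observation: str    # tool output

step = ReasoningStep(
    thought="The answer should be under the 'users' key.",
    action="json_spec_get_value",
    action_input='data["users"]',
    observation="[{'id': 1, 'name': 'Ada'}]",
)
# asdict(step) yields a JSON-serialisable record suitable for streaming
# callbacks or audit logs.
```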
Questions
- Is it possible to surface reasoning in the same Thought → Action → Action Input → Observation format when using create_agent?
- Are there architectural changes in create_agent that intentionally abstract away or limit this behavior?
- What is the recommended approach for designing agents with create_agent while preserving step-level reasoning visibility, especially for larger JSON-based navigation use cases?
Any guidance or clarification would be greatly appreciated.
Thank you for your time and support.