Hiding parts of input on LangSmith, when using LangGraph Platform

Hi,

We want to avoid logging some parts of the input to LangSmith. Is it possible with this setup?

We are deploying our app to LangGraph Platform. We define our graphs with LangGraph and use LangChain's libraries to invoke LLMs (ChatOpenAI, ChatAnthropic, etc.).
When the graphs run, every node's input and output gets stored in LangSmith.

This is a sample of what our code looks like:

# State definitions
from typing_extensions import TypedDict


class GraphInput(TypedDict):
    text: str
    big_text: str  # a larger input


class GraphOutput(TypedDict):
    verdict: str


class GraphState(GraphInput, GraphOutput):
    hidden_text: str


# Example of a node
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI


async def example_node(state: GraphState, config) -> dict:
    chain = ChatOpenAI() | StrOutputParser()

    res = await chain.ainvoke(state["text"] + state["big_text"])

    return {"hidden_text": res}

For this example, I'm interested in hiding the values (or the entire keys) of big_text and hidden_text in the LangSmith traces.

How can we do that? I don't want to hide all inputs and outputs, just some of these keys.

I've found these docs (Prevent logging of sensitive data in traces | 🦜️🛠️ LangSmith), but there are no examples of using this with LangGraph Platform + the LangChain libraries, where tracing is enabled automatically.

Thanks

For LangGraph Platform, there's currently no built-in way to selectively hide specific keys from traces while keeping others visible. The platform doesn't support the custom serializers or tracer configurations that work locally, and the environment variables LANGCHAIN_HIDE_INPUTS=true and LANGCHAIN_HIDE_OUTPUTS=true hide everything, not specific keys.

Your only option is to sanitize sensitive data at the source, inside your nodes: either remove the sensitive fields from your state schema entirely, or overwrite them with placeholder values like "[REDACTED]" before returning state updates. You could also create wrapper functions that strip sensitive keys before they get logged, but this requires manually managing what gets stored in state versus what gets passed to your LLM calls.
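A minimal sketch of the "overwrite with a placeholder" approach, in plain Python. The helper name redact and the SENSITIVE_KEYS set are my own for illustration, not a LangGraph or LangSmith API; a node would call the LLM with the real values, then pass its state update through this helper before returning it:

```python
# Hypothetical helper: mask sensitive keys in a state update so the real
# values never appear in the node's traced output (or in the traced input
# of downstream nodes that read them from state).
SENSITIVE_KEYS = {"big_text", "hidden_text"}


def redact(update: dict) -> dict:
    """Return a copy of a state update with sensitive values masked."""
    return {
        k: ("[REDACTED]" if k in SENSITIVE_KEYS else v)
        for k, v in update.items()
    }


# Example: mask what gets written back to state after the LLM call.
print(redact({"text": "ok", "hidden_text": "secret model output"}))
```

The tradeoff noted above applies: once a value is redacted in state, downstream nodes can no longer read the original, so you have to keep the real value outside the graph state (e.g. in an external store keyed by run ID) if anything else needs it.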


"there's currently no built-in way to selectively hide specific keys from traces while keeping others visible"

Is this on the roadmap?

Thanks