How to enable LangSmith tracing in my project?

Can you explain how to enable LangSmith tracing in my project?

Usually, what I do is create a .env file with the following variables:

LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT=https://api.smith.langchain.com
LANGSMITH_API_KEY=.....
LANGSMITH_PROJECT=.....

Then in my project I load them with:

from dotenv import load_dotenv
from langsmith import Client

load_dotenv()  # reads the variables from .env into the environment

Is this still the correct way to configure it, or is there a newer recommended approach?

Hi @Najiya

Yes, this is more or less enough. I’m not sure you need LANGSMITH_ENDPOINT.

I’m dropping some links:

Earlier I used to enable tracing the way I described, and it was working fine. But at some point it stopped tracing.

When I checked the documentation recently, I saw examples using Client() and the @traceable decorator along with the OpenAI client. But what if I’m not using OpenAI or don’t have an OpenAI API key - how should I enable LangSmith tracing?

Hi @Najiya

True, the @traceable decorator and wrap_openai examples in the docs can make it look like LangSmith tracing is OpenAI-specific - it’s absolutely not.

The core setup hasn’t changed. You only need two environment variables:

export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=<your-api-key>

LANGSMITH_PROJECT is optional (defaults to "default"), and LANGSMITH_ENDPOINT is only needed if you’re self-hosting - for the hosted service at api.smith.langchain.com you can omit it entirely.
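The fallback can be sketched with a tiny helper (resolve_project is a hypothetical illustration of the documented default, not a real SDK function):

```python
import os

def resolve_project(env=None):
    """Illustrate the documented fallback: LANGSMITH_PROJECT if set, else "default"."""
    env = os.environ if env is None else env
    return env.get("LANGSMITH_PROJECT", "default")

print(resolve_project({}))                                   # default
print(resolve_project({"LANGSMITH_PROJECT": "my-project"}))  # my-project
```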

So your .env file should actually look like:

LANGSMITH_TRACING=true
LANGSMITH_API_KEY=lsv2_pt_xxxxxxxxxxxx
LANGSMITH_PROJECT=my-project   # optional
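If you’re wondering what load_dotenv actually does with that file: it essentially parses KEY=VALUE lines into os.environ without overriding variables that are already set. A rough sketch (load_env_file is a hypothetical stand-in, not the python-dotenv implementation):

```python
import os

def load_env_file(path):
    # Rough approximation of python-dotenv: read KEY=VALUE lines,
    # skip blanks and comments, and don't override existing variables.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

The real load_dotenv also handles quoting, export prefixes, and variable interpolation, so use the library in practice rather than this sketch.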

If you’re building with LangChain (chains, agents, tools), you don’t need to do anything extra. LangChain auto-traces all invocations when LANGSMITH_TRACING=true is set. No decorators, no wrappers, no Client() needed:

from dotenv import load_dotenv
load_dotenv() 

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{question}")
])
model = ChatOpenAI(model="gpt-4.1-mini")
chain = prompt | model | StrOutputParser()

# This is automatically traced in LangSmith - no extra code needed!
chain.invoke({"question": "What is LangSmith?"})

This works the same way regardless of the LLM provider - ChatOpenAI, ChatAnthropic, ChatOllama, ChatGoogleGenerativeAI, etc. The tracing comes from LangChain’s callback system, not from the provider.

Under the hood, LangChain’s callback manager checks if tracing is enabled and automatically adds a LangChainTracer to every invocation (source: langchain_core/callbacks/manager.py).
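You can approximate that check yourself. This is a simplified, hypothetical version - the real logic in langchain_core is more involved and also honors the older LANGCHAIN_TRACING_V2 variable name:

```python
import os

def tracing_is_enabled(env=None):
    # Simplified sketch: tracing is considered on when LANGSMITH_TRACING
    # (or the legacy LANGCHAIN_TRACING_V2) is set to "true".
    env = os.environ if env is None else env
    return any(
        env.get(var, "").lower() == "true"
        for var in ("LANGSMITH_TRACING", "LANGCHAIN_TRACING_V2")
    )

print(tracing_is_enabled({"LANGSMITH_TRACING": "true"}))  # True
print(tracing_is_enabled({}))                             # False
```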

If you’re not using LangChain - use @traceable

The @traceable decorator works with any Python function, not just OpenAI calls. It creates trace spans in LangSmith:

from dotenv import load_dotenv
load_dotenv()

from langsmith import traceable

@traceable
def my_pipeline(question: str) -> str:
    context = retrieve_docs(question)
    answer = call_my_llm(question, context)
    return answer

@traceable(run_type="retriever", name="Document Retrieval")
def retrieve_docs(question: str) -> list:
    return ["doc1", "doc2"]

@traceable(run_type="llm")
def call_my_llm(question: str, context: str) -> str:
    return "some answer"

my_pipeline("What happened in the meeting?")

Non-OpenAI Provider Wrappers

LangSmith provides dedicated wrappers for other providers too. For example, Anthropic:

import anthropic
from langsmith import traceable
from langsmith.wrappers import wrap_anthropic

client = wrap_anthropic(anthropic.Anthropic())

@traceable(name="Chat Pipeline")
def chat_pipeline(question: str):
    message = client.messages.create(
        model="claude-sonnet-4-6",
        messages=[{"role": "user", "content": question}],
        max_tokens=1024,
    )
    return message

chat_pipeline("Summarize this morning's meetings")

I referred to one of the links you shared and tried the following approach:

import langsmith as ls

# You can create a client instance with an api key and api url
client = ls.Client(
    api_key="YOUR_API_KEY",  # This can be retrieved from a secrets manager
    api_url="https://api.smith.langchain.com",
)

# You can pass the client and project_name to the tracing_context
with ls.tracing_context(client=client, project_name="test-no-env", enabled=True):
    chain.invoke({"question": "Am I using a callback?", "context": "I'm using a callback"})

With this setup, tracing is now working and I’m able to see the runs in LangSmith.

Thank you! :slight_smile:

Great if it helps, @Najiya :slight_smile: One favor - please mark this post as Solved so others can benefit from it too :slight_smile: Thanks in advance!
