Hi @Najiya
True, the @traceable decorator and wrap_openai examples in the docs can make it look like LangSmith tracing is OpenAI-specific - it’s absolutely not.
The core setup hasn’t changed. You only need two environment variables:
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=<your-api-key>
LANGSMITH_PROJECT is optional (defaults to "default"), and LANGSMITH_ENDPOINT is only needed if you’re self-hosting - for the hosted service at api.smith.langchain.com you can omit it entirely.
So your .env file should actually look like:
LANGSMITH_TRACING=true
LANGSMITH_API_KEY=lsv2_pt_xxxxxxxxxxxx
LANGSMITH_PROJECT=my-project # optional
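If you'd rather not use a .env file, you can set the same variables in code before importing anything from LangChain or LangSmith (a common pattern in notebooks). A minimal sketch, with a placeholder key:

```python
import os

# Equivalent to the .env file above - must run before LangChain/LangSmith imports.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "lsv2_pt_xxxxxxxxxxxx"  # placeholder, use your real key
os.environ["LANGSMITH_PROJECT"] = "my-project"  # optional
```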
If you’re building with LangChain (chains, agents, tools), you don’t need to do anything extra. LangChain auto-traces all invocations when LANGSMITH_TRACING=true is set. No decorators, no wrappers, no Client() needed:
from dotenv import load_dotenv
load_dotenv()
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{question}"),
])
model = ChatOpenAI(model="gpt-4.1-mini")
chain = prompt | model | StrOutputParser()
# This is automatically traced in LangSmith - no extra code needed!
chain.invoke({"question": "What is LangSmith?"})
This works the same way regardless of the LLM provider - ChatOpenAI, ChatAnthropic, ChatOllama, ChatGoogleGenerativeAI, etc. The tracing comes from LangChain’s callback system, not from the provider.
Under the hood, LangChain’s callback manager checks if tracing is enabled and automatically adds a LangChainTracer to every invocation (source: langchain_core/callbacks/manager.py).
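As a rough mental model of that check (a hypothetical simplification - the real logic in langchain_core also honors the older LANGCHAIN_TRACING_V2 variable and runtime overrides):

```python
import os

def tracing_enabled() -> bool:
    # Simplified sketch: treat "true"/"1" (case-insensitive) as enabled.
    return os.environ.get("LANGSMITH_TRACING", "").lower() in ("true", "1")

os.environ["LANGSMITH_TRACING"] = "true"
enabled = tracing_enabled()  # True while the variable is set to "true"
```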
If you’re not using LangChain - use @traceable
The @traceable decorator works on any Python function, not just OpenAI calls. It creates trace spans in LangSmith:
from dotenv import load_dotenv
load_dotenv()
from langsmith import traceable
@traceable
def my_pipeline(question: str) -> str:
    context = retrieve_docs(question)
    answer = call_my_llm(question, context)
    return answer

@traceable(run_type="retriever", name="Document Retrieval")
def retrieve_docs(question: str) -> list:
    return ["doc1", "doc2"]

@traceable(run_type="llm")
def call_my_llm(question: str, context: list) -> str:
    return "some answer"

my_pipeline("What happened in the meeting?")
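Note how the inner calls show up nested under my_pipeline in the trace. As a rough mental model (a toy sketch, not LangSmith's actual implementation), each decorated function registers its span as a child of whichever span is currently active:

```python
import contextvars
from functools import wraps

# Toy illustration of how nested trace spans form a tree.
ROOT_SPANS = []
_current_span = contextvars.ContextVar("current_span", default=None)

def toy_traceable(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        parent = _current_span.get()
        span = {"name": fn.__name__, "children": []}
        # Attach to the active span if there is one, else record a new root.
        (parent["children"] if parent else ROOT_SPANS).append(span)
        token = _current_span.set(span)
        try:
            return fn(*args, **kwargs)
        finally:
            _current_span.reset(token)
    return wrapper

@toy_traceable
def retrieve(q):
    return ["doc1"]

@toy_traceable
def pipeline(q):
    return retrieve(q)

pipeline("hi")
# ROOT_SPANS now holds one "pipeline" span with a "retrieve" child
```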
Non-OpenAI Provider Wrappers
LangSmith provides dedicated wrappers for other providers too. For example, Anthropic:
import anthropic
from langsmith import traceable
from langsmith.wrappers import wrap_anthropic
client = wrap_anthropic(anthropic.Anthropic())
@traceable(name="Chat Pipeline")
def chat_pipeline(question: str):
    message = client.messages.create(
        model="claude-sonnet-4-6",
        messages=[{"role": "user", "content": question}],
        max_tokens=1024,
    )
    return message

chat_pipeline("Summarize this morning's meetings")