Hi, I am currently using LangSmith for my deployment, but it looks like it is adding a ton of latency when I call model providers. Specifically, I am using Claude Haiku right now, and while the latency on Anthropic's side says ~3.2 seconds, it took ~12.8 seconds for LangChain to process and turn around the request. How can I reduce this added latency? I can privately send the trace ID. Thanks!
Hello! Could you send the trace ID and deployment ID to my email (will at langchain dot dev), please? Would love to investigate to get a bit more context.