I am currently using LangChain for the LLM instances in my agent, but sometimes when I run the agent it returns nothing, or it takes too long and returns an error. I can observe this in LangSmith, where I just see it spinning, but I am not sure what is causing the issue. I am using OpenRouter as my provider, and every time I call their API directly I get a response, so I am thinking this could be caused by LangChain. Does anyone have experience with this, and what would you recommend? Also, I am considering moving off LangChain for the LLM instances and using LangGraph with the native APIs instead; would this be plausible?
hi @Art
how do you use OpenRouter through LangChain - via ChatOpenAI with a custom base_url, or via the dedicated langchain-openrouter package?
Hi @pawel-twardziak,
Thanks for the quick response. I use init_chat_model with a custom base_url. I assume this uses ChatOpenAI under the hood, to my understanding.
hi @Art
When you call init_chat_model with a model name and a custom base_url, the function infers the provider from the model name prefix.
For example:
"gpt-4o" → infers "openai" → creates ChatOpenAI
"claude-sonnet-4-5" → infers "anthropic" → creates ChatAnthropic
So if you’re doing something like:
from langchain.chat_models import init_chat_model

llm = init_chat_model(
    "gpt-4o",
    base_url="https://openrouter.ai/api/v1",
    api_key="your-openrouter-key",
)
This creates a ChatOpenAI instance with a custom base_url pointing to OpenRouter.
This setup has two critical default behaviors that cause stalling:
Problem 1: no default timeout - ChatOpenAI sets request_timeout to None by default, so a hung request can spin indefinitely
Problem 2: stream_usage stays disabled with a custom base_url - when a custom base_url is set, ChatOpenAI does not enable stream_usage by default
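Problem 1 can be reproduced without LangChain: with no deadline, a hung call blocks forever, while an explicit deadline fails fast with a clear error. A minimal, library-free sketch of what an explicit timeout buys you (`invoke_with_deadline` is an illustrative helper, not a LangChain API):

```python
import concurrent.futures


def invoke_with_deadline(call, deadline_s=60):
    """Run a blocking call in a worker thread and give up after deadline_s.

    This mimics what an explicit timeout= gives you: the caller gets a
    clear TimeoutError instead of spinning forever on a hung request.
    """
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(call)
    try:
        return future.result(timeout=deadline_s)
    except concurrent.futures.TimeoutError:
        raise TimeoutError(f"call did not finish within {deadline_s}s")
    finally:
        # Don't block waiting for a possibly-hung worker thread.
        pool.shutdown(wait=False)
```

A fast call returns normally; a stalled one raises instead of hanging, which is exactly the behavior the timeout= parameter below restores.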
Add an explicit timeout and retries:
from langchain.chat_models import init_chat_model

llm = init_chat_model(
    "gpt-4o",
    base_url="https://openrouter.ai/api/v1",
    api_key="your-openrouter-key",
    timeout=60,
    max_retries=2,
    stream_usage=True,
)
Or, a better alternative IMHO (though the package itself is quite old): use model_provider="openrouter" instead of a custom base_url:
from langchain.chat_models import init_chat_model

llm = init_chat_model(
    "openai/gpt-4o",
    model_provider="openrouter",
    timeout=30,  # seconds
    max_retries=1,
)
This requires pip install langchain-openrouter