Hello, I'm using code like:
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

llm1 = ChatOpenAI(
    model=model_name,
    openai_api_key="vllm-key",
    openai_api_base=inference_server_url,
    max_tokens=1000,
    temperature=0.3,
    extra_body={
        "chat_template_kwargs": {"enable_thinking": False},
    },
)
llm2 = ChatOpenAI(
    model=model_name,
    openai_api_key="vllm-key",
    openai_api_base=inference_server_url,
    max_tokens=1000,
    temperature=0.3,
    extra_body={
        "chat_template_kwargs": {"enable_thinking": False},
        "stop": ["\n"],
    },
)

if __name__ == "__main__":
    response1 = llm1.invoke([HumanMessage("give me a short poem by Yeats, separated by line breaks")])
    response2 = llm2.invoke([HumanMessage("give me a short poem by Yeats, separated by line breaks")])
    print(response1.content)
    print("#" * 50)
    print(response2.content)
Then I got a result like:
D:\Code\langchain\.venv\Scripts\python.exe D:\Code\langchain\agent\character\llm.py
Sure! Here's a short poem by W.B. Yeats, *"The Lake Isle of Innisfree"*, separated by line breaks as requested:
I will arise and go now, and go to Innisfree,
And a small cabin build there, of clay and twigs.
Nine bean-rows will I have there, a hive for the honey-bee,
And live alone in the bare land there, on the lake island of Innisfree.
And I shall have some peace there, for the sounds of peace,
A lake water lapping with low sounds by the shore;
While peace comes dropping by, dropping from the veils of the morning,
And evening full of the linnet's wings.
I will arise and go now, for always night and day
I hear lake water lapping with low sounds by the shore;
While Ballylee is a ruin, and the towers crumble slowly,
Yet peace comes dropping by, dropping from the veils of the morning.
##################################################
Sure! Here's a short poem by W.B. Yeats, *"The Lake Isle of Innisfree"*, separated by line breaks as requested:
Process finished with exit code 0
But it seems I can't get the stop_reason from the response object, because if I print response2, I just get:
AIMessage(
    content='Sure! Here\'s a short poem by W.B. Yeats, *"The Lake Isle of Innisfree"*, separated by line breaks as requested:',
    additional_kwargs={'refusal': None},
    response_metadata={
        'token_usage': {
            'completion_tokens': 31,
            'prompt_tokens': 26,
            'total_tokens': 57,
            'completion_tokens_details': None,
            'prompt_tokens_details': None
        },
        'model_provider': 'openai',
        'model_name': 'qwen3-32b-bnb-4bit',
        'system_fingerprint': None,
        'id': 'chatcmpl-62daf15d21304bb58b242dcaec3fbd81',
        'finish_reason': 'stop',
        'logprobs': None
    },
    id='lc_run--b63d046e-3244-4f87-8312-e949aed7a0bb-0',
    usage_metadata={
        'input_tokens': 26,
        'output_tokens': 31,
        'total_tokens': 57,
        'input_token_details': {},
        'output_token_details': {}
    }
)
There is no stop_reason inside it, only finish_reason. However, I did find stop_reason in the raw HTTP response, e.g. "stop_reason": "\n" or "stop_reason": null. I need it to determine which specific stop token triggered the termination. What should I do?
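For reference, when I hit the server directly I can read the field out of the JSON myself. Here is a minimal sketch of what I mean (the extract_stop_reason helper and the sample payload are my own, trimmed to mirror the vLLM HTTP response I saw; they are not part of any library API):

from typing import Optional

def extract_stop_reason(raw: dict) -> Optional[str]:
    """Pull vLLM's non-standard stop_reason out of a raw chat completion payload."""
    choice = raw["choices"][0]
    # stop_reason sits next to finish_reason in vLLM's response;
    # it is the matched stop token, or None for EOS/length stops.
    return choice.get("stop_reason")

# Trimmed sample of the raw HTTP response body for response2
sample = {
    "choices": [
        {
            "index": 0,
            "finish_reason": "stop",
            "stop_reason": "\n",
            "message": {"role": "assistant", "content": "Sure! Here's a short poem..."},
        }
    ]
}

print(extract_stop_reason(sample))  # -> "\n"

So the information is there on the wire; I just can't find where (or whether) ChatOpenAI surfaces it, since response_metadata only carries finish_reason.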