Hi, I am learning LangChain tools, and I followed this code:

from langchain.tools import tool
from langchain_google_genai import ChatGoogleGenerativeAI

# Define the tool
@tool(description="Get the current weather in a given location")
def get_weather(location: str) -> str:
    return "It's sunny."

# Initialize the model and bind the tool
llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash-lite")
llm_with_tools = llm.bind_tools([get_weather])

# Invoke the model with a query that should trigger the tool
query = "What's the weather in San Francisco?"
ai_msg = llm_with_tools.invoke(query)

# Check the tool calls in the response
print(ai_msg.tool_calls)

# Example tool call message would be needed here if you were actually running the tool
from langchain.messages import ToolMessage

tool_message = ToolMessage(
    content=get_weather(*ai_msg.tool_calls[0]["args"]),
    tool_call_id=ai_msg.tool_calls[0]["id"],
)
llm_with_tools.invoke([ai_msg, tool_message])  # Example of passing tool result back

from this link: https://docs.langchain.com/oss/python/integrations/chat/google_generative_ai
and I got this error:

     30 from langchain.messages import ToolMessage
     32 tool_message = ToolMessage(
---> 33     content=get_weather(*ai_msg.tool_calls[0]["args"]),
     34     tool_call_id=ai_msg.tool_calls[0]["id"],
     35 )
     36 llm_with_tools.invoke([ai_msg, tool_message])  # Example of passing tool result back

TypeError: 'StructuredTool' object is not callable
I have updated my LangChain version to the latest; I think the documentation has a problem. Thanks!
Hi @Khim3, thank you for flagging the issue. The correct code is below. I will update the documentation to fix it!
from langchain.tools import tool
from langchain.messages import HumanMessage, ToolMessage
from langchain_google_genai import ChatGoogleGenerativeAI

# Define the tool
@tool(description="Get the current weather in a given location")
def get_weather(location: str) -> str:
    return "It's sunny."

# Initialize and bind (potentially multiple) tools to the model
model_with_tools = ChatGoogleGenerativeAI(model="gemini-2.5-flash-lite").bind_tools([get_weather])

# Step 1: Model generates tool calls
messages = [HumanMessage("What's the weather in Boston?")]
ai_msg = model_with_tools.invoke(messages)
messages.append(ai_msg)

# Check the tool calls in the response
print(ai_msg.tool_calls)

# Step 2: Execute tools and collect results
for tool_call in ai_msg.tool_calls:
    # Execute the tool with the generated arguments; invoking a @tool with a
    # tool call returns a ToolMessage carrying the matching tool_call_id
    tool_result = get_weather.invoke(tool_call)
    messages.append(tool_result)

# Step 3: Pass results back to model for final response
final_response = model_with_tools.invoke(messages)
How could I bind many tools? For example:

@tool
def exponentiate(x: float, y: float) -> float:
    """Raise 'x' to the 'y'."""
    return x**y

@tool
def add(x: float, y: float) -> float:
    """Add 'x' and 'y'."""
    return x + y

The example shown in the document seems to be attached to one tool (get_weather). Is there a way that I could write many tools so the LLM can decide what to call automatically?
For example, if I call [HumanMessage("What's 2+3 and then multiply the result by 4?")], would the model call 2 tools?
Bind multiple tools like this:
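The code snippet that followed here did not come through, so here is a minimal sketch of the idea. In LangChain the binding itself would be `llm.bind_tools([add, exponentiate])`; since each tool call the model emits carries a `name` field, the execution step can then dispatch on that name instead of hard-coding one tool. The registry dict and `run_tool_call` helper below are my own illustration (the model call is mocked out), not a LangChain API:

```python
# The same two tools as in the question (decorators omitted so the
# sketch runs without LangChain installed).
def add(x: float, y: float) -> float:
    """Add 'x' and 'y'."""
    return x + y

def exponentiate(x: float, y: float) -> float:
    """Raise 'x' to the 'y'."""
    return x ** y

# In LangChain: llm_with_tools = llm.bind_tools([add, exponentiate])
# Here we mimic the execution step with a plain name -> function registry.
TOOLS = {"add": add, "exponentiate": exponentiate}

def run_tool_call(tool_call: dict) -> float:
    """Execute whichever tool the model chose, selected by its name."""
    return TOOLS[tool_call["name"]](**tool_call["args"])

# Tool calls shaped like the dicts in ai_msg.tool_calls:
print(run_tool_call({"name": "add", "args": {"x": 2.0, "y": 3.0}}))           # 5.0
print(run_tool_call({"name": "exponentiate", "args": {"x": 2.0, "y": 4.0}}))  # 16.0
```

With real LangChain tools you would loop over `ai_msg.tool_calls` and call `TOOLS[tool_call["name"]].invoke(tool_call)`, so the model, not your code, decides which tool runs.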
Yep, but when calling it you have to call it like add.invoke(message). Based on that code we have to call the add function directly, not the 2 tools, right?
I think I don't understand what your aim is. Could you clarify, please?
Let's say I have 2 tools: add (sum of 2 integers) and multiply (product of 2 integers). How could I instruct the LLM to answer this prompt: "What's 2+2, then multiplied by 4?" after having bound the 2 tools? The LLM has to call the add function first, then multiply. But the given code only uses 1 tool (the add tool) when invoking, i.e. add.invoke(prompt), while the prompt needs 2 tools.
I believe you can use a ReAct agent (from langchain.agents import create_agent) for multiple tool calls from the same query, and modify the prompt in a way that forces tool calls:
from langchain.tools import tool
from langchain.agents import create_agent
from langchain.chat_models import init_chat_model
from langchain_core.messages import HumanMessage

model_kwargs = {
    "temperature": 1.0,
    "api_key": "dummy",
    "azure_endpoint": "dummy",
    "openai_api_version": "latest"
}
llm = init_chat_model("azure_openai:o4-mini", model_kwargs=model_kwargs)

@tool
def multiply(a: int, b: int) -> int:
    """
    Multiply two numbers.
    :param a:
    :param b:
    :return: a * b
    """
    return a * b

@tool
def add(a: int, b: int) -> int:
    """
    Add two numbers.
    :param a:
    :param b:
    :return: a + b
    """
    return a + b

agent = create_agent(model=llm, tools=[add, multiply])
agent.invoke({"messages": [HumanMessage(content="Using tools, Add 2 and 2 and then multiply by 4")]})
Response
{'messages': [HumanMessage(content='Using tools, Add 2 and 2 and then multiply by 4', additional_kwargs={}, response_metadata={}, id='6b35d017-5754-45fa-a99e-09fef60a3b5a'),
AIMessage(content='', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 280, 'prompt_tokens': 110, 'total_tokens': 390, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 256, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_provider': 'openai', 'model_name': 'o4-mini-2025-04-16', 'system_fingerprint': None, 'id': 'chatcmpl-CbSXPANHO6k6ClyfawTDSn1gWPasB', 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'tool_calls', 'logprobs': None, 'content_filter_results': {}}, id='lc_run--0793fe02-0860-4d66-99e0-32f76fdb8ab5-0', tool_calls=[{'name': 'add', 'args': {'a': 2, 'b': 2}, 'id': 'call_AEV8ifSywqcm4rX0iluEcSF2', 'type': 'tool_call'}], usage_metadata={'input_tokens': 110, 'output_tokens': 280, 'total_tokens': 390, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 256}}),
ToolMessage(content='4', name='add', id='ec6891b0-f184-49a2-9853-ccf3fe47735a', tool_call_id='call_AEV8ifSywqcm4rX0iluEcSF2'),
AIMessage(content='', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 27, 'prompt_tokens': 141, 'total_tokens': 168, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_provider': 'openai', 'model_name': 'o4-mini-2025-04-16', 'system_fingerprint': None, 'id': 'chatcmpl-CbSXV89Li5e2SSlMxKH27WDxRivVl', 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'tool_calls', 'logprobs': None, 'content_filter_results': {}}, id='lc_run--5e8ea9e1-fd4f-4af3-a3ec-d86e76d5319d-0', tool_calls=[{'name': 'multiply', 'args': {'a': 4, 'b': 4}, 'id': 'call_XakshJTtPpRDK9NmNpGbmqaW', 'type': 'tool_call'}], usage_metadata={'input_tokens': 141, 'output_tokens': 27, 'total_tokens': 168, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}),
ToolMessage(content='16', name='multiply', id='9124d138-4bc8-4660-a348-a599764126be', tool_call_id='call_XakshJTtPpRDK9NmNpGbmqaW'),
AIMessage(content='The result of adding 2 and 2, then multiplying by 4, is 16.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 33, 'prompt_tokens': 172, 'total_tokens': 205, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_provider': 'openai', 'model_name': 'o4-mini-2025-04-16', 'system_fingerprint': None, 'id': 'chatcmpl-CbSXWx4ZSXNx6u1VfIULIBDLKPe6J', 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}, id='lc_run--37f19eee-eaea-4ea9-80dd-200c81d8d7fc-0', usage_metadata={'input_tokens': 172, 'output_tokens': 33, 'total_tokens': 205, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]}
I think it is because you are using a simple LLM call.
You need a ReAct agent, which will first call add and then, based on the result, call the multiply tool in the second turn.
Use create_agent (the ReAct agent) from LangChain v1 for that.
A single LLM call cannot make both tool calls in one shot, since it does not know the result of the first operation (add). That is why you need two rounds to answer your prompt, and for that you should use a ReAct agent.
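The two-round flow described above can be sketched without any framework. The `fake_model` below is a hypothetical stand-in for the LLM (not a LangChain API): on each round it sees the previous tool results before deciding its next tool call, which is exactly why add must finish before multiply can run:

```python
def add(a: int, b: int) -> int:
    return a + b

def multiply(a: int, b: int) -> int:
    return a * b

TOOLS = {"add": add, "multiply": multiply}

def fake_model(messages):
    """Hypothetical stand-in for the LLM: plans add first, then multiply
    using the previous tool result, then produces the final answer."""
    tool_results = [m["content"] for m in messages if m["role"] == "tool"]
    if not tool_results:
        # Round 1: no results yet, so ask for the addition first
        return {"tool_call": {"name": "add", "args": {"a": 2, "b": 2}}}
    if len(tool_results) == 1:
        # Round 2: feed the addition result into the multiplication
        return {"tool_call": {"name": "multiply", "args": {"a": tool_results[0], "b": 4}}}
    return {"content": f"The answer is {tool_results[-1]}."}

# The agent loop: keep calling the model and executing its tool calls
# until it answers with plain content instead of another tool call.
messages = [{"role": "user", "content": "What's 2+2 then multiplied by 4?"}]
while True:
    reply = fake_model(messages)
    if "tool_call" not in reply:
        break
    call = reply["tool_call"]
    messages.append({"role": "tool", "content": TOOLS[call["name"]](**call["args"])})

print(reply["content"])  # The answer is 16.
```

create_agent runs this same loop for you: model call, tool execution, result appended to the messages, repeat until the model stops requesting tools.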