Do we need some middleware, or is there another way to do this?
Have you checked this doc? Models - Docs by LangChain
In practice you just need:
- A tool-calling chat model that allows parallel tool calls (e.g. `ChatOpenAI`, `ChatAnthropic`).
- A set of tools declared via `@tool` (or `BaseTool` instances / provider tools).
- A `create_agent` constructed with that model and tools.
```python
import asyncio

from langchain.agents import create_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def get_weather(city: str) -> str:
    """Return the weather for a city."""
    # ... call your API here ...
    return f"Weather for {city}"


@tool
def get_time(city: str) -> str:
    """Return the local time for a city."""
    # ... call your API here ...
    return f"Time for {city}"


model = ChatOpenAI(model="gpt-4o-mini")  # OpenAI models allow parallel tool calls by default

agent = create_agent(
    model=model,
    tools=[get_weather, get_time],
    system_prompt="You are a helpful travel assistant.",
)

# If the model decides to call both tools in one step, they are run in parallel.
# ainvoke is a coroutine, so it must be awaited (asyncio.run here at top level).
result = asyncio.run(
    agent.ainvoke(
        {"messages": [("user", "For SF and NYC, get the weather and the local time.")]}
    )
)
```