Can we make parallel tool calls in create_agent? If yes, how? If not, is there another way to do it?

Do we need some middleware, or how can we do this?

hi @yogen_pradhan

Have you checked this doc? Models - Docs by LangChain

In practice you just need:

  1. A tool-calling chat model that allows parallel tool calls (e.g. ChatOpenAI, ChatAnthropic).
  2. A set of tools declared via @tool (or BaseTool instances / provider tools).
  3. A create_agent constructed with that model and tools.

from langchain.agents import create_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def get_weather(city: str) -> str:
    """Return the weather for a city."""
    # ... call your API here ...
    return f"Weather for {city}"

@tool
def get_time(city: str) -> str:
    """Return the local time for a city."""
    # ... call your API here ...
    return f"Time for {city}"

model = ChatOpenAI(model="gpt-4o-mini")  # OpenAI models allow parallel tool use by default

agent = create_agent(
    model=model,
    tools=[get_weather, get_time],
    system_prompt="You are a helpful travel assistant.",
)

# If the model decides to call both tools in one step, they are executed in parallel
result = agent.invoke(
    {"messages": [("user", "For SF and NYC, get the weather and the local time.")]}
)
# From async code, use: result = await agent.ainvoke(...)