createAgent: How to skip tool handling and return only the AI message

Hi,

Is there any way, when using the createAgent function, to end the graph as soon as the AI message includes tool calls and return only the AI's message?

I tried using jumpTo: 'end' in the afterModel middleware, but I got the following error:

Error: 400 An assistant message with tool_calls must be followed by tool messages responding to each tool_call_id.

It looks like the graph isn’t routing to the end node as expected.

code:

const getWeather = tool(({ location }) => `Weather in ${location}: Sunny, 72°F`, {
  name: "get_weather",
  description: "Get weather information for a location",
  schema: z.object({
    location: z.string().describe("The location to get weather for"),
  }),
});

const agent = createAgent({
  model: new ChatOpenAI({
    model: "gpt-5-nano",
    apiKey: process.env.OPENAI_API_KEY ?? "",
    reasoning: {
      effort: "low",
    },
    verbosity: "low",
  }),
  tools: [getWeather],
  middleware: [
    createMiddleware({
      name: "afterModelJumpToEndMiddleware",
      afterModel: {
        canJumpTo: ["end"],
        hook: async (state) => {
          const lastMessage = state.messages.at(-1);
          if (AIMessage.isInstance(lastMessage)) {
            const { tool_calls } = lastMessage;
            if (tool_calls?.length) {
              return {
                jumpTo: "end",
              };
            }
          }
          return;
        },
      },
    }),
  ],
});

const response = await agent.invoke({
  messages: [new HumanMessage("What's the weather in SF?")],
});

Thank you

Hi @rwu,

You cannot skip tool handling. When an LLM wants to call a tool, it responds with a message containing tool_calls, each with a unique ID, and on the next request the provider requires ToolMessages corresponding to those IDs. That is exactly what the 400 error is telling you.
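To make the constraint concrete, here is a small self-contained sketch of the invariant the provider enforces. The types and the helper function are illustrative only (they are not LangChain or OpenAI APIs): every tool_call id emitted by an assistant message must be answered by a tool message before the next model request, or you get the 400 above.

```typescript
// Illustrative message shapes, simplified from the chat-completions format.
type ToolCall = { id: string; name: string; args: unknown };
type Msg =
  | { role: "user" | "system"; content: string }
  | { role: "assistant"; content: string; tool_calls?: ToolCall[] }
  | { role: "tool"; content: string; tool_call_id: string };

// Returns the tool_call ids that have no matching tool message yet.
// A non-empty result here is exactly the situation that makes the
// provider reject the next request with a 400.
function unansweredToolCalls(messages: Msg[]): string[] {
  const answered = new Set(
    messages.flatMap((m) => (m.role === "tool" ? [m.tool_call_id] : []))
  );
  return messages
    .flatMap((m) => (m.role === "assistant" ? m.tool_calls ?? [] : []))
    .map((tc) => tc.id)
    .filter((id) => !answered.has(id));
}
```

Jumping to "end" right after the model call leaves that list non-empty, which is why the error appears as soon as the conversation is sent back to the provider.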

If you don’t want the LLM to call tools, just don’t mention them in the system prompt; you can use the beforeModel middleware hook to modify it.

With createAgent in LangChainJS today, the tools list is part of the agent’s configuration, not something you pass per invoke call. To make the tool set “dynamic”, you either:

  • Create the agent with whatever tools you need at that moment (build the agent dynamically), or
  • Keep a fixed superset of tools and use middleware + prompt guidance to enable/disable tools per turn.

Both patterns are used and documented in the LangChainJS codebase and docs.


1. What createAgent actually expects for tools

If you look at the LangChainJS source, the createAgent params type defines tools as part of the configuration, not as something that changes on each call:

  /**
   * A list of tools or a ToolNode.
   *
   * @example
   * ```ts
   * import { tool } from "langchain";
   *
   * const weatherTool = tool(() => "Sunny!", {
   *   name: "get_weather",
   *   description: "Get the weather for a location",
   *   schema: z.object({
   *     location: z.string().describe("The location to get weather for"),
   *   }),
   * });
   *
   * const agent = createAgent({
   *   tools: [weatherTool],
   *   // ...
   * });
   * ```
   */
  tools?: (ServerTool | ClientTool)[];

There is no tools field on agent.invoke(...): the tools are bound when the agent is created. This matches the JS docs for agents and tools
(LangChain JS agents, tool runtime/how-to).

So if you want the set of tools to differ between calls, you have to either:

  • Build different agents with different tools arrays, or
  • Keep a fixed tools list and control which ones the model is told are available.

2. Simple pattern: build the agent with a dynamic tool list

If your variability is per “session” or per caller (e.g. permissions, tenant, feature flags), a straightforward approach is to construct the agent after you know which tools should be available:

import { createAgent, tool } from "langchain";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

function buildToolsForUser(user: { canSearch: boolean; canWrite: boolean }) {
  const tools = [];

  if (user.canSearch) {
    tools.push(
      tool(
        async ({ query }) => {
          // ...search logic...
          return `Results for: ${query}`;
        },
        {
          name: "search",
          description: "Search across our internal docs.",
          schema: z.object({
            query: z.string().describe("What to search for"),
          }),
        },
      ),
    );
  }

  if (user.canWrite) {
    tools.push(
      tool(
        async ({ text }) => {
          // ...write/modify resource...
          return `Wrote: ${text}`;
        },
        {
          name: "write_note",
          description: "Write a note for the current user.",
          schema: z.object({
            text: z.string().describe("Content of the note"),
          }),
        },
      ),
    );
  }

  return tools;
}

export function buildAgentForUser(user: { id: string; canSearch: boolean; canWrite: boolean }) {
  const tools = buildToolsForUser(user);

  return createAgent({
    model: new ChatOpenAI({ model: "gpt-4o-mini" }),
    tools,
  });
}

// Usage per request / session
const agent = buildAgentForUser(currentUser);
const result = await agent.invoke({
  messages: [{ role: "user", content: "Help me find and summarize X" }],
});

Why this works well:

  • createAgent is mostly wiring up a graph; it’s fine to construct an agent per user/session as needed.
  • You still get full type safety around tools and inputs.
  • It cleanly expresses “tool set depends on user / context” without fighting the API.

(Tools - Docs by LangChain).
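Since building an agent per request is cheap but still repeated work, you may want to construct each configuration only once. A minimal, generic memoization sketch (the helper name is hypothetical, not a LangChain API) that could wrap buildAgentForUser keyed by user id and permission flags:

```typescript
// Hypothetical helper: builds a value once per key and caches it.
// You could wrap buildAgentForUser with this, using a key derived from
// the user id plus whatever flags affect the tool list.
function memoizeByKey<K, V>(build: (key: K) => V): (key: K) => V {
  const cache = new Map<K, V>();
  return (key) => {
    if (!cache.has(key)) {
      cache.set(key, build(key));
    }
    return cache.get(key)!;
  };
}

// e.g. const agentFor = memoizeByKey((key: string) =>
//   buildAgentForUser(/* parse key back into the user shape */));
```

Note that the cache key must include everything that changes the tools array, otherwise two users with different permissions could share an agent.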


3. More advanced: dynamic tool availability via middleware

If you want the tool set to change during a conversation (e.g. enable a tool only after some steps, or temporarily disable it after N calls), you can keep the tool list static but use middleware to dynamically:

  • Track per-session state (e.g. which tools are “enabled” right now),
  • Update tool descriptions, and
  • Inject guidance via a system message.

LangChainJS ships an official example for this: examples/src/createAgent/updateToolsBeforeModelCall.ts
(source in langchainjs repo).
The core idea in that example looks like this (the excerpt assumes sessionState, files, listFilesTool, and readFileTool are defined earlier in the full example):

import { createAgent, createMiddleware, tool } from "langchain";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

function updateToolAvailabilityAndDescriptions() {
  sessionState.callCount += 1;
  sessionState.enabled.clear();

  if (sessionState.callCount === 1) {
    sessionState.enabled.add("list_files");
  } else if (sessionState.callCount <= 3) {
    sessionState.enabled.add("list_files");
    sessionState.enabled.add("read_file");
  } else {
    sessionState.enabled.add("list_files");
  }

  // Dynamically update read_file description with the current file list
  readFileTool.description = `Read a file by exact name. Currently available files:\n- ${files.join(
    "\n- "
  )}\n(Use list_files first if unsure.)`;

  // Indicate disabled tools in their descriptions (LLM guidance)
  listFilesTool.description = sessionState.enabled.has("list_files")
    ? "List the available files in the project."
    : "(Disabled) List the available files in the project.";
  readFileTool.description = sessionState.enabled.has("read_file")
    ? readFileTool.description
    : `(Disabled) ${readFileTool.description}`;
}

const agent = createAgent({
  model: new ChatOpenAI({ model: "gpt-4o", temperature: 0 }),
  tools: [listFilesTool, readFileTool],
  middleware: [
    createMiddleware({
      name: "updateToolAvailabilityAndDescriptions",
      beforeModel: (state) => {
        updateToolAvailabilityAndDescriptions();

        /**
         * Add a guidance system message describing current availability
         */
        const enabledNow = [...sessionState.enabled];
        const guidance = `Tool availability this turn: ${enabledNow.join(
          ", "
        )}. Only call enabled tools. If read_file is disabled, list files first and ask the user to confirm.`;

        return {
          ...state,
          messages: [{ role: "system", content: guidance }, ...state.messages],
        };
      },
    }),
  ],
  systemPrompt: `You are a file assistant. Use tools thoughtfully.
- On the first turn, only list_files will be enabled.
- On later turns, read_file may become enabled. If disabled, guide the user to list files or confirm.
- Keep answers concise.`,
});

Key ideas from this example:

  • The tools array itself is static: [listFilesTool, readFileTool].
  • A small in-memory sessionState tracks which tools are “enabled” on each turn.
  • Middleware’s beforeModel hook updates tool descriptions and injects a system message telling the LLM which tools it should consider “active”.
  • From the model’s perspective, this behaves like a dynamic tool set, without rebuilding the agent graph each time.

This is the recommended pattern when you want fine-grained, turn-by-turn control over tool availability.


4. Using runtime context inside tools (not for selecting the list, but for behavior)

Separately from which tools are available, you can also pass per-call context that tools can read from config.context. This is useful for things like user IDs, permissions, tenant IDs, etc. For example:

import { z } from "zod";
import { createAgent, tool } from "langchain";
import { ChatOpenAI } from "@langchain/openai";

const getUserName = tool(
  async (_, config) => {
    return `User's name is ${config.context.user_name}`;
  },
  {
    name: "get_user_name",
    description: "Return the current user's name from context.",
    schema: z.object({}),
  },
);

const agent = createAgent({
  model: new ChatOpenAI({ model: "gpt-4o-mini" }),
  tools: [getUserName],
  contextSchema: z.object({
    user_name: z.string(),
  }),
});

const result = await agent.invoke(
  {
    messages: [{ role: "user", content: "What is my name?" }],
  },
  {
    context: { user_name: "Alice" },
  },
);

This doesn’t change which tools exist, but it lets tool behavior depend on per-call context, which often solves the underlying need (permissions, user-specific behavior, etc.).
See the LangChain JS tools/how-to docs for more on this pattern
(LangChain JS tools docs).


5. Putting it together

  • You can’t (currently) pass a different tools list to agent.invoke(...); tools belong in the createAgent config.
  • For per-session or per-user tool sets, build the agent dynamically with the appropriate tools array and reuse it as needed.
  • For turn-by-turn dynamic availability, keep a static tools list and use middleware (as in updateToolsBeforeModelCall.ts) plus system prompts/description changes to guide the LLM about which tools are “active”.
  • Use contextSchema + config.context so tools can react to runtime information without relying on global variables.

These patterns are the ones demonstrated in the LangChainJS examples and docs, and they are the recommended way to achieve a dynamic-feeling tool set with createAgent.


Thank you for taking the time to provide such a detailed explanation. @pawel-twardziak

I’m able to retrieve the AI message (including tool calls) with the following code and then decide whether or not to invoke the tool to continue the conversation:

const getWeather = tool(({ location }) => `Weather in ${location}: Sunny, 72°F`, {
  name: "get_weather",
  description: "Get weather information for a location",
  schema: z.object({
    location: z.string().describe("The location to get weather for"),
  }),
});

const response = await new ChatOpenAI({
  model: "gpt-5-nano",
  apiKey: process.env.OPENAI_API_KEY ?? "",
  reasoning: {
    effort: "low",
  },
  verbosity: "low",
})
  .bindTools([getWeather])
  .invoke("What's the weather in SF?");

Is there a way to achieve the same behavior with the createAgent function, so the conversation can be ended early when the model requests tool calls?

Thank you

Why would you harness createAgent for such a simple task? createAgent produces a ReAct-pattern graph, which is more machinery than a single model call with tools bound.

But if you really need createAgent for your task, use agent.invoke(..., { recursionLimit: 1 }) and/or the toolCallLimitMiddleware.
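For reference, a hedged sketch of the recursionLimit option above, reusing getWeather from the original snippet. This is not verified end-to-end: recursionLimit is a LangGraph run option counting graph supersteps, and my understanding is that when the model requests tools and the limit prevents the graph from continuing into the tool node, LangGraph throws a GraphRecursionError rather than returning state, so the call needs a catch. toolCallLimitMiddleware is the built-in middleware mentioned above; its options are not shown here, so check the middleware docs for its exact configuration.

```typescript
// Hedged sketch, assuming langchain v1's createAgent and the LangGraph
// recursionLimit run option. getWeather is the tool from the first post.
import { createAgent } from "langchain";
import { ChatOpenAI } from "@langchain/openai";
import { GraphRecursionError } from "@langchain/langgraph";

const agent = createAgent({
  model: new ChatOpenAI({ model: "gpt-5-nano" }),
  tools: [getWeather],
});

try {
  // Allow only a single superstep: the model node runs, but the graph
  // may not proceed to the tool node.
  const result = await agent.invoke(
    { messages: [{ role: "user", content: "What's the weather in SF?" }] },
    { recursionLimit: 1 }
  );
  console.log(result.messages.at(-1)); // plain answer, no tools requested
} catch (e) {
  if (e instanceof GraphRecursionError) {
    // The model wanted to call tools and the limit cut the run short.
  } else {
    throw e;
  }
}
```

If you need the AI message itself (tool calls included) rather than just an early stop, the model.bindTools(...).invoke(...) approach from the previous post remains the most direct way to get it.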