Hey everyone,
we're working on a custom middleware in LangChain.js and ran into something I'd love your input on.
```typescript
createMiddleware({
  name: "blablaMiddleware",
  contextSchema: blablaSchema,
  wrapToolCall: async (request, handler) => {
    if (someCondition) { // placeholder for our actual check
      return handler(request);
    } else {
      console.log("found a problem");
      return handler(request);
    }
  },
});
```
The question is: what's the proper way to handle a failing tool call (e.g., an API error)?
We've noticed that sometimes the whole process crashes because the tool result doesn't match the expected schema.
While digging through the source code, I saw that in some cases LangChain throws an exception, and in others it returns a `ToolMessage`.
Is there any rule of thumb for when we should catch and convert errors into a tool message vs. letting them bubble up?
Right now I'm considering wrapping the tool call in a try/catch and returning an explicit error message, e.g.:
```typescript
return new ToolMessage({
  content,
  tool_call_id: toolCallId,
  name: toolName,
  status: "error",
});
```
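For what it's worth, the catch-and-convert decision we have in mind can be sketched without any LangChain imports. Everything below (`runToolSafely`, `ApiError`, the `ToolResult` shape) is a stand-in for illustration, not the real API:

```typescript
// Stand-in for a tool result; mirrors the shape of ToolMessage
// but is NOT the real LangChain class.
interface ToolResult {
  content: string;
  tool_call_id: string;
  status: "success" | "error";
}

// Recoverable failures (e.g. an upstream API error) are modeled
// as a dedicated error class so they can be told apart from bugs.
class ApiError extends Error {}

async function runToolSafely(
  toolCallId: string,
  invoke: () => Promise<string>,
): Promise<ToolResult> {
  try {
    const content = await invoke();
    return { content, tool_call_id: toolCallId, status: "success" };
  } catch (error) {
    if (error instanceof ApiError) {
      // Recoverable: surface the failure to the model as an error-status result.
      return {
        content: `[TOOL FAILED]: ${error.message}`,
        tool_call_id: toolCallId,
        status: "error",
      };
    }
    // Programming errors keep bubbling so they are not silently swallowed.
    throw error;
  }
}
```

The rule of thumb this encodes: convert failures the model can plausibly recover from into an error-status tool message, and let everything else bubble up so real bugs aren't hidden.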
Curious how others are handling this
opened 10:44AM - 25 Nov 25 UTC
bug
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use the LangChain Forum (https://forum.langchain.com/).
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain.js documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain.js rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```typescript
import {tool, type ToolRuntime} from "@langchain/core/tools";
import {Command} from "@langchain/langgraph";
import * as z from "zod";
import {AzureChatOpenAI} from "@langchain/openai";
import {AgentMiddleware, createAgent, dynamicSystemPromptMiddleware} from "langchain";
import {config} from "dotenv";
import {ToolMessage} from "@langchain/core/messages";
config();
export const model = new AzureChatOpenAI({
azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY,
azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION || "2024-10-21",
azureOpenAIEndpoint: process.env.AZURE_OPENAI_API_URL,
azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_DEPLOYMENT_NAME || 'gpt-4o-mini',
model: process.env.AZURE_OPENAI_DEPLOYMENT_NAME || 'gpt-4o-mini',
});
const contextSchema = z.object({
systemPrompt: z.string().optional(),
});
function systemPromptMiddleware(defaultPrompt: string): AgentMiddleware {
return dynamicSystemPromptMiddleware<z.infer<typeof contextSchema>>(
async (_state, runtime) => {
const systemPromptTemplate = runtime?.context?.systemPrompt ? runtime?.context?.systemPrompt : defaultPrompt;
return systemPromptTemplate;
},
);
}
export const authenticateUser = tool(
async ({password}, runtime: ToolRuntime) => {
const tool_call_id = runtime.toolCallId;
console.log(`executing tool with password ${password} and tool_call_id ${tool_call_id}`);
if (password === "correct") {
return new Command({
update: {
authenticated: true
}
})
} else {
return new Command({
update: {
authenticated: false
},
});
}
},
{
name: "authenticate_user",
description: "Authenticate user and update State",
schema: z.object({
password: z.string(),
}),
}
);
export const exampleCustomState = z.object({
authenticated: z.boolean().nullable().describe("Whether or not the user is authenticated")
});
const exampleAgent = createAgent({
model,
stateSchema: exampleCustomState,
tools: [authenticateUser],
middleware: [systemPromptMiddleware("you are a chatbot that can answer questions")]
});
export async function main() {
const res = await exampleAgent.invoke({
messages: [{ role: "user", content: 'Execute the authenticate_user tool with the password adminadmin' }],
authenticated: null
});
console.log(res);
}
if (import.meta.url === `file://${process.argv[1]}`) {
main().catch((err) => {
console.error(err);
process.exit(1);
});
}
```
### Error Message and Stack Trace (if applicable)
<img width="705" height="350" alt="Image" src="https://github.com/user-attachments/assets/18882d6c-1828-42c1-9e7f-0db0292495ce" />
### Description
- I'm trying to update a variable in my customState inside a Tool
- According to the documentation of Langchain this is the way to do it [(link-to-documentation)](https://docs.langchain.com/oss/javascript/langchain/context-engineering#writes):
```typescript
return new Command({
update: {
authenticated: true
}
})
```
- However, when executing this, it returns an error stating the tool did not return a correct `ToolMessage` response:
> 400 An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_WQ32iTu2EqdG90IvZN1efiVz
- Right now I fixed this by adding the following line:
```typescript
return new Command({
update: {
authenticated: true,
messages: [new ToolMessage({status: "success", name: tool.name, content: JSON.stringify({authenticated: true}), tool_call_id: runtime.toolCallId})]
}
})
```
- **I don't think having to provide the `ToolMessage` yourself was intended by design**
-> If it is intended, maybe this should be made clearer in the documentation
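For context on why the workaround is needed: the 400 above is the chat API's invariant that every `tool_call_id` on an assistant message must be answered by a tool message in the history. A stand-in check (plain object shapes for illustration, not LangChain's message classes) makes the failure mode concrete:

```typescript
// Simplified message shapes for illustration only.
interface AssistantMsg { role: "assistant"; tool_calls: { id: string }[] }
interface ToolMsg { role: "tool"; tool_call_id: string }

// Returns the tool_call_ids with no responding tool message --
// exactly the ids the API's 400 error complains about.
function unansweredToolCalls(assistant: AssistantMsg, history: ToolMsg[]): string[] {
  const answered = new Set(history.map((m) => m.tool_call_id));
  return assistant.tool_calls.map((c) => c.id).filter((id) => !answered.has(id));
}
```

A `Command` that only updates `authenticated` appends nothing to `messages`, so the tool call's id stays unanswered and the next model call is rejected; including the `ToolMessage` in the same update closes the gap.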
### System Info
```
"packageManager": "npm@11.2.1",
"dependencies": {
"@langchain/community": "^0.0.40",
"@langchain/core": "1.0.3",
"@langchain/langgraph": "1.0.1",
"@langchain/langgraph-sdk": "^1.0.0",
"@langchain/langgraph-supervisor": "1.0.0",
"@langchain/mcp-adapters": "^1.0.0",
"@langchain/openai": "1.0.0",
"@langchain/tavily": "1.0.0",
"aws-jwt-verify": "^4.0.0",
"axios": "^1.12.2",
"dayjs": "^1.11.18",
"jsonpath-plus": "^10.3.0",
"langchain": "1.0.3",
"langsmith": "^0.3.79",
"llama-cloud-services": "^0.3.10",
"llamaindex": "^0.12.0",
"ulid": "^2.3.0",
"patch-package": "8.0.1"
"typescript": "^5.9.3"
},
"overrides": {
"zod": "3.25.76"
}
```
This is an issue about a problem that occurs after an error in a tool when middleware is involved.
Hi @yishai-stern_zoominf,
are you getting exactly that "Error in middleware "DynamicSystemPromptMiddleware": 400 An assistant message with 'tool_calls' must be followed by tool messages responding to…" error?
Could you share your code? Right now it's hard to infer what is actually happening in your agent.
Hint:
instead of returning a plain `ToolMessage`, you have to return a `Command` that updates the state's messages:
```typescript
return new Command({
  update: {
    messages: [
      new ToolMessage({
        content: `An error in... bla bla bla`,
        tool_call_id: config.toolCall?.id ?? "tool-call-id",
      }),
    ],
  },
});
```
IMHO the most elegant way to handle errors is using `StructuredTool` and `ToolNode` with their native error-handling mechanism.
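For reference, what that native mechanism amounts to (ToolNode's `handleToolErrors` option, which defaults to true) can be sketched with simplified stand-in types; this is an illustration of the behavior, not the real implementation:

```typescript
// Simplified stand-ins -- not the real ToolNode or ToolMessage classes.
type ToolFn = (args: unknown) => Promise<string>;
interface ToolCall { id: string; name: string; args: unknown }
interface ToolResultMsg { tool_call_id: string; content: string; status: "success" | "error" }

// Sketch of ToolNode-style error handling: when handleToolErrors is on,
// a throwing tool yields an error ToolMessage instead of crashing the run.
async function runToolCalls(
  tools: Record<string, ToolFn>,
  calls: ToolCall[],
  handleToolErrors = true,
): Promise<ToolResultMsg[]> {
  const results: ToolResultMsg[] = [];
  for (const call of calls) {
    try {
      const content = await tools[call.name](call.args);
      results.push({ tool_call_id: call.id, content, status: "success" });
    } catch (error) {
      if (!handleToolErrors) throw error; // opt out: let the error bubble
      results.push({
        tool_call_id: call.id,
        content: `Error: ${(error as Error).message}`,
        status: "error",
      });
    }
  }
  return results;
}
```

Either way, every `tool_call_id` gets a responding message, so the model sees the failure and can retry instead of the whole run crashing.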
Hey @yishai-stern_zoominf ,
What I've been doing is returning the error so the LLM knows how to heal itself. This is what I do in my production app:
```typescript
export const toolErrorMiddleware = createMiddleware({
  name: "ToolErrorMiddleware",
  wrapToolCall: async (request, handler) => {
    try {
      return await handler(request);
    } catch (error) {
      // Return a custom error message to the model
      return new ToolMessage({
        content: `[TOOL FAILED]: ${error}`,
        tool_call_id: request.toolCall.id!,
      });
    }
  },
});
```