I recently started learning how to build apps with LangChain. It's convenient and powerful.
But the model.withStructuredOutput(schema) API really confuses me. I expect JSON output when I call it, but in fact it often throws an OUTPUT_PARSING_FAILURE error.
Example usage:
import { ChatOllama } from "@langchain/ollama";
import { z } from "zod";

const llm = new ChatOllama({
  model: "qwen3:32b",
});

// Schema for the structured output we want back
const joke = z.object({
  setup: z.string().describe("The setup of the joke"),
  punchline: z.string().describe("The punchline to the joke"),
  rating: z.number().optional().describe("How funny the joke is, from 1 to 10"),
});

const structuredLlm = llm.withStructuredOutput(joke);
const res = await structuredLlm.invoke("abcd123456789qweasdgh");
console.log(res);
This throws an error because I passed a garbage input on purpose. But even when I follow the tutorial and pass the "right" message, it still sometimes throws OUTPUT_PARSING_FAILURE.
You could argue this comes down to the model's own ability. But I think throwing an error reduces the usability of the withStructuredOutput(schema) API, because user input is not under our control.
To work around this, I switched to a prompt + parser + raw invoke setup. It helps, but it is not as convenient as this API.
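Part of that workaround is a parser step that salvages the first JSON object from the raw model text, since the model sometimes wraps the JSON in extra prose. A simplified sketch (extractJson is my own helper, not a LangChain API; the brace counting deliberately ignores the edge case of braces inside string values):

```typescript
// Sketch: find the first balanced {...} span in raw model output and
// try to parse it as JSON. Returns null if no parseable object is found.
function extractJson(text: string): unknown | null {
  const start = text.indexOf("{");
  if (start === -1) return null;
  let depth = 0;
  for (let i = start; i < text.length; i++) {
    if (text[i] === "{") depth++;
    else if (text[i] === "}") {
      depth--;
      if (depth === 0) {
        // Found the matching close brace; attempt to parse the span.
        try {
          return JSON.parse(text.slice(start, i + 1));
        } catch {
          return null;
        }
      }
    }
  }
  return null; // unbalanced braces
}
```

The extracted value can then be validated against the zod schema (e.g. with safeParse, which returns a result object instead of throwing).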
I wonder: is it really necessary to throw an error when the structured LLM gets something wrong? I think this reduces the robustness of the system.
Thank you for reading my message!