hi @Art
have you seen this warning after resolving the `args_schema` and `ToolRuntime` conflict?
You’re probably seeing a serialization warning, not a real structured-output parse failure.
If your parsed values are correct, the warning usually comes from serializing the raw message object (especially with `include_raw=True`), where an internal `parsed` field is attached.
What to do
- If you don't need raw output, keep `include_raw=False` (the default).
- For OpenAI structured output, use `method="json_schema"` (and `strict=True` when your schema supports it).
- If you do need raw output, log `out["parsed"]` separately and sanitize `out["raw"]` before calling `model_dump()`.
- Update `langchain`, `langchain-openai`, `openai`, and `pydantic` to the latest compatible versions.
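Before upgrading, it can help to see what you currently have installed. A small stdlib-only probe (the `installed_versions` helper is mine, not a LangChain API):

```python
from importlib.metadata import version, PackageNotFoundError


def installed_versions(packages):
    """Map each package name to its installed version, or None if not installed."""
    out = {}
    for pkg in packages:
        try:
            out[pkg] = version(pkg)
        except PackageNotFoundError:
            out[pkg] = None
    return out


print(installed_versions(["langchain", "langchain-openai", "openai", "pydantic"]))
```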
My minimal repro

```python
from langchain.chat_models import init_chat_model
from pydantic import BaseModel, Field


class Person(BaseModel):
    name: str = Field(description="Person name")
    age: int = Field(description="Person age")


llm = init_chat_model("openai:gpt-5-mini")
structured = llm.with_structured_output(
    Person,
    method="json_schema",
    strict=True,
    include_raw=False,
)

result = structured.invoke("John is 30 years old")
print(result.model_dump())  # typically {'name': 'John', 'age': 30}
```
If you need `include_raw=True`

```python
structured = llm.with_structured_output(
    Person,
    method="json_schema",
    strict=True,
    include_raw=True,
)

out = structured.invoke("John is 30 years old")
print("parsed:", out["parsed"])
print("parsing_error:", out["parsing_error"])

# Sanitize the raw message before dumping: deep-copy it, drop the
# attached parsed model instance, then serialize the now-plain copy.
raw = out["raw"].model_copy(deep=True)
raw.additional_kwargs.pop("parsed", None)
safe_raw = raw.model_dump()
```
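The same sanitize-then-dump pattern can be sketched with plain Pydantic, no API call needed. The `Message` class below is a made-up stand-in for a LangChain `AIMessage`-like object, not a real LangChain type:

```python
from pydantic import BaseModel, Field


class Person(BaseModel):
    name: str
    age: int


class Message(BaseModel):
    # Stand-in for an AIMessage-like object that carries extra metadata.
    content: str
    additional_kwargs: dict = Field(default_factory=dict)


msg = Message(
    content='{"name": "John", "age": 30}',
    additional_kwargs={"parsed": Person(name="John", age=30)},
)

# Deep-copy so the original message is untouched, pop the attached
# model instance, then dump the now-plain structure safely.
safe = msg.model_copy(deep=True)
safe.additional_kwargs.pop("parsed", None)
print(safe.model_dump())
# {'content': '{"name": "John", "age": 30}', 'additional_kwargs': {}}
```

The deep copy matters: popping from a shallow copy would also mutate the original message's `additional_kwargs`.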
References used: LangChain `with_structured_output` API reference, LangChain OpenAI structured output docs, LangChain OpenAI source