Parsing error with structured model output

I used the init_chat_model function to create a model instance, then attached a Pydantic model for structured output using the with_structured_output method, giving me a new model instance with structured output enabled. Whenever I invoke the model and get my desired structured output, the values are correct whenever I access them, but I get this error:

PydanticSerializationUnexpectedValue(Expected none - serialized value may not be as expected [field_name='parsed'

I have looked all over and asked AI for assistance, but nothing seems to work. Would anyone in the community know? I am using gpt-5-mini in Python, if that makes any difference.

Hi @Art,

Have you seen this thread: Warning after resolving args_schema and ToolRuntime conflict?

You’re probably seeing a serialization warning, not a real structured-output parse failure.

If your parsed values are correct, the warning usually comes from serializing the raw message object (especially with include_raw=True), where an internal parsed field is attached.

What to do

  1. If you don’t need raw output, keep include_raw=False (default).
  2. For OpenAI structured output, use method="json_schema" (and strict=True when supported by your schema).
  3. If you do need raw output, log out["parsed"] separately and sanitize out["raw"] before model_dump().
  4. Update packages (langchain, langchain-openai, openai, pydantic) to latest compatible versions.

My minimal repro

from langchain.chat_models import init_chat_model
from pydantic import BaseModel, Field

class Person(BaseModel):
    name: str = Field(description="Person name")
    age: int = Field(description="Person age")

llm = init_chat_model("openai:gpt-5-mini")
structured = llm.with_structured_output(
    Person,
    method="json_schema",  # OpenAI's native structured-output mode
    strict=True,           # enforce the schema (when the schema supports it)
    include_raw=False,     # default: invoke() returns the Person instance directly
)

result = structured.invoke("John is 30 years old")
print(result.model_dump())

If you need include_raw=True

structured = llm.with_structured_output(
    Person,
    method="json_schema",
    strict=True,
    include_raw=True,
)
out = structured.invoke("John is 30 years old")

print("parsed:", out["parsed"])
print("parsing_error:", out["parsing_error"])

raw = out["raw"].model_copy(deep=True)
raw.additional_kwargs.pop("parsed", None)  # drop the attached parsed object
safe_raw = raw.model_dump()  # now serializes without the warning

References used: LangChain with_structured_output reference, LangChain OpenAI structured output docs, LangChain OpenAI source

Hi @Art ,

Your structured output is working fine. The error is happening during serialization, not parsing.

PydanticSerializationUnexpectedValue usually means something (FastAPI, logging, JSON dump, etc.) is trying to serialize the wrapper object that contains a parsed field, instead of just the Pydantic model itself.

Quick fix

Instead of returning the full response object, return only the parsed model:

result = structured_model.invoke(...)
return result.model_dump()

Or if you’re in FastAPI:

return result.model_dump()

Don’t JSON-serialize the whole LangChain response object.
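This distinction is easy to demonstrate with plain Pydantic, no LangChain required. The Wrapper and Person models below are illustrative stand-ins (not LangChain internals): Wrapper mimics an object whose schema declares a field as None while a real model gets attached at runtime, which is the situation that produces the "Expected none" warning:

```python
import warnings
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

class Wrapper(BaseModel):
    parsed: None = None  # schema says None, but a model gets attached at runtime

person = Person(name="John", age=30)
wrapper = Wrapper.model_construct(parsed=person)  # bypass validation

with warnings.catch_warnings(record=True) as from_wrapper:
    warnings.simplefilter("always")
    wrapper.model_dump()  # serializer sees a Person where None was expected

with warnings.catch_warnings(record=True) as from_person:
    warnings.simplefilter("always")
    person.model_dump()  # the clean inner model serializes silently

print("wrapper warnings:", len(from_wrapper))
print("person warnings:", len(from_person))
```

Dumping the wrapper triggers the serializer warning; dumping the clean inner model does not, which is why returning only result.model_dump() makes it go away.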

That’s it 🙂

Hi Art,

The warning seems to be rooted in OpenAI’s implementation, not LangChain’s. The OpenAI SDK triggers this warning when Pydantic is used as the schema format and model_dump() is called; see responses.parse() throws a PydanticSerializationUnexpectedValue error in v2.21.0 · Issue #2872 · openai/openai-python. Since LangChain relies heavily on model_dump(), what is left for us seems to be just suppressing the warning, as described in that issue.
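If you go the suppression route, here is a minimal sketch, assuming the warning arrives as Pydantic's UserWarning whose message starts with "Pydantic serializer warnings" (the Item model is just a stand-in to trigger the warning observably):

```python
import warnings
from pydantic import BaseModel

class Item(BaseModel):
    count: int

# model_construct bypasses validation, so dumping this instance would
# normally emit Pydantic's serializer warning (wrong type at runtime).
bad = Item.model_construct(count="three")

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Ignore only Pydantic's serializer warnings; everything else still surfaces.
    warnings.filterwarnings(
        "ignore", message="Pydantic serializer warnings", category=UserWarning
    )
    bad.model_dump()

print("warnings seen:", len(caught))
```

In real code you would call warnings.filterwarnings once at startup rather than inside a catch_warnings block; the block here only makes the effect observable in one script.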