In LangChain's ChatOpenAI docs it's mentioned how to use structured output together with tools, but what if I just want to use structured output on its own? How do I do that?

Here’s the code the docs provide for this, but my use case is that I just want to use OpenAI’s structured output feature with a Pydantic class. How do I do that? Link: ChatOpenAI | 🦜️🔗 LangChain

from langchain_openai import ChatOpenAI
from pydantic import BaseModel


def get_weather(location: str) -> str:
    """Get weather at a location."""
    return "It's sunny."


class OutputSchema(BaseModel):
    """Schema for response."""

    answer: str
    justification: str


llm = ChatOpenAI(model="gpt-4.1")

structured_llm = llm.bind_tools(
    [get_weather],
    response_format=OutputSchema,
    strict=True,
)

# Response contains tool calls:
tool_call_response = structured_llm.invoke("What is the weather in SF?")

# structured_response.additional_kwargs["parsed"] contains parsed output
structured_response = structured_llm.invoke(
    "What weighs more, a pound of feathers or a pound of gold?"
)

hi @saumya66

Have you tried this?

structured_llm = llm.with_structured_output(OutputSchema)

structured_llm_with_tools = llm.bind_tools(
    [get_weather],
    response_format=OutputSchema,
    strict=True,
)

Docs How to return structured data from a model | 🦜️🔗 LangChain

So basically I don’t want tool calls, just structured output. That’s why I didn’t want to try this: I guess the main purpose of that approach is calling tools.

hi @saumya66

Definitely not: structured output is not only for tools. It works for any LLM call whose output you want structured.

structured_llm = llm.with_structured_output(OutputSchema)
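A minimal sketch of what this looks like end to end. The actual `ChatOpenAI` call needs an `OPENAI_API_KEY`, so it is shown in comments; the runnable part demonstrates the parsing step, since `with_structured_output` ultimately validates the model's JSON against your Pydantic class (the sample payload here is invented for illustration):

```python
from pydantic import BaseModel


class OutputSchema(BaseModel):
    """Schema for the structured response."""

    answer: str
    justification: str


# With an OpenAI key configured, the call would look like:
#
#   from langchain_openai import ChatOpenAI
#
#   llm = ChatOpenAI(model="gpt-4.1")
#   structured_llm = llm.with_structured_output(OutputSchema)
#   result = structured_llm.invoke(
#       "What weighs more, a pound of feathers or a pound of gold?"
#   )
#   # result is an OutputSchema instance, not an AIMessage
#
# Offline, the parsing step is equivalent to validating JSON
# against the schema (sample payload invented for illustration):
result = OutputSchema.model_validate(
    {"answer": "They weigh the same.", "justification": "Both are one pound."}
)
print(result.answer)
```

The key difference from `bind_tools(..., response_format=...)` is that `with_structured_output` returns the parsed Pydantic object directly, with no tool-call plumbing involved.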

When you build AI agents, you need structured output frequently.