I recently followed the community advice to resolve the conflict between args_schema and ToolRuntime by adding a runtime: ToolRuntime field to my Pydantic schema. This allows the tool function to receive the runtime object. However, after this change, I encounter a new warning when using a custom context object:
UserWarning: Pydantic serializer warnings: PydanticSerializationUnexpectedValue(Expected `none` - serialized value may not be as expected [input_value=UserContext(user_id='1'), input_type=UserContext])
Minimal reproducible example:

```python
from dataclasses import dataclass

from pydantic import BaseModel, ConfigDict
from langchain_core.tools import tool
from langchain.tools import ToolRuntime
from langchain_deepseek import ChatDeepSeek
from langchain.agents import create_agent
from langchain_core.messages import HumanMessage


@dataclass
class UserContext:
    user_id: str


class AddInput(BaseModel):
    model_config = ConfigDict(arbitrary_types_allowed=True)
    a: int
    b: int
    runtime: ToolRuntime  # injected by the agent


@tool(args_schema=AddInput, description="Add two integers")
def add3(a: int, b: int, runtime: ToolRuntime) -> str:
    print(runtime.context.user_id)  # warning appears here
    return str(a + b)


agent = create_agent(
    model=ChatDeepSeek(model="deepseek-chat"),
    tools=[add3],
    context_schema=UserContext,
    system_prompt="You are helpful; call tools to compute exact answers.",
)

result = agent.invoke(
    {"messages": [HumanMessage("Add 10 and 20")]},
    context=UserContext(user_id="1"),
)
```
Questions:

- Why does this `PydanticSerializationUnexpectedValue` warning occur when using a dataclass as the context?
- What is the correct way to inject and use `ToolRuntime.context` with a custom context object without triggering Pydantic serialization warnings?
- Should I convert my `UserContext` dataclass to a `BaseModel`, or is there another recommended approach?
Thanks in advance for guidance!
This post was drafted with the assistance of ChatGPT, as English is not my native language.
Hi @TheSecondStep,
this is a great question, thanks!
The warning appears because, when `runtime` is injected, Pydantic doesn't know how to serialize its `context` attribute, which is of type `UserContext`.
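The mechanics can be reproduced with plain Pydantic, without LangChain at all. In this sketch, the hypothetical `payload: None` field stands in for the slot whose serializer expects `None` but receives an arbitrary object, which is exactly the "Expected `none`" situation in the warning above:

```python
import warnings

from pydantic import BaseModel


class Schema(BaseModel):
    # The serializer for this field expects None; an arbitrary object
    # landing here mirrors UserContext landing in the runtime's context slot.
    payload: None = None


# model_construct() skips validation (like framework-side injection does),
# so an unexpected object ends up in a field whose serializer expects None.
m = Schema.model_construct(payload=object())

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    m.model_dump()

print(any("serializer warnings" in str(w.message) for w in caught))  # True
```

The dump still succeeds; Pydantic only warns that the serialized value may not be what the schema promised.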
To fix it, you can simply exclude `runtime` from serialization:

```python
from typing import Any

from pydantic import BaseModel, ConfigDict, field_serializer
from langchain.tools import ToolRuntime


class AddInput(BaseModel):
    # Allow ToolRuntime (which contains BaseStore, etc.)
    model_config = ConfigDict(arbitrary_types_allowed=True)
    a: int
    b: int
    runtime: ToolRuntime  # injected; hidden from the model

    @field_serializer("runtime")
    def _serialize_runtime(self, v: ToolRuntime | None) -> Any:
        # Don't serialize runtime; avoids the warning
        return None
```
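That a plain `field_serializer` returning `None` silences the warning can be verified with Pydantic alone. In this sketch, the hypothetical `payload` field stands in for the injected `runtime` field:

```python
import warnings
from typing import Any

from pydantic import BaseModel, field_serializer


class Raw(BaseModel):
    payload: None = None  # serializer expects None


class Fixed(BaseModel):
    payload: None = None  # same field, but with a custom serializer

    @field_serializer("payload")
    def _skip(self, v: Any) -> None:
        # Replace the mismatched value instead of serializing it
        return None


def dump_warns(model: BaseModel) -> bool:
    """Return True if model_dump() emits a Pydantic serializer warning."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        model.model_dump()
    return any("serializer warnings" in str(w.message) for w in caught)


# model_construct() skips validation, putting an unexpected object in the field
print(dump_warns(Raw.model_construct(payload=object())))    # True
print(dump_warns(Fixed.model_construct(payload=object())))  # False
```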
Or you can implement a custom serializer (though I believe it is not necessary at all):

```python
from dataclasses import asdict, is_dataclass
from typing import Any

from pydantic import BaseModel, ConfigDict, field_serializer
from langchain.tools import ToolRuntime


class AddInput(BaseModel):
    # Allow ToolRuntime (which contains BaseStore, etc.)
    model_config = ConfigDict(arbitrary_types_allowed=True)
    a: int
    b: int
    runtime: ToolRuntime  # injected; hidden from the model

    @field_serializer("runtime")
    def _ser_runtime(self, v: ToolRuntime | None) -> Any:
        if v is None:
            return None
        out = {"tool_call_id": v.tool_call_id}
        # optional: include a safe view of config
        cfg = getattr(v, "config", None)
        if cfg is not None:
            try:
                out["config"] = dict(cfg)  # if RunnableConfig behaves like a dict
            except Exception:
                out["config"] = None
        # optional: include a safe view of context
        ctx = getattr(v, "context", None)
        if ctx is None:
            out["context"] = None
        elif hasattr(ctx, "model_dump"):
            out["context"] = ctx.model_dump(mode="json")
        elif is_dataclass(ctx):
            out["context"] = asdict(ctx)
        elif isinstance(ctx, dict):
            out["context"] = ctx
        else:
            out["context"] = str(ctx)
        print(f"out: {out}")
        return out
```
And answering the next two questions:

- What is the correct way to inject and use `ToolRuntime.context` with a custom context object without triggering Pydantic serialization warnings?

Any way is correct as long as you know what is happening and how to handle the errors/warnings. You can use a `@dataclass` or a `BaseModel` (then read `runtime.context.user_id`), or a `TypedDict` (then read `runtime.context["user_id"]`). Whichever way you choose, you will still need to handle the Pydantic errors/warnings, which are expected.
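The only practical difference between those context shapes is the access pattern. A LangChain-free sketch, with illustrative class names:

```python
from dataclasses import dataclass
from typing import TypedDict

from pydantic import BaseModel


@dataclass
class DataclassContext:
    user_id: str


class ModelContext(BaseModel):
    user_id: str


class DictContext(TypedDict):
    user_id: str


# dataclass and BaseModel contexts use attribute access;
# a TypedDict context uses key access.
print(DataclassContext(user_id="1").user_id)   # 1
print(ModelContext(user_id="1").user_id)       # 1
print(DictContext(user_id="1")["user_id"])     # 1
```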
- Should I convert my `UserContext` dataclass to a `BaseModel`, or is there another recommended approach?

You can, but it won't change the situation.
Back to the defaults
You can also skip `args_schema` and let LangGraph infer the argument types from the function signature, but then you won't be able to fix the Pydantic warning related to your untyped context.