I am trying to set up a follow-up middleware for a deep agent, but the value is always null on the UI even though I can see the follow-up questions being generated. On the UI I am using React with the `useStream` hook.
from typing import Any, List

from langchain.agents.middleware.types import AgentMiddleware
from langgraph.runtime import Runtime
from pydantic import BaseModel, Field


class FollowUpQuestions(BaseModel):
    """Structured follow-up questions."""

    questions: List[str] = Field(description="List of 3 follow-up questions")


class FollowupMiddleware(AgentMiddleware):
    """Middleware that generates follow-up questions only on final response."""

    def __init__(self, model):
        """Initialize with model instance."""
        super().__init__()
        self.model = model

    def after_model(self, state, runtime: Runtime) -> dict[str, Any] | None:
        last_message = state["messages"][-1]
        content = getattr(last_message, "content", None)
        if not content:
            return None

        followup_prompt = f"""Based on this response:
{content}

Generate 3 concise follow-up questions that would naturally continue this conversation."""

        followup_response = self.model.with_structured_output(
            FollowUpQuestions
        ).invoke([{"role": "system", "content": followup_prompt}])

        return {"follow_up_questions": followup_response.questions}
The UI shows null because follow_up_questions isn’t part of the agent’s output state schema. Add the field to the agent state (via middleware state_schema or create_agent(state_schema=...)) and set it in a hook that runs at the end (use after_agent if you only want it on the final response). In React, read it from thread.values.follow_up_questions.
Something like this:
from typing import NotRequired

from langchain.agents.middleware.types import AgentMiddleware, AgentState
from langgraph.runtime import Runtime
from pydantic import BaseModel, Field


class FollowUpQuestions(BaseModel):
    questions: list[str] = Field(description="List of 3 follow-up questions")


class FollowupState(AgentState):
    # Exposed in output (not omitted), so it appears in the stream and final values
    follow_up_questions: NotRequired[list[str]]


class FollowupMiddleware(AgentMiddleware[FollowupState]):
    state_schema = FollowupState  # ensure the agent's state schema includes the field

    def __init__(self, model):
        super().__init__()
        self.model = model

    def after_agent(self, state: FollowupState, runtime: Runtime) -> dict[str, object] | None:
        last = state["messages"][-1]
        content = getattr(last, "content", None)
        if not content:
            return None

        # Use the middleware's configured model; runtime.context is optional and only
        # needed if you inject a model there yourself.
        followup = self.model.with_structured_output(FollowUpQuestions).invoke(
            [
                {
                    "role": "system",
                    "content": (
                        f"Based on this response: {content}\n"
                        "Generate 3 concise follow-up questions."
                    ),
                }
            ]
        )
        return {"follow_up_questions": followup.questions}


# When creating your agent, include the middleware (and pass it the model)
# graph = create_agent(model=..., middleware=[FollowupMiddleware(model)], ...)
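For completeness, a rough wiring sketch. This is not verified against your setup: it assumes create_agent from langchain.agents and init_chat_model, and the model id and tools list are placeholders for whatever your deep agent already uses.

# Sketch only: swap in your actual model id, tools, and any other create_agent args.
from langchain.agents import create_agent
from langchain.chat_models import init_chat_model

model = init_chat_model("openai:gpt-4o")  # placeholder model id

agent = create_agent(
    model=model,
    tools=[],  # your existing tools
    middleware=[FollowupMiddleware(model)],  # same model, or a cheaper one for follow-ups
)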
import type { Message } from "@langchain/langgraph-sdk";
import { useStream } from "@langchain/langgraph-sdk/react";

const thread = useStream<{ messages: Message[]; follow_up_questions?: string[] }>({
  apiUrl: "http://localhost:2024",
  assistantId: "agent",
  messagesKey: "messages",
});

// Render follow-ups (will be undefined until final)
const followups = thread.values.follow_up_questions ?? [];
Use after_agent instead of after_model if you only want follow-ups on the final response; the after_model hook runs after every model turn in the loop, including turns that end in tool calls.
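If you do want to stay on after_model (e.g. to emit follow-ups per turn), one option is to skip turns where the model is calling tools so follow-ups are only generated for actual answers. The sketch below is hypothetical (the subclass name is made up, and it assumes AI messages expose tool_calls when the model requests a tool):

# Sketch: per-turn follow-ups, but only when the model is answering, not calling tools.
class FollowupPerTurnMiddleware(FollowupMiddleware):
    def after_model(self, state: FollowupState, runtime: Runtime) -> dict[str, object] | None:
        last = state["messages"][-1]
        if getattr(last, "tool_calls", None):
            return None  # intermediate turn: the model requested a tool, not a final answer
        content = getattr(last, "content", None)
        if not content:
            return None
        followup = self.model.with_structured_output(FollowUpQuestions).invoke(
            [
                {
                    "role": "system",
                    "content": (
                        f"Based on this response: {content}\n"
                        "Generate 3 concise follow-up questions."
                    ),
                }
            ]
        )
        return {"follow_up_questions": followup.questions}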