hi @Huimin-station
I see three bugs in this code, all of which contribute to the missing output.
bug 1: double execution - invoke() then stream() with the same input
This is the most critical problem. The code calls graph.invoke() first, then graph.stream() second, both with the same input and config:
```python
# First call: sends the message, hits interrupt(), checkpoints state
result = graph.invoke({"messages": [HumanMessage(a)]}, config)

# Second call: sends the SAME message AGAIN to the same thread
for chunk in graph.stream({"messages": [HumanMessage(a)]}, config, stream_mode=["values"]):
    ...
```
The first invoke() sends HumanMessage("create") to the graph, hits interrupt(), and checkpoints the state with a pending interrupt. Then stream() sends the exact same HumanMessage("create") again to the same thread - this creates a new execution that overwrites the previous pending interrupt. The message is effectively duplicated in the state.
You should use either invoke() or stream() for the initial run - not both. Choose one.
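To make the duplication concrete, here is a minimal stdlib simulation (no LangGraph required; all names are illustrative stand-ins) of an append-style reducer receiving the same content twice. The key point is that each `HumanMessage("create")` construction is a fresh object with a fresh id, so an id-based reducer like `add_messages` has nothing to deduplicate on:

```python
import itertools

_ids = itertools.count()

def human_message(content):
    # Stand-in for HumanMessage: every construction gets a NEW id
    return {"id": next(_ids), "content": content}

def add_messages_like(existing, new):
    # Simplified stand-in for add_messages: append anything with an unseen id
    seen = {m["id"] for m in existing}
    return existing + [m for m in new if m["id"] not in seen]

state = []
state = add_messages_like(state, [human_message("create")])  # invoke(...)
state = add_messages_like(state, [human_message("create")])  # stream(...) again
print([m["content"] for m in state])  # ['create', 'create'] - duplicated
```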
bug 2: calling invoke(Command(resume=...)) inside the stream() loop
Inside the for chunk in graph.stream(...) loop, the code calls:
```python
graph.invoke(Command(resume=human_response), config)
```
This is a separate, independent graph execution - it runs the resume to completion and returns the result. But:
- the return value is not captured (not assigned to any variable), so the result is silently discarded
- the outer graph.stream() iterator is now stale - it doesn’t yield the output from the invoke() call, because those are two separate executions
This is why the print("Start execution: rap") fires (the node does execute inside the invoke), but the AI response is never printed - the result goes nowhere.
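A stdlib sketch of the control-flow problem (no LangGraph required; all names are illustrative): the outer iterator is one execution, the nested call is another, and the nested call's return value never reaches the outer loop.

```python
def stream_run_1():
    # Run 1 stops at the interrupt and has nothing more to yield
    yield {"__interrupt__": "What style do you want?"}

def invoke_run_2(style):
    print(f"Start execution: {style}")  # the node DOES run...
    return {"messages": [f"lyrics in {style} style"]}

for chunk in stream_run_1():
    if "__interrupt__" in chunk:
        invoke_run_2("freestyle")  # ...but the return value is discarded

# The outer loop ends here; run 2's result was lost, so no AI response appears.
```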
The resume must be a separate call, not nested inside the first stream’s loop. The correct pattern from the official LangGraph interrupt() documentation is:
```python
# Run 1: stream until interrupt
for chunk in graph.stream(input, config):
    print(chunk)
# {'__interrupt__': (Interrupt(value='...', id='...'),)}

# Run 2: resume in a SEPARATE call
for chunk in graph.stream(Command(resume="value"), config):
    print(chunk)
# {'node': {'human_value': 'value'}}
```
bug 3: wrong import for AnyMessage
```python
from autobahn.wamp.gen.wamp.proto.AnyMessage import AnyMessage
```
This imports AnyMessage from the Autobahn WAMP library (a WebSocket protocol library), not from LangChain. The correct import is:
```python
from langchain_core.messages import AnyMessage
```
Since AnyMessage is only used as a type annotation, this may not break at runtime if autobahn happens to be installed, but it introduces a spurious dependency, raises ImportError when autobahn is absent, and would fail type-checking.
Corrected code
try it:
```python
import uuid
from typing import TypedDict, Annotated

from langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, AIMessage
from langchain_deepseek import ChatDeepSeek
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, add_messages
from langgraph.types import interrupt, Command

model = ChatDeepSeek(
    model="deepseek-chat",
    api_key="...",
    streaming=True
)

class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

def model_node(state: State):
    result = interrupt({"question": "What style do you want?"})
    print(f"Start execution: {result}")
    return {
        "messages": [
            model.invoke(
                [SystemMessage(f"You are a rapper, you write rhyming lyrics with style: {result}")]
                + state["messages"]
            )
        ]
    }

checkpointer = MemorySaver()
builder = StateGraph(State)
builder.add_node("rap", model_node)
builder.add_edge(START, "rap")
graph = builder.compile(checkpointer=checkpointer)

config = {"configurable": {"thread_id": str(uuid.uuid4())}}

print("🚀 Starting execution...")
a = input("Input: ")

# --- Run 1: Send input, get interrupted ---
for chunk in graph.stream({"messages": [HumanMessage(a)]}, config):
    if "__interrupt__" in chunk:
        # The interrupt value is accessible here
        question = chunk["__interrupt__"][0].value
        print(f"Interrupt: {question}")

# --- Get human input ---
human_response = input("Please enter the style you want: ").strip()

# --- Run 2: Resume with the human response (SEPARATE call) ---
for chunk in graph.stream(Command(resume=human_response), config):
    # chunk looks like: {'rap': {'messages': [AIMessage(...)]}}
    if "rap" in chunk:
        ai_message = chunk["rap"]["messages"][-1]
        if isinstance(ai_message, AIMessage):
            print(f"\n🎤 {ai_message.content}")
```
Alternative using invoke() (simpler, non-streaming)
```python
# Run 1: hits interrupt, returns partial state
graph.invoke({"messages": [HumanMessage(a)]}, config)

# Get human input
human_response = input("Please enter the style you want: ").strip()

# Run 2: resume and get full result
result = graph.invoke(Command(resume=human_response), config)
print(result["messages"][-1].content)
```