Hi, this is Yoo.
I’ve been using prebuilt.create_react_agent to build agents for a multi-agent system.
It’s super cool, but I ran into a problem when I tried to stream the output of my agents. I used the stream function with the parameter stream_mode="messages" to stream the LLM tokens generated by the model, and I noticed that the langgraph_node in the chunk metadata is 'agent' for every agent. Below is an example of the standard output, in the order chunk → metadata → next chunk → next metadata:
content='\n' additional_kwargs={} response_metadata={} id='run--f567f474-6ee8-493f-b533-0a89269187a7'
{'thread_id': '975250e8-36a9-47af-afc5-0fb7420fa0db', 'langgraph_step': 3, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'research_agent:f9beb20b-4112-7f3e-227a-5392c6a11448|agent:1bc3291f-4200-e125-bc8f-906557617fa0', 'checkpoint_ns': 'research_agent:f9beb20b-4112-7f3e-227a-5392c6a11448', 'ls_provider': 'openai', 'ls_model_name': 'gpt-4o-mini', 'ls_model_type': 'chat', 'ls_temperature': 0.5}
content='-' additional_kwargs={} response_metadata={} id='run--f567f474-6ee8-493f-b533-0a89269187a7'
{'thread_id': '975250e8-36a9-47af-afc5-0fb7420fa0db', 'langgraph_step': 3, 'langgraph_node': 'agent', 'langgraph_triggers': ('branch:to:agent',), 'langgraph_path': ('__pregel_pull', 'agent'), 'langgraph_checkpoint_ns': 'research_agent:f9beb20b-4112-7f3e-227a-5392c6a11448|agent:1bc3291f-4200-e125-bc8f-906557617fa0', 'checkpoint_ns': 'research_agent:f9beb20b-4112-7f3e-227a-5392c6a11448', 'ls_provider': 'openai', 'ls_model_name': 'gpt-4o-mini', 'ls_model_type': 'chat', 'ls_temperature': 0.5}
So I was not able to distinguish which chunk came from which agent. This is because the name of the LLM-calling node is hardcoded as 'agent' in prebuilt.create_react_agent. There is a name parameter in the create_react_agent function, but it applies at the graph level, not the node level. So what I did was temporarily modify the function so that the node name is set from the given name parameter.
In my case, I wanted to stream only the output of the supervisor agent, which is created by the create_supervisor function from langgraph-supervisor, and which also uses create_react_agent internally.
Since I don’t know langgraph that deeply, I wonder if there is some fancy way to handle this.
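For reference, the workaround I ended up with on the consumer side is to parse the agent name out of checkpoint_ns instead of relying on langgraph_node, since in the metadata above checkpoint_ns starts with the subgraph name (e.g. 'research_agent:f9beb20b-...'). This is just a sketch based on the namespace format I observed, not a documented API guarantee:

```python
def agent_name_from_metadata(metadata: dict) -> str:
    """Best-effort extraction of the agent name from streamed chunk metadata.

    Assumption (from the observed output above): checkpoint_ns looks like
    '<agent_name>:<uuid>' or '<agent_name>:<uuid>|agent:<uuid>', so the first
    path segment before ':' is the agent/subgraph name. Falls back to
    langgraph_node (which is always 'agent' for prebuilt ReAct agents).
    """
    ns = metadata.get("checkpoint_ns", "")
    if not ns:
        return metadata.get("langgraph_node", "")
    # Take the first namespace segment, then strip the checkpoint UUID.
    return ns.split("|")[0].split(":")[0]


# Example with the metadata shape from the stream output above:
meta = {
    "checkpoint_ns": "research_agent:f9beb20b-4112-7f3e-227a-5392c6a11448",
    "langgraph_node": "agent",
}
print(agent_name_from_metadata(meta))  # research_agent
```

This lets me filter chunks to only the supervisor’s tokens, but it feels fragile because it depends on the namespace string format rather than a named field.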
However, I still think that if this function could optionally accept a parameter from the user and dynamically set the LLM node name, it would make the distinction clearer and allow more flexible handling of different situations. If this sounds reasonable, I would really love to open a PR for it, receive feedback, and contribute to this amazing framework!
I would appreciate the maintainers’ help.
Best Regards,