Issues with Messages in State when using AWS Bedrock and OpenAI models

I keep getting LangGraph state validation errors on my messages key, or API errors from AWS Bedrock because of the way my messages are structured. Is there a way to properly standardize the messages (BaseMessage) so that they work with most model provider APIs AND with LangChain/LangGraph message objects?
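For reference, a minimal stdlib-only sketch of the standardization idea. In LangChain this coercion is done by langchain_core.messages.convert_to_messages (visible in the traceback below); the Msg class and normalize function here are hypothetical stand-ins, not LangChain's API.

```python
# Sketch of message normalization (stdlib only). langchain_core's
# convert_to_messages similarly coerces tuples, dicts, and message
# objects into typed message classes; Msg/normalize are stand-ins.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Msg:
    role: str                           # "human", "ai", "tool", "system"
    content: str
    tool_call_id: Optional[str] = None  # required when role == "tool"

def normalize(raw) -> Msg:
    """Coerce a (role, content) tuple or a provider-style dict to Msg."""
    if isinstance(raw, Msg):
        return raw
    if isinstance(raw, tuple):
        return Msg(role=raw[0], content=raw[1])
    if isinstance(raw, dict):
        role = raw.get("role") or raw.get("type")
        if role == "tool" and "tool_call_id" not in raw:
            # This mirrors the failure in the traceback below: a tool
            # message without its tool_call_id cannot be coerced.
            raise KeyError("tool_call_id")
        return Msg(role=role, content=raw.get("content", ""),
                   tool_call_id=raw.get("tool_call_id"))
    raise TypeError(f"unsupported message: {raw!r}")
```

The key point is that a tool message is only round-trippable if its tool_call_id survives every conversion step.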


Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/langgraph/utils/runnable.py", line 678, in ainvoke
    input = await step.ainvoke(input, config)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langgraph/utils/runnable.py", line 440, in ainvoke
    ret = await self.afunc(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langgraph/graph/branch.py", line 185, in _aroute
    value = reader(config)
            ^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langgraph/pregel/read.py", line 110, in do_read
    return mapper(read(select, fresh))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langgraph/pregel/algo.py", line 201, in local_read
    cc.update(updated[k])
  File "/usr/local/lib/python3.11/site-packages/langgraph/channels/binop.py", line 91, in update
    self.value = self.operator(self.value, value)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langgraph/graph/message.py", line 39, in _add_messages
    return func(left, right, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langgraph/graph/message.py", line 182, in add_messages
    for m in convert_to_messages(right)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/messages/utils.py", line 373, in convert_to_messages
    return [_convert_to_message(m) for m in messages]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/messages/utils.py", line 373, in <listcomp>
    return [_convert_to_message(m) for m in messages]
            ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/messages/utils.py", line 346, in _convert_to_message
    message = _create_message_from_message_type(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/messages/utils.py", line 289, in _create_message_from_message_type
    message = ToolMessage(content=content, artifact=artifact, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/messages/tool.py", line 146, in __init__
    super().__init__(content=content, **kwargs)
  File "/usr/local/lib/python3.11/site-packages/langchain_core/messages/base.py", line 72, in __init__
    super().__init__(content=content, **kwargs)
  File "/usr/local/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 115, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 253, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/messages/tool.py", line 132, in coerce_args
    tool_call_id = values["tool_call_id"]
                   ~~~~~~^^^^^^^^^^^^^^^^
KeyError: 'tool_call_id'

Hi @jriedel199715

How is your tool message created? Are you using ToolMessage? Can you share how it looks in your code?


Long story short: returning state.model_dump() at the end of a LangGraph node mangles the BaseMessage objects in my state's messages key. The saved state looks fine, but the next time those messages are run through an LLM there is a validation error, because model_dump was wiping some important pieces of the messages (here, the tool_call_id on tool messages).
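To illustrate the failure mode without pulling in langchain or pydantic, here is a stdlib sketch: dataclasses.asdict plays the role of model_dump, and ToolMsg/State are hypothetical stand-ins for ToolMessage and a pydantic state model.

```python
# Stdlib stand-in for what model_dump() does to a typed state: every
# message object is recursively flattened into a plain dict, so the
# next node receives dicts that must be re-coerced, not message objects.
from dataclasses import dataclass, asdict, field

@dataclass
class ToolMsg:                  # stand-in for langchain_core's ToolMessage
    content: str
    tool_call_id: str
    type: str = "tool"

@dataclass
class State:                    # stand-in for a pydantic LangGraph state
    messages: list = field(default_factory=list)

state = State(messages=[ToolMsg(content="42", tool_call_id="call_1")])
dumped = asdict(state)          # analogous to state.model_dump()

# The typed message is gone; only a dict remains. Re-coercion then
# depends on the dict still carrying every field the message class
# requires -- a tool dict that loses tool_call_id raises the KeyError
# shown in the traceback.
assert isinstance(dumped["messages"][0], dict)
assert not isinstance(dumped["messages"][0], ToolMsg)
```

Whether the round trip succeeds depends on the dumped dicts matching the coercion schema exactly, which is why keeping the original message objects is safer.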

I have no idea how you are handling the state (whether you are using a checkpointer or not), but it sounds like something custom. Sharing the code snippets would be helpful; otherwise I'm not sure how to assist :pensive_face:


Sorry, I meant that it is resolved. The resolution is: do not use state.model_dump() at the end of a node. It might not apply to everyone, but for me that was the solution.
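A minimal sketch of what that resolution looks like in a node, assuming the usual LangGraph convention of returning a partial state update. AIMessage here is a hypothetical stand-in for langchain_core's class so the snippet stays self-contained.

```python
# Sketch of the fix: a node returns only the keys it changed, keeping
# the values as real message objects, so the add_messages reducer never
# has to re-coerce lossy dicts.
from dataclasses import dataclass

@dataclass
class AIMessage:                # stand-in for langchain_core's AIMessage
    content: str
    type: str = "ai"

def my_node(state):
    reply = AIMessage(content="hello")
    # Do this: return a partial update containing typed messages...
    return {"messages": [reply]}
    # ...not this, which flattens every message into a plain dict:
    # return state.model_dump()
```

Returning the partial update also plays nicely with checkpointers, since LangGraph merges it into the existing state via the channel reducers.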
