Pass tool1 output as input to tool2

Hello,

I’m building a tool-calling agent where I read data and then process it.

I’m not able to pass the output of tool1 as input to tool2.

I want the LLM to pick the right tool for my input.

from langchain.chat_models import init_chat_model
from langgraph.prebuilt import ToolNode
from langgraph.graph import StateGraph, MessagesState, START, END
import pandas as pd

def read_file(file_location: str):
    """This function is to read csv file."""
    df_ = pd.read_csv(file_location)
    return df_ 

def process_data(input_data: pd.DataFrame):
    """This function process the dataframe."""
    output_data = input_data[['col1','col2']]
    return output_data 



tool_node = ToolNode([read_file, process_data])

model = init_chat_model(model="claude-3-5-haiku-latest")
model_with_tools = model.bind_tools([read_file, process_data])

def should_continue(state: MessagesState):
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END

def call_model(state: MessagesState):
    messages = state["messages"]
    response = model_with_tools.invoke(messages)
    return {"messages": [response]}

builder = StateGraph(MessagesState)

# Define the two nodes we will cycle between
builder.add_node("call_model", call_model)
builder.add_node("tools", tool_node)

builder.add_edge(START, "call_model")
builder.add_conditional_edges("call_model", should_continue, ["tools", END])
builder.add_edge("tools", "call_model")

graph = builder.compile()

graph.invoke({"messages": [{"role": "user", "content": "what's the weather in sf?"}]})

@marco @eyurtsev

Hello @IamExperimenting

I don’t work for LangChain, so please take my response with a grain of salt.

Your graph is a standard ReAct-style setup:
llm <—> tool_call
so once a tool is called, control goes back to the node that calls the LLM to decide which tool to call next.

And remember, at the end of the day the LLM’s input is text, not a pd.DataFrame. So you have a few options:

  1. the most obvious: have a single process_data tool that takes in the file location, reads the file AND does the processing

  2. have the read_file tool read the csv and return the dataframe flattened as a dict. This will append the response to messages (as text), which gets sent to the LLM. Then rely on the LLM to extract the arg from messages to form the second tool call (see the sketch at the end of this post)

  3. be very explicit and have read_file return a Command which updates a specific state variable

    return Command(
        update={
            'file_content': file_content_dict,
        }
    )

Then in tool 2 you can just read it out of the state with:

def process_data(config: RunnableConfig, state: Annotated[dict, InjectedState]) -> bool:

    file_content_dict: dict = state['file_content']
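
For what it’s worth, here’s a rough sketch of what option 2 could look like. Treat it as an illustration rather than working code for your case: the column names are placeholders, and it only really makes sense for small files, since the whole dataframe round-trips through the model’s context as text.

import pandas as pd

def read_file(file_location: str) -> list[dict]:
    """Read a csv file and return its rows as a list of records."""
    df_ = pd.read_csv(file_location)
    # ToolNode serializes this return value into the ToolMessage content,
    # so the LLM sees the data as text and can pass it on to the next tool call.
    return df_.to_dict(orient="records")

def process_data(input_data: list[dict]) -> list[dict]:
    """Keep only col1 and col2 from the records the LLM passes back in."""
    df_ = pd.DataFrame(input_data)
    return df_[["col1", "col2"]].to_dict(orient="records")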

I made changes to the code:

from langchain.chat_models import init_chat_model
from langgraph.prebuilt import ToolNode, InjectedState
from langgraph.graph import StateGraph, MessagesState, START, END
from langchain_core.messages import ToolMessage
from langchain_core.tools import InjectedToolCallId
from langgraph.types import Command
from typing import Annotated
import pandas as pd

def read_file(file_location: str, tool_call_id: Annotated[str, InjectedToolCallId]):
    """This function reads a csv file."""
    df_ = pd.read_csv(file_location)
    state_update = {
        "messages": [ToolMessage("loaded csv file", tool_call_id=tool_call_id)],
        "input_data": df_.to_json()
    }
    return Command(update=state_update)

def process_data(state: Annotated[str, InjectedState], tool_call_id: Annotated[str, InjectedToolCallId]):
    """This function processes the dataframe."""
    output_data = state['input_data']
    output_data = output_data[['col1','col2']]
    return output_data 



tool_node = ToolNode([read_file, process_data])

model = init_chat_model(model="claude-3-5-haiku-latest")
model_with_tools = model.bind_tools([read_file, process_data])

def should_continue(state: MessagesState):
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END

def call_model(state: MessagesState):
    messages = state["messages"]
    response = model_with_tools.invoke(messages)
    return {"messages": [response]}

builder = StateGraph(MessagesState)

# Define the two nodes we will cycle between
builder.add_node("call_model", call_model)
builder.add_node("tools", tool_node)

builder.add_edge(START, "call_model")
builder.add_conditional_edges("call_model", should_continue, ["tools", END])
builder.add_edge("tools", "call_model")

graph = builder.compile()

graph.invoke({"messages": [{"role": "user", "content": "what's the weather in sf?"}]})

@darthShana can you please provide some sample code? I’m getting this error:

Error: 1 validation error for process_data
state
  Input should be a valid string[type=string_type, input_value={'messages':[HumanMessag...,}]

Can you share the full stack trace please? I’m not sure where the error is coming from just looking at that one line.

My guess is you need to explicitly define the inputs to your tool; try doing:

from typing import Annotated
from pydantic import BaseModel, Field
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import StructuredTool
from langgraph.prebuilt import InjectedState

class SaveClassifiedTransactionsInput(BaseModel):
    config: RunnableConfig = Field(description="runnable config")
    state: Annotated[dict, InjectedState] = Field(description="current state")


def save_classified_transactions(config: RunnableConfig, state: Annotated[dict, InjectedState]) -> bool:
    pass

save_classified_transactions_tool_name = "save_transactions"
save_classified_transactions_tool = StructuredTool.from_function(
    func=save_classified_transactions,
    name=save_classified_transactions_tool_name,
    description="""
        Useful to save classified transactions
        """,
    args_schema=SaveClassifiedTransactionsInput,
)

@darthShana can you please provide a full example script? I’m new to LangGraph.

@LangChain-Team @marco can someone please help me or point me to a good tutorial? Or can you please correct me if I’m making a mistake?

Thank you @darthShana for helping out, your suggestions on using InjectedState are correct!

@IamExperimenting you were almost there! I noticed a few issues, probably due to copy-pasting; you just need to create a new field in the State that contains the uploaded data.
Here’s a full example to get started (parallel_tool_calls=False prevents the process_data tool from being called at the same time as read_file):

from langchain.chat_models import init_chat_model
from langgraph.prebuilt import ToolNode, InjectedState
from langgraph.graph import StateGraph, MessagesState, START, END
from langchain_core.messages import ToolMessage
from langgraph.types import Command
from typing import Annotated
from langchain_core.tools import InjectedToolCallId
import pandas as pd
from dotenv import load_dotenv
load_dotenv()

class State(MessagesState):
    input_data: pd.DataFrame

def read_file(file_location: str, tool_call_id: Annotated[str, InjectedToolCallId]):
    """
    This function reads a csv file from the given location and stores the resulting dataframe in the graph state.

    Args:
        file_location: The location of the csv file to read.

    Returns:
        A Command that updates the state with the loaded dataframe and a confirmation message.
    """

    df_ = pd.read_csv(file_location)
    state_update = {
        "messages": [ToolMessage(content="loaded csv file", tool_call_id=tool_call_id)],
        "input_data": df_
    }
    return Command(update=state_update)

def process_data(state: Annotated[dict, InjectedState], tool_call_id: Annotated[str, InjectedToolCallId]):
    """
    This function processes the dataframe from the state.
    
    Returns:
        The processed DataFrame or a Command with error message if processing fails
    """
    try:
        output_data = state['input_data']
        print(output_data.head())
        return output_data
    except Exception as e:
        print(f"Error processing data: {e}")
        return Command(update={"messages": [ToolMessage(content=f"Error processing data: {e}", tool_call_id=tool_call_id)]})

tools = [read_file, process_data]
tool_node = ToolNode(tools)

model = init_chat_model(model="gpt-4o-mini", parallel_tool_calls=False)
model_with_tools = model.bind_tools(tools)

def should_continue(state: State):
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END

def call_model(state: State):
    messages = state["messages"]
    response = model_with_tools.invoke(messages)
    return {"messages": [response]}

builder = StateGraph(State)

# Define the two nodes we will cycle between
builder.add_node("call_model", call_model)
builder.add_node("tools", tool_node)

builder.add_edge(START, "call_model")
builder.add_conditional_edges("call_model", should_continue, ["tools", END])
builder.add_edge("tools", "call_model")

graph = builder.compile()

graph.invoke({"messages": [{"role": "user", "content": "read the file /Users/mperini/Projects/agents/examples/sample_data.csv and process it"}]})

@marco @darthShana thanks a lot for your help. It works 🙂

@marco but when I tried the same thing in a supervisor agent architecture, it didn’t work. It didn’t pass the information or update the state.