Force tool calling

I’m taking the Deep Agents from Scratch course, and in the first lesson I tried changing the code a bit, and now I completely don’t understand the results.

It’s a pretty standard calculator tool, except that for “add” I deliberately do subtraction.

from typing import Literal, Union

from langchain_core.tools import tool


@tool
def calculator(
    operation: Literal["add", "subtract", "multiply", "divide"],
    a: Union[int, float],
    b: Union[int, float],
) -> Union[int, float, str]:
    """Define a two-input calculator tool.

    Arg:
        operation (str): The operation to perform ('add', 'subtract', 'multiply', 'divide').
        a (float or int): The first number.
        b (float or int): The second number.
        
    Returns:
        result (float or int): the result of the operation
    Example
        Divide: result   = a / b
        Subtract: result = a - b
    """
    if operation == 'divide' and b == 0:
        return "Error: division by zero is not allowed."

    # Perform calculation
    if operation == 'add':
        result = a - b  # intentional bug: subtraction instead of addition
    elif operation == 'subtract':
        result = a - b
    elif operation == 'multiply':
        result = a * b
    elif operation == 'divide':
        result = a / b
    else:
        result = "unknown operation"
    return result

Later I run:

from IPython.display import Image, display
from langchain.chat_models import init_chat_model
from langchain_core.tools import tool
from langchain.agents import create_agent
from utils import format_messages

# Create the agent using create_agent directly

SYSTEM_PROMPT = "You are a helpful arithmetic assistant who is an expert at using a calculator. Rely on tools."

model = init_chat_model(model="xai:grok-4-fast", temperature=0.0)
tools = [calculator]

# Create agent
agent = create_agent(
    model,
    tools,
    system_prompt=SYSTEM_PROMPT,
    #state_schema=AgentState,  # default
).with_config({"recursion_limit": 20})  #recursion_limit limits the number of steps the agent will run

# Show the agent
display(Image(agent.get_graph(xray=True).draw_mermaid_png()))

result1 = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "What is 3.1 + 4.2?",
            }
        ],
    }
)

format_messages(result1["messages"])

And here is what I got:

hi @Set27

It looks like the model is ignoring the tool’s output in favor of its own reasoning/knowledge, right?

If so, you might need to refine the SYSTEM_PROMPT to instruct the model to strictly follow the tool output.
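For example, something along these lines — the exact wording here is just one possible phrasing to adapt, not an official recipe:

```python
# One possible stricter prompt (hypothetical wording, adjust to taste):
SYSTEM_PROMPT = (
    "You are a helpful arithmetic assistant who is an expert at using a calculator. "
    "Always use the calculator tool for arithmetic, and strictly follow the tool "
    "output in your final answer, even if it contradicts your own calculation. "
    "Never correct or second-guess the tool's result."
)
```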


Thanks for the response.
I tried “rely on tool calls” and it didn’t help, but your phrase “strictly follow the tool output” did.
Do I understand correctly that the problem is that the model thinks on its own by default?

hi @Set27

Yes, an LLM is free to reason on its own unless you instruct it otherwise.
Moreover, basic arithmetic looks easy to an LLM, so it can detect a “wrong” tool output and answer from its own internal knowledge instead.

System prompt matters 🙂
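Since the thread is titled “Force tool calling”: besides prompt engineering, many chat models in LangChain also let you force a tool call with `tool_choice` when binding tools. Support and accepted values vary by provider, so treat this sketch as an assumption to verify against your model’s docs — and note it forces the model to *call* the tool, not to *trust* its output:

```python
# Hypothetical sketch: force a tool call via tool_choice (provider-dependent).
FORCED = {"tool_choice": "calculator"}  # some providers use "any" or "required"

# model = init_chat_model("xai:grok-4-fast", temperature=0.0)
# model_forced = model.bind_tools([calculator], **FORCED)
# agent = create_agent(model_forced, [calculator], system_prompt=SYSTEM_PROMPT)
```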
