Agent action callback not getting triggered

Implemented AsyncCallbackHandler. I'm able to get on_llm_start, on_llm_end, on_tool_start, and on_tool_end events, but on_agent_finish and on_agent_action are never triggered.
Referred to this doc:
https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html#langchain_core.callbacks.base.BaseCallbackHandler.on_agent_action
I'm passing the callbacks via the runnable config, but I'm still not sure why some events are not getting triggered… any suggestions?

hi @BharahthyKannan

Could you please provide some relevant snippets of your code? That would make it easier to understand what might be happening there.

Thanks in advance :slight_smile:
I am looking forward to seeing the snippets :upside_down_face:

Here is the snippet. on_agent_finish and on_agent_action are not getting triggered:

from typing import Any, Dict, List, Optional
from uuid import UUID

from langchain_anthropic import ChatAnthropic
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.messages import BaseMessage
from langchain_core.outputs import LLMResult
from langgraph.prebuilt import create_react_agent


class LoggingHandler(BaseCallbackHandler):

    def on_agent_finish(
        self,
        finish: AgentFinish,
        *,
        run_id: UUID,
        parent_run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> Any:
        print("agent finish")

    def on_agent_action(
        self,
        action: AgentAction,
        *,
        run_id: UUID,
        parent_run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> Any:
        print("agent action")

    def on_chat_model_start(
        self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs
    ) -> None:
        print("Chat model started")

    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        print(f"Chat model ended, response: {response}")

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs
    ) -> None:
        print(f"Chain {serialized.get('name')} started")

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None:
        print(f"Chain ended, outputs: {outputs}")



anthropic_model = ChatAnthropic(model="claude-sonnet-4-5")


def get_weather(city: str) -> str:  
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

agent = create_react_agent(
    model=anthropic_model,  
    tools=[get_weather],  
    prompt="You are a helpful assistant"  
)
callbacks = [LoggingHandler()]

# Run the agent
print(agent.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]},
    config={"callbacks": callbacks},
))

hi @BharahthyKannan

thanks for the snippet :slight_smile:

create_react_agent returns a LangGraph runnable compiled from a graph. It is not the LangChain AgentExecutor loop. The agent-specific callbacks on_agent_action and on_agent_finish are emitted by the AgentExecutor implementation, not by LangGraph’s runnable/graph. With LangGraph prebuilt agents you’ll typically see chain/LLM/tool callbacks, but not the agent-level ones.
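
If you still want a hook around the moment the agent decides to act, the tool callbacks are roughly the closest equivalent with LangGraph prebuilt agents. A minimal sketch (the ToolLoggingHandler name and log messages are just illustrative):

from typing import Any, Dict

from langchain_core.callbacks import BaseCallbackHandler


class ToolLoggingHandler(BaseCallbackHandler):
    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
    ) -> None:
        # Fires when the agent invokes a tool -- roughly where
        # on_agent_action would fire with the legacy AgentExecutor
        print(f"Tool {serialized.get('name')} started with input: {input_str}")

    def on_tool_end(self, output: Any, **kwargs: Any) -> None:
        print(f"Tool ended with output: {output}")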

Thanks @pawel-twardziak, understood. Additional question: how can I pass extra arguments to on_chain_end or on_llm_end? I don't see any documentation.

hi @BharahthyKannan

Have you tried something like this?

from langchain_core.callbacks import BaseCallbackHandler

class MyHandler(BaseCallbackHandler):
    def __init__(self):
        self._by_run = {}

    def on_chain_start(self, serialized, inputs, *, run_id, tags=None, metadata=None, **_):
        self._by_run[run_id] = {"tags": tags or [], "metadata": metadata or {}}

    def on_chain_end(self, outputs, *, run_id, **_):
        ctx = self._by_run.pop(run_id, {})
        # use ctx["metadata"], ctx["tags"] here

And then

chain.invoke(
    user_input,
    config={
        "callbacks": [MyHandler()],
        "tags": ["job:42"],
        "metadata": {"jobId": 42, "user": "alice"},
    },
)

You can see which events metadata is passed to here: langchain/libs/core/langchain_core/callbacks/manager.py at 5f9e3e33cd51c3fffd0111302476412f49a06e01 · langchain-ai/langchain · GitHub

@pawel-twardziak we tried this. The on_llm_end kwargs are not working… they come through empty.

What kwargs do you expect from on_llm_end?
Metadata is passed only to the on_..._start events, not to the ..._end ones.

Say I have an internal project name which I need to log on every on_llm_end… I'd expect to get it from kwargs, but how can I pass it in so that it shows up there?

You can't pass arbitrary extra args directly into on_llm_start/on_llm_end from invoke. Handlers accept **kwargs, but the built-in callback managers only forward specific fields.

What you CAN pass:

  • on_llm_start: tags and metadata are forwarded. Use config to supply them, then stash by run_id
  • on_llm_end: tags are forwarded; metadata is not. Read what you stashed at start using run_id

Here is how on_llm_start and on_llm_end work together:

from langchain_core.callbacks import BaseCallbackHandler

class MyHandler(BaseCallbackHandler):
    def __init__(self):
        self._ctx = {}

    def on_llm_start(self, serialized, prompts, *, run_id, metadata=None, **_):
        self._ctx[run_id] = metadata or {}

    def on_llm_end(self, response, *, run_id, **_):
        # Retrieve the metadata stashed by on_llm_start for this run
        meta = self._ctx.pop(run_id, {})
        projectName = meta.get("projectName")
        if projectName is None:
            return  # or handle/log as needed
        print(f"on_llm_end for project {projectName}")

llm.invoke(
    "prompt",
    config={"callbacks": [MyHandler()], "tags": ["job:42"], "metadata": {"projectName": "MyProject"}},
)
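
Note that the run_id passed to on_llm_end is the same UUID that was passed to the matching on_llm_start, which is what keeps the stash-and-pop lookup correct even when several runs overlap.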