Feature Request: Support Lazy Evaluation in interrupt via Callables

Description

Currently, the interrupt function requires its value to be passed eagerly. This value is surfaced to the client when a GraphInterrupt is raised.

I propose extending interrupt (and introducing an async_interrupt counterpart) to accept callables (both sync and async). If a callable is provided, it should be executed only at the moment the interrupt is actually raised, rather than being evaluated beforehand and passed as a static argument.

Motivation

In complex nodes, the “value” surfaced to a human might require:

  1. Expensive Computation: Generating a summary or a preview of a large dataset.

  2. External Calls: Fetching the latest state from a DB or a third-party API to ensure the human sees the most up-to-date context.

  3. Cleanliness: Avoiding boilerplate logic inside the node to “prepare” the interrupt value if it’s only needed conditionally.

By passing a callable, we ensure these operations only happen if the graph hasn’t already been resumed for that specific interrupt index.

Proposed Changes

1. Support for Callables in interrupt

The interrupt function logic would check if value is callable. If it is, and no resume value is found in the scratchpad, it executes the callable to populate the Interrupt object.
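As a rough sketch of just that check (the scratchpad lookup and Interrupt construction are elided; `resolve_interrupt_value` is a hypothetical helper name for illustration, not existing LangGraph API):

```python
from typing import Any, Callable, Union


def resolve_interrupt_value(value: Union[Any, Callable[[], Any]]) -> Any:
    # Evaluate the payload lazily: a callable runs only here, at the
    # moment the interrupt is actually about to be raised.
    return value() if callable(value) else value


# A counter demonstrates the callable is invoked exactly once, and only
# when no resume value short-circuits the interrupt.
calls = {"n": 0}


def expensive_preview() -> str:
    calls["n"] += 1
    return "summary of a large dataset"


resume_found = True  # resume value already in the scratchpad
if not resume_found:
    resolve_interrupt_value(expensive_preview)
assert calls["n"] == 0  # skipped: no expensive work done

resume_found = False  # first execution: interrupt will be raised
payload = resolve_interrupt_value(expensive_preview)
assert calls["n"] == 1 and payload == "summary of a large dataset"
```

This keeps plain values working unchanged while deferring callables until they are genuinely needed.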

2. Introduction of async_interrupt

Since many LangGraph workflows are async, providing an async_interrupt allows users to await database or API calls lazily.

Example implementation for async_interrupt:

Python

# Illustrative import paths; exact module locations may vary by version.
from typing import Any, Awaitable, Callable, Union

from langgraph.config import get_config
from langgraph.constants import CONFIG_KEY_CHECKPOINT_NS
from langgraph.errors import GraphInterrupt
from langgraph.types import Interrupt

async def async_interrupt(value: Union[Any, Callable[[], Awaitable[Any]]]) -> Any:
    conf = get_config()["configurable"]
    # ... logic to check scratchpad for an existing resume value ...

    # When no resume value is found, evaluate the payload lazily:
    interrupt_value = await value() if callable(value) else value

    raise GraphInterrupt(
        (
            Interrupt.from_ns(
                value=interrupt_value,
                ns=conf[CONFIG_KEY_CHECKPOINT_NS],
            ),
        )
    )

Use Case Example

Python

async def my_node(state: State):
    # The 'get_latest_db_context' only runs if we actually 
    # hit the interrupt, not on every re-execution of the node.
    answer = await async_interrupt(get_latest_db_context)
    return {"data": answer}
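For completeness, `get_latest_db_context` above is assumed to be a zero-argument async callable; a hypothetical stand-in might look like this (the name and return shape are illustrative, not real API):

```python
import asyncio
from typing import Any


async def get_latest_db_context() -> dict[str, Any]:
    # Hypothetical stand-in: a real version would query the DB or API.
    await asyncio.sleep(0)  # placeholder for driver/API latency
    return {"order_status": "pending_review"}


# The function object is passed uncalled; nothing executes until the
# interrupt machinery awaits it.
context = asyncio.run(get_latest_db_context())
```

The key point is that the node hands over the function itself, so re-executions that find a resume value never pay for the fetch.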

hi @alonahmias

I’m a huge fan of this feature :heart_with_arrow: Just raised a PR for it :slight_smile: feat(langgraph): add async_interrupt by pawel-twardziak · Pull Request #6729 · langchain-ai/langgraph · GitHub

@sydney-runkle @mdrxy @wfh

Haha, Pawel beat me to it. We’ll get this into the next release


Sorry Will, I didn’t want it to turn out this way :face_holding_back_tears: This feature simply stole my heart :person_shrugging: :grinning_face_with_smiling_eyes:


I’m very happy you got so excited about my feature request :slight_smile: For now we hard-coded this function in our code, and we would very much like to delete it :joy:


hi @alonahmias

look at that comment feat(langgraph): add async_interrupt by pawel-twardziak · Pull Request #6729 · langchain-ai/langgraph · GitHub :slight_smile:

Hey, unfortunately this doesn’t solve my issue: I’m running the interrupt in a subgraph, and it still re-evaluates the heavy node twice. Or am I missing something?

I elaborated more in the issue you opened.