LangGraph .ainvoke() breaks ASGI async context

When LangGraph tools are invoked via .ainvoke(), we’re experiencing async context loss that seems to be specific to LangGraph. This appears to be related to LangGraph handling async execution contexts differently from ASGI servers.

Background: In ASGI applications (Django, FastAPI, etc.), async views can seamlessly call synchronous database operations because the ASGI server automatically handles the sync/async boundary while preserving request context (user sessions, database connections, tenant information, etc.).

However, when LangGraph executes tools through .ainvoke(), this automatic context preservation breaks down. Functions that work perfectly in regular ASGI views fail when executed within LangGraph’s .ainvoke async context.

SEE: Async programming with langchain | 🦜️🔗 LangChain

By default, LangChain will delegate the execution of unimplemented asynchronous methods to the synchronous counterparts. LangChain almost always assumes that the synchronous method should be treated as a blocking operation and should be run in a separate thread. This is done using asyncio.loop.run_in_executor functionality provided by the asyncio library.
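That run_in_executor fallback is exactly where per-thread state gets lost. A minimal, self-contained sketch of the problem (names are illustrative, not from Django or LangChain): Django keeps DB connections in threading.local, and attributes set on the event-loop thread are simply not visible on the executor thread.

```python
import asyncio
import threading

# Stand-in for Django's per-thread state (e.g. DB connections),
# which lives in a threading.local.
local_state = threading.local()

def sync_op() -> str:
    # Runs on an executor thread, so attributes set on the event-loop
    # thread's `local_state` are not visible here.
    return getattr(local_state, "conn", "NO CONNECTION")

async def main() -> str:
    local_state.conn = "connection-for-this-thread"
    loop = asyncio.get_running_loop()
    # Mirrors how LangChain falls back to the sync implementation.
    return await loop.run_in_executor(None, sync_op)

result = asyncio.run(main())
print(result)  # -> NO CONNECTION: the thread-local did not follow
```

In a plain ASGI view the sync code runs where the server put it, with the right per-thread state; here the executor thread starts cold.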

For example:

from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool

@tool("db_tool", parse_docstring=True)
def get_foo_object(name: str, config: RunnableConfig) -> Foo:
    """Get a Foo by name."""
    # Works fine in ASGI Django async views, but fails when
    # called via LangGraph's ainvoke()
    return Foo.objects.get(name=name)

Error symptoms:

  • DB context loss (observed with Django, at least)
  • Thread-related database connection issues
  • “Can’t call sync method from async context” errors

Resolution
We resolved this by making all tool calls async functions, and wrapping any remaining synchronous functions used in them with sync_to_async from the asgiref.sync library (django doc ref), which by default runs them on the same thread as other thread-sensitive functions, thereby preserving async context.

This resolution isn’t ideal, since the underlying problem can present itself in many ways. I am wondering if there is
a) guidance on a cleaner way to resolve this without one-off wrappers (it seems this can happen with any ASGI Python server regardless of framework)
b) the potential for LangGraph to add a parameter to ainvoke that enables a similar thread_sensitive capability

Any insight is appreciated

Why not write the tool as a native async function so it doesn’t have to be run on a thread?

But this does sound like it could be a bug in how the config is being propagated, since we do copy context in most cases. Someone from the team will look into this.

Thanks for the quick response Will!

We could write these as native async functions explicitly, but it would be an antipattern in our codebase; our service/helper layers are written sync (not explicitly async), and the ASGI framework(s) auto-resolve the async handling. So we’d need to define async functions only for the LangGraph parts of the code, while the rest of the codebase doesn’t need them.

There’s also no good way to enforce that: if someone takes an existing function and wants to use it as a tool but forgets to explicitly make it async, it will continue to work everywhere else, but there will be unintended, potentially hard-to-identify side effects when it’s invoked through LangGraph. Eventually this will happen.

This could potentially help a lot of Django/FastAPI/Starlette users who want to use existing sync service/helper code layers with LangGraph without changing their codebase practices. Appreciate the support!

Ya makes sense. For context, we do auto-promote and copy_context() to propagate async context vars to the sync functions in general. This is also included in CI, so we must be hitting a corner case here somehow.
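For illustration, here is a self-contained sketch of why copy_context() alone can still leave a corner case (variable names are hypothetical): copy_context() carries contextvars across run_in_executor, but it cannot carry threading.local state, which is what Django’s DB connection handling relies on.

```python
import asyncio
import contextvars
import threading

tenant = contextvars.ContextVar("tenant")
local_state = threading.local()  # stand-in for Django's per-thread DB state

def sync_op():
    # The ContextVar survives because the call runs inside a copied
    # context; the threading.local attribute does not, since the
    # executor uses a different thread.
    return tenant.get("missing"), getattr(local_state, "conn", "missing")

async def main():
    tenant.set("acme")
    local_state.conn = "db-conn"
    loop = asyncio.get_running_loop()
    ctx = contextvars.copy_context()
    return await loop.run_in_executor(None, lambda: ctx.run(sync_op))

ctx_var, thread_local = asyncio.run(main())
print(ctx_var, thread_local)  # -> acme missing
```

So copying context preserves request-scoped contextvars, but anything stashed in thread-locals still needs the thread-affinity that sync_to_async’s thread_sensitive mode provides.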

Do you happen to have a full MRE you could share that illustrates it (just to help our team address this for you faster)?

Our project is a bit involved; let me get a cycle and I’ll put together a scoped-down example for reproduction.