When using LangGraph tools that are invoked via .ainvoke(), we’re experiencing async context loss that seems to be specific to LangGraph. This appears to be related to how LangGraph handles async execution contexts differently from ASGI servers.
Background: In ASGI applications (Django, FastAPI, etc.), async views can seamlessly call synchronous database operations because the ASGI server automatically handles the sync/async boundary while preserving request context (user sessions, database connections, tenant information, etc.).
However, when LangGraph executes tools through .ainvoke(), this automatic context preservation breaks down. Functions that work perfectly in regular ASGI views fail when executed within LangGraph’s .ainvoke() async context.
SEE: Async programming with langchain | 🦜️🔗 LangChain
By default, LangChain will delegate the execution of unimplemented asynchronous methods to the synchronous counterparts. LangChain almost always assumes that the synchronous method should be treated as a blocking operation and should be run in a separate thread. This is done using asyncio.loop.run_in_executor functionality provided by the asyncio library.
For example, this tool works in a regular ASGI Django async view but fails when called via LangGraph's ainvoke():

```python
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool

from myapp.models import Foo  # our Django model (module path illustrative)


@tool("db_tool", parse_docstring=True)
def get_foo_object(name: str, config: RunnableConfig) -> Foo:
    """Get a Foo by name."""
    # Works fine in an ASGI Django async view,
    # but fails when called via LangGraph's ainvoke()
    return Foo.objects.get(name=name)
```
Error symptoms:
- Database context loss (seen at least with Django)
- Thread-related database connection issues
- “Can’t call sync method from async context” errors
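These symptoms are consistent with the run_in_executor hand-off quoted above: the synchronous tool body runs on an executor thread, where thread-local state (which Django uses for things like per-thread database connections) is not visible. A minimal, self-contained illustration with no Django or LangChain involved (the "connection" attribute is just a stand-in):

```python
import asyncio
import threading

# Stand-in for per-thread state such as Django's database connections
local_state = threading.local()


def sync_lookup() -> str:
    # Returns whatever "connection" this thread can see
    return getattr(local_state, "connection", "<no connection on this thread>")


async def main() -> None:
    local_state.connection = "conn-from-request-thread"

    # Called directly on the current thread: the connection is visible
    print(sync_lookup())

    # Called via run_in_executor (as the sync fallback does): the worker
    # thread has its own, empty thread-local state
    loop = asyncio.get_running_loop()
    print(await loop.run_in_executor(None, sync_lookup))


asyncio.run(main())
```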
Resolution
We resolved this by making all tools async functions and wrapping any remaining synchronous calls inside them with sync_to_async from the asgiref.sync library (Django doc ref). With its default thread_sensitive=True, the wrapped call runs on the same thread as other thread-sensitive code, which preserves the async/request context.
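A minimal sketch of the pattern we ended up with (assuming a Django model Foo in myapp.models; names are illustrative):

```python
from asgiref.sync import sync_to_async
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool

from myapp.models import Foo  # hypothetical Django model


@tool("db_tool", parse_docstring=True)
async def get_foo_object(name: str, config: RunnableConfig) -> Foo:
    """Get a Foo by name.

    Args:
        name: Name of the Foo to look up.
    """
    # thread_sensitive=True (the default) runs the ORM call on the same
    # thread as other thread-sensitive code, so Django's per-thread
    # connection and request context are preserved.
    return await sync_to_async(Foo.objects.get, thread_sensitive=True)(name=name)
```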
This resolution isn’t ideal, since the problem can surface in many different places. I am wondering if there is:
a) guidance on a cleaner way to resolve this without one-off wrappers (it seems this can happen with any ASGI Python server, regardless of framework; a sketch of the kind of reusable helper we have in mind follows below), or
b) the potential for LangGraph to add a parameter to ainvoke() that enables a similar thread_sensitive capability.
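For concreteness, here is roughly the kind of reusable helper we mean for (a): a hypothetical django_safe_tool decorator (not an existing LangChain or LangGraph API) that exposes a synchronous function as an async tool whose body runs on asgiref's thread-sensitive executor:

```python
import functools

from asgiref.sync import sync_to_async
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool

from myapp.models import Foo  # hypothetical Django model


def django_safe_tool(*tool_args, **tool_kwargs):
    """Hypothetical helper: expose a sync function as an async LangChain tool
    that runs on asgiref's thread-sensitive executor."""

    def decorator(func):
        @functools.wraps(func)  # keeps signature/docstring for schema inference
        async def async_wrapper(*args, **kwargs):
            return await sync_to_async(func, thread_sensitive=True)(*args, **kwargs)

        return tool(*tool_args, **tool_kwargs)(async_wrapper)

    return decorator


@django_safe_tool("db_tool", parse_docstring=True)
def get_foo_object(name: str, config: RunnableConfig) -> Foo:
    """Get a Foo by name.

    Args:
        name: Name of the Foo to look up.
    """
    return Foo.objects.get(name=name)
```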
Any insight is appreciated