When I run my deployed graph (chat_agent) on LangSmith / LangGraph Cloud, the run fails immediately with:
RuntimeError: Cannot patch execution_info before it has been set
In LangSmith, the failure shows up right at the start of the trace (e.g. around __START__). I suspect a version mismatch between the platform API and the Python library (not certain): on traces that work, I see LANGSMITH_LANGGRAPH_API_VERSION 0.7.90, while failing traces show 0.7.96.
My setup:
`langgraph.json` uses `"dependencies": ["."]`, and `pyproject.toml` has `langgraph>=0.2.0`. The graph is built with Deep Agents (`create_deep_agent`) and a `MemorySaver` checkpointer, if that helps triage.
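For context, my `langgraph.json` looks roughly like this (the graph module path here is illustrative, not my actual layout):

```json
{
  "dependencies": ["."],
  "graphs": {
    "chat_agent": "./src/agent.py:graph"
  }
}
```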
What I tried:
I dug through the LangSmith build logs for a known-good build. I see the base image, e.g.
`gcr.io/langchain-prod/langgraph-executor-unlicensed:0.7.94-py3.11`,
and lines like
`langgraph-api==0.7.94`, `langgraph-runtime-inmem==0.27.0`, `langgraph-cli==0.4.19`.
However, I don't see a clear `langgraph==…` line for the core `langgraph` library, and searching the logs for "already satisfied" / `langgraph==` didn't surface it. My understanding is that the core package may already ship in the executor image unless my install step upgrades it.
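In the meantime, one way I could check which core `langgraph` actually ends up importable at runtime is to log the installed distribution versions from my graph module at startup. A sketch (package names are my assumption about what ships in the image):

```python
import importlib.metadata


def installed_version(pkg: str) -> str:
    """Return the installed version of a distribution, or a fallback string."""
    try:
        return importlib.metadata.version(pkg)
    except importlib.metadata.PackageNotFoundError:
        return "not installed"


# Log what is actually importable in the running container
for pkg in ("langgraph", "langgraph-api", "langgraph-cli"):
    print(f"{pkg} -> {installed_version(pkg)}")
```

Printing this at import time would show up in the deployment logs, so I could compare it against what the build log claims.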
Questions:
- Is this `execution_info` / `patch_execution_info` error a known issue with a specific pairing of platform API / executor / `langgraph` versions?
- What's the recommended way to determine which `langgraph` (PyPI) version is actually executing my graph in Cloud, given the executor base image?
- What's the recommended way to pin that version from a project that uses `dependencies: ["."]`? For example, should `langgraph==x.y.z` in `pyproject.toml` reliably override what's in the executor image?
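For concreteness, this is the kind of pin I have in mind in `pyproject.toml` (the version number is illustrative, not a known-good one):

```toml
[project]
dependencies = [
    "langgraph==0.2.60",  # illustrative exact pin; unclear if this wins over the image
]
```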
Happy to provide repo details, `langgraph.json`, and the full traceback if useful.