Hi @dhanlon
have you checked the Context overview doc in the LangChain docs?
Your mental model is basically right.
In LangGraph:
- context= is for static, per‑run runtime context (user metadata, per‑assistant config, DB handles, etc.).
- State is for dynamic, per‑run data (messages, intermediate values, mutable fields).
- Store is for dynamic, cross‑conversation data (long‑term profiles, preferences).
- Studio “assistants” are mostly named, versioned configurations of the same graph plus default context/config.
Your statement:

> context is information that is static at runtime. Variables that don’t change.

is exactly what LangGraph calls static runtime context.
The context= argument: static runtime context
That is almost exactly your proposed usage:
- Put per‑run, read‑only values like customer_name, customer_preferences, user_id, content_type in a context schema.
- Pass them via context={…} to invoke / stream.
- Read them inside prompts/tools via runtime.context.
So for “personal assistant used by thousands of users; we inject their name/preferences each run”, your idea of:
```python
context = {
    "customer_name": "David",
    "customer_preferences": "...",
}
my_agent.invoke(initial_state, context=context)
```
is precisely what static runtime context is for.
When to use state instead (dynamic runtime context)
Sometimes you do want the graph to be able to change those values during a run (or persist the updated value later): e.g. user says “actually, call me Dave now”, or a tool updates preferences based on behavior.
For that, the docs show putting fields into the state (dynamic runtime context) instead of – or in addition to – context:
```python
from langchain_core.messages import AnyMessage
from langgraph.prebuilt import create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState


class CustomState(AgentState):
    user_name: str


def prompt(state: CustomState) -> list[AnyMessage]:
    user_name = state["user_name"]
    system_msg = f"You are a helpful assistant. User's name is {user_name}"
    return [{"role": "system", "content": system_msg}] + state["messages"]


agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[...],
    state_schema=CustomState,
    prompt=prompt,
)

agent.invoke({
    "messages": "hi!",
    "user_name": "John Smith",
})
```
A reasonable rule of thumb:

- Use context (static runtime context) when:
  - The value should not change during a single run.
  - It’s environment/config‑like (who is the user, what assistant flavor is this, what plan/flags are on, what tools/DBs are wired in).
- Use state when:
  - The value can change as the graph runs (updated preferences, derived flags, step counters).
  - You want it to participate in LangGraph’s state‑update semantics and optional memory.
You can also mix the two: load a stable profile from the store into context, and then have mutable per‑run fields in state.
Cross‑conversation context (store)
For long‑term user data (profiles, preferences, history) that should survive multiple conversations and restarts, the docs recommend the store as “dynamic cross‑conversation context”.
A common production pattern:
- Store user profile/preferences in the store keyed by user_id / thread_id.
- At the top of each run:
  - read from the store,
  - put the data into context if you only want a read‑only snapshot, or
  - into state if you might update it and write it back.
This keeps runs lightweight while still giving you personalization.
How Studio “assistants” relate to context
LangGraph Studio / Cloud adds a UX layer: Assistants.
Conceptually, an assistant is:
- A particular graph (your code),
- plus default context/config values (e.g. content_type="blog_post", default system prompt, enabled tools),
- plus environment (model, checkpointer, etc.), versioned and named.
The LangSmith/Cloud configuration docs and assistant editor articles describe this pattern: you define a context schema in code, and the assistant’s configuration UI lets you provide default values for that context. At runtime, those are applied by passing them to the graph as context.
So:
- Your blog writer with content_type="blog_post" vs "tweet" is just different context values (and maybe different prompts/tools) on the same graph.
- Studio assistants surface that as “assistant configuration”, but semantically you’re still just providing static runtime context + other config.
That’s why Studio feels like “context = assistant configuration knobs”: it is literally using the context mechanism (and sometimes config["configurable"]) under the hood.