Is Context for static information or for langgraph assistant configurations?

Issue/Question

I am seeking some clarity on what LangChain and LangGraph consider “context” vs “assistants” (configurable agents… an easy example being a blog writer where prompts/tools change for types of content).

Description

From my understanding, context is information that is static at runtime: variables that don’t change.

To me, something like customer_name, user_preferences, or user_id (perhaps used in a tool) is “context” that gets utilized at runtime via something like the dynamic_prompt middleware to fill out the system prompt with the user’s name, preferences, or strict rules scoped to this user/invocation of the agent. Let’s say this agent is a personal assistant (and we have 1000s of users that use this agent).

However, LangGraph Studio would appear to suggest “context” is strictly configuration options used to change agent behavior via custom assistants that you can version. In the example of a blog writer agent, “context” might contain a config variable called content_type, which would be used to influence the system prompt and its tools… thus creating an assistant, or variation, of the blog writer agent.

Small example of what I thought would be the correct usage:

class MyAgentContext(TypedDict):
    """Static, read-only context for a single run."""

    thread_id: NotRequired[str | None]
    customer_name: Required[str]
    customer_preferences: Required[str]


class MyAgentState(AgentState):
    """Dynamic state that evolves during the run."""

    messages: Annotated[list[BaseMessage], add_messages]


initial_state = {
    "messages": [HumanMessage(content="Hello world!")],
}

context = {
    "customer_name": "David",
    "customer_preferences": "David likes to go on long walks, and his favorite football team is the Baltimore Ravens (poor guy)",
}

some_results = my_agent.invoke(initial_state, context=context)

and for every different “customer” that invoked this agent, the agent would know their name across turns… the agent/graph does not “change” the customer’s name, it just references it at various points.

Any and all thoughts/comments are welcomed… would love to understand what is a more correct usage and what others are doing.

Hi @dhanlon

have you checked this doc: Context overview - Docs by LangChain?

Your mental model is basically right.

In LangGraph:

  • context= is for static, per‑run runtime context (user metadata, per‑assistant config, DB handles, etc.).
  • State is for dynamic, per‑run data (messages, intermediate values, mutable fields).
  • Store is for dynamic, cross‑conversation data (long‑term profiles, preferences).
  • Studio “assistants” are mostly named, versioned configurations of the same graph plus default context/config.

Your statement:

context is information that is static at runtime. Variables that don’t change.

is exactly what LangGraph calls static runtime context.

The context= argument: static runtime context

That is almost exactly your proposed usage:

  • Put per‑run, read‑only values like customer_name, customer_preferences, user_id, content_type in a context schema.

  • Pass them via context={…} to invoke / stream.

  • Read them inside prompts/tools via runtime.context.

So for “personal assistant used by thousands of users; we inject their name/preferences each run”, your idea of:

context = {
    "customer_name": "David",
    "customer_preferences": "...",
}
my_agent.invoke(initial_state, context=context)

is precisely what static runtime context is for.
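Independent of the LangGraph API itself, the shape of that pattern is easy to see in plain Python: a read-only context dict gets folded into the system prompt at the start of each run and is never mutated. All names below are illustrative, not library APIs:

```python
from typing import TypedDict


class AgentContext(TypedDict):
    """Illustrative static, per-run context (not a LangGraph type)."""

    customer_name: str
    customer_preferences: str


def build_system_prompt(context: AgentContext) -> str:
    """Interpolate read-only context into the system prompt.

    The context is only read here, never written — that is the defining
    property of static runtime context.
    """
    return (
        f"You are a personal assistant for {context['customer_name']}. "
        f"Known preferences: {context['customer_preferences']}"
    )


prompt = build_system_prompt(
    {"customer_name": "David", "customer_preferences": "long walks"}
)
```

In real LangGraph code the same read happens inside a prompt function or tool via the runtime's context, but the data flow is identical.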

When to use state instead (dynamic runtime context)

Sometimes you do want the graph to be able to change those values during a run (or persist the updated value later): e.g. user says “actually, call me Dave now”, or a tool updates preferences based on behavior.

For that, the docs show putting fields into the state (dynamic runtime context) instead of – or in addition to – context:

class CustomState(AgentState):
    user_name: str

def prompt(state: CustomState) -> list[AnyMessage]:
    user_name = state["user_name"]
    system_msg = f"You are a helpful assistant. User's name is {user_name}"
    return [{"role": "system", "content": system_msg}] + state["messages"]

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[...],
    state_schema=CustomState,
    prompt=prompt,
)

agent.invoke({
    "messages": "hi!",
    "user_name": "John Smith"
})

A reasonable rule of thumb:

  • Use context (static runtime context) when:
    • The value should not change during a single run.
    • It’s environment/config‑like (who is the user, what assistant flavor is this, what plan/flags are on, what tools/DBs are wired in).
  • Use state when:
    • The value can change as the graph runs (updated preferences, derived flags, step counters).
    • You want it to participate in LangGraph’s state‑update semantics and optional memory.

You can also mix the two: load a stable profile from the store into context, and then have mutable per‑run fields in state.
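A library-free sketch of that mix (all names are mine, not LangGraph APIs): the stable profile rides along untouched as context, while a stand-in "node" updates per-run fields in state:

```python
from typing import TypedDict


class Context(TypedDict):
    """Read-only snapshot loaded before the run (illustrative)."""

    customer_name: str


class State(TypedDict):
    """Mutable per-run fields (illustrative)."""

    step_count: int
    derived_flags: list[str]


def run_one_node(context: Context, state: State) -> State:
    """Stand-in for a graph node: reads context, returns updated state."""
    return {
        "step_count": state["step_count"] + 1,
        "derived_flags": state["derived_flags"]
        + [f"greeted:{context['customer_name']}"],
    }


context: Context = {"customer_name": "David"}
state: State = {"step_count": 0, "derived_flags": []}
state = run_one_node(context, state)
# state changed; context is exactly what was passed in
```

The asymmetry is the point: nodes return state updates, but context has no update channel at all.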

Cross‑conversation context (store)

For long‑term user data (profiles, preferences, history) that should survive multiple conversations and restarts, the docs recommend the store as “dynamic cross‑conversation context”.

A common production pattern:

  1. Store user profile/preferences in the store keyed by user_id / thread_id.
  2. At the top of each run:
    • Read from store,
    • Put the data into context if you only want a read‑only snapshot, or
    • Into state if you might update it and write it back.

This keeps runs lightweight while still giving you personalization.
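The two steps above can be sketched without any LangGraph imports; a plain dict stands in for the store, and the function names are illustrative, not library APIs:

```python
# In-memory stand-in for a cross-conversation store, keyed by user_id.
store: dict[str, dict] = {
    "user-123": {"customer_name": "David", "preferences": "long walks"},
}


def load_context_snapshot(user_id: str) -> dict:
    """Read the profile at the top of a run; the copy makes it a
    read-only snapshot suitable for passing as context."""
    return dict(store.get(user_id, {}))


def write_back_preferences(user_id: str, preferences: str) -> None:
    """If the run mutated preferences in state, persist them for
    future conversations."""
    store.setdefault(user_id, {})["preferences"] = preferences


snapshot = load_context_snapshot("user-123")
write_back_preferences("user-123", "long walks, Ravens games")
# snapshot still holds the old value; the store holds the new one
```

In production the dict would be LangGraph's store (or your own database), but the read-at-start / write-back-at-end shape is the same.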

How Studio “assistants” relate to context

LangGraph Studio / Cloud adds a UX layer: Assistants.

Conceptually, an assistant is:

  • A particular graph (your code),
  • plus default context/config values (e.g. content_type="blog_post", default system prompt, enabled tools),
  • plus environment (model, checkpointer, etc.), versioned and named.

The LangSmith/Cloud configuration docs and assistant editor articles describe this pattern: you define a context schema in code, and the assistant’s configuration UI lets you provide default values for that context. At runtime, those are applied by passing them to the graph as context.

So:

  • Your blog writer with content_type="blog_post" vs "tweet" is just different context values (and maybe different prompts/tools) on the same graph.
  • Studio assistants surface that as “assistant configuration”, but semantically you’re still just providing static runtime context + other config.

That’s why Studio feels like “context = assistant configuration knobs”: it is literally using the context mechanism (and sometimes config["configurable"]) under the hood.
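Conceptually, resolving an assistant's effective context is just a merge of its versioned defaults with whatever the caller passes per run. A toy sketch (my own names, not the Cloud implementation):

```python
# Toy model: each "assistant" is the same graph plus named,
# versioned default context values (illustrative only).
ASSISTANTS: dict[str, dict] = {
    "blog-writer-v2": {"content_type": "blog_post", "tone": "long-form"},
    "tweet-writer-v1": {"content_type": "tweet", "tone": "punchy"},
}


def resolve_context(assistant_id: str, per_run_context: dict) -> dict:
    """Per-run values override the assistant's stored defaults."""
    return {**ASSISTANTS[assistant_id], **per_run_context}


ctx = resolve_context("blog-writer-v2", {"customer_name": "David"})
```

This is why per-user data (customer_name) and assistant flavor (content_type) can coexist in one context schema: they differ only in who supplies the value.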


@pawel-twardziak Thank you for the quick response and detailed explanation! Yes, I have read those docs which has partially led to my confusion.

So in this example, how would you go about making context traceable (using the decorator or is there a config to enable that behavior by default)?

For additional insight:

Imagine trying to do a trace replay of a run where the runtime context used for that run matters. I believe LangGraph Studio only allows you to drop state into the input… and “context” can only be added via the settings/manage assistants button, right? I can’t imagine needing an assistant for every single user to replay a trace haha

Any suggestions on how to “properly” put “context” items in context but still have that trace replay ability?

# assume these don’t change in the graph (maybe not the best example variables)
# these are added as context by the api invoking the agent.
context = {
    "customer_name": "David",
    "customer_profile_summary": "...",
}
my_agent.invoke(initial_state, context=context)



Would you by chance have a link to this pattern in code/docs or a YT video? I’d like to get a deeper understanding of it.