Can someone explain to me how to use the vector database through the LangGraph Store, which is already integrated into LangGraph Cloud, without using third-party vector databases such as Qdrant or Pinecone?
I'm interested in what capabilities are available. We've already uploaded some data into the LangGraph Store and we're very satisfied with it, but I'd like to understand what options exist for vector similarity search through the Store. Is it the same as pgvector in PostgreSQL, or something else?
Here is a code example of how we retrieve documents:
store = get_store()
# Normalize the form name so it is safe to use inside a namespace label.
if "." in form_name:
    form_name = form_name.replace(".", "_")
doc_name_store = f"rag_docs_{form_name}"
# Semantic search within the (user_id, doc_name_store) namespace.
items = await store.asearch((user_id, doc_name_store), query=query, limit=2)
Specifically, what I’m interested in:
Does LangGraph Store internally use PostgreSQL with pgvector for similarity search?
If yes, can we rely on it as a persistent memory store — for example, for long-term knowledge retention or contextual memory across sessions in RAG applications?
Is there any way to interact with it more transparently as a vector store:
Manually insert and manage raw vectors?
Update or delete embeddings?
Create and manage namespaces or collections?
Lastly, we are currently on the Startup Plan in LangGraph Cloud — what are the current capacity limits for vector storage (e.g., max number of vectors, storage size, or namespace limits)?
Does LangGraph Store internally use PostgreSQL with pgvector for similarity search?
Yes.
If yes, can we rely on it as a persistent memory store — for example, for long-term knowledge retention or contextual memory across sessions in RAG applications?
Yes, you can rely on it.
Is there any way to interact with it more transparently as a vector store:
Manually insert and manage raw vectors?
Update or delete embeddings?
Create and manage namespaces or collections?
Manually insert and manage raw vectors? ← Not directly
Update or delete embeddings? ← Not directly
Create and manage namespaces or collections? ← Yes
Generally speaking, vector embeddings are generated/deleted based on the Store configuration and Store API interactions.
Lastly, we are currently on the Startup Plan in LangGraph Cloud — what are the current capacity limits for vector storage (e.g., max number of vectors, storage size, or namespace limits)?
There are no capacity limits with respect to the Startup Plan. An individual LangGraph Server deployment’s disk capacity is configured based on the Deployment Type.
Development type deployments have a 10 GB disk size.
Production type deployments have an auto-increasing disk size.
So we can use it instead of Pinecone, Milvus, or Qdrant? I suppose there aren't any major differences, or are there?
Yes. However, I’m not suggesting that LangGraph Platform’s Store API/vector search functionality is a drop-in replacement for Pinecone, Milvus, or Qdrant. Try it out and see if it’s sufficient for your requirements. We’re always open to hearing direct feedback from users.