Hi LangChain community, I’m a self-professed noob and I just got started with LangMem and have been reading the docs.
However, I don’t understand how `AsyncPostgresStore` actually works. I understand that I need to create an `index_config` and specify an embedding model, in this case `openai:text-embedding-3-small`. I’m assuming that by specifying an embedding model, some sort of vector table gets created in my Postgres DB.
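For concreteness, my setup looks roughly like this (the connection string and the `...` body are placeholders, not my real values):

```python
from langgraph.store.postgres import AsyncPostgresStore

# Placeholder connection string, not my real credentials
DB_URI = "postgresql://user:pass@localhost:5432/mydb"

async def main():
    async with AsyncPostgresStore.from_conn_string(
        DB_URI,
        index={
            "dims": 1536,  # output size of text-embedding-3-small
            "embed": "openai:text-embedding-3-small",
        },
    ) as store:
        # I do call setup() here; I'm guessing its migrations are what
        # created the tables I see, since I never ran CREATE TABLE myself
        await store.setup()
        ...
```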
I have three sets of questions about implementing long-term memory:
- Is specifying an embedding model necessary for long-term memory to work? Is it recommended? What happens if I don’t use an embedding model?
- I see two tables in my DB: `store` and `store_vectors`. If I don’t specify an embedding model, will only `store` be used? How do both tables get used by `AsyncPostgresStore`?
- When `.aput` or `.asearch` is called, how does `memory_searcher` or `memory_manager` figure out whether to save to `store` or `store_vectors`? (A minimal sketch of these calls follows the list.) Throughout my setup I never explicitly ran `CREATE TABLE` for `store` or `store_vectors`, and I haven’t seen any documentation on setting this up so far, so I feel very uneasy with this “magic”.
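For reference, here’s a minimal sketch of the calls I mean (the namespace, key, and content are made-up examples):

```python
# Writing a memory: I expect this to hit `store`, and presumably also
# `store_vectors` when an embedding model is configured?
await store.aput(
    ("memories", "user-123"),    # namespace (made-up)
    "note-1",                    # key
    {"text": "User likes jazz"}  # value
)

# Semantic search: I assume this is where `store_vectors` comes in
results = await store.asearch(
    ("memories", "user-123"),
    query="What music does the user like?",
    limit=5,
)
```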
Any guidance would be really appreciated!