Proposal: Add save_local and load_local to USearch VectorStore (Feature Parity with FAISS)

Hi team! 👋

Currently, the USearch vector store in langchain-community operates strictly in-memory. This means users lose their index on server restarts, leading to slow cold starts and expensive re-embedding API costs for web applications.

The most common local alternative is FAISS, but its load_local method relies on pickle serialization, which poses well-known security risks (requiring users to explicitly pass allow_dangerous_deserialization=True).
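To make the risk concrete: unpickling runs arbitrary callables chosen by whoever produced the file, via `__reduce__`, which is exactly what the `allow_dangerous_deserialization=True` flag warns about. A minimal stdlib-only demonstration (the `Evil` class is purely illustrative, using a harmless `list` call in place of a hostile payload):

```python
import pickle

class Evil:
    """On unpickling, __reduce__ lets the payload substitute an arbitrary call."""
    def __reduce__(self):
        # A hostile file could return (os.system, ("rm -rf ...",)) here;
        # we return a harmless list() call to show the mechanism.
        return (list, ("abc",))

payload = pickle.dumps(Evil())
result = pickle.loads(payload)  # executes list("abc"), not Evil.__init__
print(result)  # ['a', 'b', 'c'] — the unpickler ran code the file author chose
```

A JSON docstore avoids this class of attack entirely, since `json.loads` only ever produces plain data.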

My Proposal

I would like to add save_local and load_local methods to USearch to achieve feature parity, but with a secure-by-default architecture:

  1. Use USearch’s high-performance native .save() and .restore() for the binary index.
  2. Serialize the docstore and ids into a standard, readable .json file (completely bypassing pickle vulnerabilities).

I already have the code, type hints, and integration tests (test_usearch_save_load_local) written, formatted (make format), and passing locally.

Before I open a Pull Request, I wanted to ask: would the maintainers be open to this contribution? Thanks for your time and for the amazing work on LangChain!

Hello,
@xXeverton, I would also encourage you to open an issue in the langchain-community repo on GitHub (langchain-ai/langchain-community).