qa_chain.invoke({"question": data, "chat_history": chat_history}) error

I am having trouble executing

qa_chain.invoke({"question": data, "chat_history": chat_history})

I get the following error:

'SyncRPCFilterRequestBuilder' object has no attribute 'params'

My script originally worked with a FAISS vector store, but the error appeared after I switched to the Supabase vector store, so it is definitely related to Supabase.

Here is my script:

def create_vectorstore(logs, embedding_model):
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
    documents = []
    for log in logs:
        splits = text_splitter.split_text(log.get('full_log', ''))
        for chunk in splits:
            documents.append(Document(page_content=chunk, metadata={"id": 1}))
    print("after for loop")
    vs = SupabaseVectorStore.from_documents(
        documents,
        embedding_model,
        client=supabase,
        table_name="documents",
        query_name="match_documents",
        chunk_size=500,
    )
    return vs
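As an aside on the chunking step in the function above: the splitter just produces overlapping chunks. A rough, pure-Python sketch of fixed-size chunking with overlap (a simplification only; the real RecursiveCharacterTextSplitter additionally tries to break on separators like paragraphs and spaces):

```python
def chunk_text(text, chunk_size=500, chunk_overlap=50):
    """Naive fixed-size chunking with overlap.

    Each chunk starts (chunk_size - chunk_overlap) characters after the
    previous one, so consecutive chunks share chunk_overlap characters.
    """
    chunks = []
    step = chunk_size - chunk_overlap
    for start in range(0, max(len(text), 1), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# 1200 characters with chunk_size=500, overlap=50 -> slices [0:500], [450:950], [900:1200]
print(len(chunk_text("x" * 1200)))  # 3
```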

embedding_model = HuggingFaceEmbeddings(
    model_name="Qwen/Qwen3-Embedding-0.6B",
    model_kwargs={'device': 'cuda'},
)
vectorstore = create_vectorstore(logs, embedding_model)
llm = ChatOllama(model="llama3.1:8b")
context = initialize_assistant_context()
qa_chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
    return_source_documents=False,
)
print(f"Received question: {data}")
response = qa_chain.invoke({"question": data, "chat_history": chat_history})
answer = response.get("answer", "").replace("\\n", "\n").strip()
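One note on that last cleanup line: it targets answers containing a literal backslash-n sequence (two characters) rather than a real newline, which some models occasionally emit. A standalone check of what it does:

```python
# A string holding the two characters backslash + n, not a newline
raw = "First line\\nSecond line\\n"

# Replace each literal "\n" sequence with a real newline, then trim
cleaned = raw.replace("\\n", "\n").strip()
print(cleaned)
# First line
# Second line
```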

Supabase structure

-- Enable the pgvector extension to work with embedding vectors
create extension vector;

-- Create a table to store your documents
create table documents (
  id uuid primary key,
  content text, -- corresponds to Document.pageContent
  metadata jsonb, -- corresponds to Document.metadata
  embedding extensions.vector(1536) -- 1536 works for OpenAI embeddings, change if needed
);

-- Create a function to search for documents
create function match_documents (
  query_embedding extensions.vector(1536),
  match_count int default null,
  filter jsonb default '{}'
) returns table (
  id uuid,
  content text,
  metadata jsonb,
  similarity float
)
language plpgsql
as $$
#variable_conflict use_column
begin
  return query
  select
    id,
    content,
    metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where metadata @> filter
  order by documents.embedding <=> query_embedding
  limit match_count;
end;
$$;
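For anyone reading the similarity column above: pgvector's `<=>` operator is cosine distance, so `1 - distance` is cosine similarity, which is what match_documents returns. A quick pure-Python illustration with made-up vectors (not tied to the actual table data):

```python
import math

def cosine_distance(a, b):
    """Mimics pgvector's <=> operator: 1 - cosine similarity."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (norm_a * norm_b)

a = [1.0, 0.0]
b = [0.0, 1.0]
distance = cosine_distance(a, b)  # orthogonal vectors -> distance 1.0
similarity = 1 - distance         # the "similarity" column -> 0.0
print(distance, similarity)       # 1.0 0.0
```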

Hi @ei3

What's the content of requirements.txt / pyproject.toml in the project?

Unfortunately I do not have the code on me right now. But if it helps, all LangChain packages are v0.3.27 (the old LangChain, not v1).

It would be great if you could provide the exact versions of all the packages in your requirements.

aiohappyeyeballs 2.6.1
aiohttp 3.13.1
aiosignal 1.4.0
annotated-types 0.7.0
anyio 4.11.0
async-timeout 4.0.3
attrs 25.4.0
bcrypt 5.0.0
certifi 2025.10.5
cffi 2.0.0
charset-normalizer 3.4.4
click 8.3.0
colorama 0.4.6
cryptography 46.0.3
dataclasses-json 0.6.7
deprecation 2.1.0
exceptiongroup 1.3.0
faiss-cpu 1.12.0
fastapi 0.119.1
filelock 3.20.0
frozenlist 1.8.0
fsspec 2025.9.0
greenlet 3.2.4
h11 0.16.0
h2 4.3.0
hf-xet 1.1.10
hpack 4.1.0
httpcore 1.0.9
httptools 0.7.1
httpx 0.28.1
httpx-sse 0.4.3
huggingface-hub 0.35.3
hyperframe 6.1.0
idna 3.11
invoke 2.2.1
Jinja2 3.1.6
joblib 1.5.2
jsonpatch 1.33
jsonpointer 3.0.0
langchain 0.3.27
langchain-community 0.3.31
langchain-core 0.3.79
langchain-huggingface 0.3.1
langchain-ollama 0.3.10
langchain-text-splitters 0.3.11
langsmith 0.4.37
lockfile 0.12.2
MarkupSafe 3.0.3
marshmallow 3.26.1
mpmath 1.3.0
multidict 6.7.0
mypy_extensions 1.1.0
networkx 3.4.2
numpy 2.2.6
ollama 0.6.0
orjson 3.11.3
packaging 25.0
paramiko 4.0.0
pillow 12.0.0
pip 21.2.3
postgrest 2.22.2
propcache 0.4.1
pycparser 2.23
pydantic 2.12.3
pydantic_core 2.41.4
pydantic-settings 2.11.0
PyJWT 2.10.1
PyNaCl 1.6.0
python-daemon 3.1.2
python-dotenv 1.1.1
pytz 2025.2
PyYAML 6.0.3
realtime 2.22.2
regex 2025.10.23
requests 2.32.5
requests-toolbelt 1.0.0
safetensors 0.6.2
scikit-learn 1.7.2
scipy 1.15.3
sentence-transformers 5.1.2
setuptools 57.4.0
sniffio 1.3.1
SQLAlchemy 2.0.44
starlette 0.48.0
storage3 2.22.2
StrEnum 0.4.15
supabase 2.22.2
supabase-auth 2.22.2
supabase-functions 2.22.2
sympy 1.14.0
tenacity 9.1.2
threadpoolctl 3.6.0
tokenizers 0.22.1
torch 2.9.0+cu130
torchvision 0.24.0+cu130
tqdm 4.67.1
transformers 4.57.1
typing_extensions 4.15.0
typing-inspect 0.9.0
typing-inspection 0.4.2
urllib3 2.5.0
uvicorn 0.38.0
watchfiles 1.1.1
websockets 15.0.1
yarl 1.22.0
zstandard 0.25.0
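For future readers who need to produce a list like the one above, a short stdlib-only sketch (equivalent in spirit to `pip freeze`):

```python
from importlib import metadata

def installed_versions():
    """Return {package_name: version} for every distribution in the current environment."""
    return {dist.metadata["Name"]: dist.version
            for dist in metadata.distributions()}

# Print name/version pairs sorted case-insensitively, like the list above
for name, version in sorted(installed_versions().items(), key=lambda kv: kv[0].lower()):
    print(name, version)
```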

I am getting the same error after switching to SupabaseVectorStore.
Did you find a solution?