I want the user to be able to invalidate the cached response, if any, for a particular LLM call.
I’m using LangChain + LangGraph with RedisCache.
import redis
from langchain_aws import ChatBedrockConverse
from langchain_community.cache import RedisCache
from langchain_core.globals import set_llm_cache

self._models[model_name] = ChatBedrockConverse(
    model=AVAILABLE_MODELS[model_name]["model"],
    provider="anthropic",
)
redis_client = redis.Redis.from_url("redis://localhost:6379/0")
set_llm_cache(RedisCache(redis_client))

# Inside a LangGraph node
response = self._models[model_name].invoke(messages)
return {"response": response}
I’m omitting a lot of code for brevity. The caching itself works like a charm; that part is not the issue. But I want to be able to force a new LLM call even when a cached response already exists.
I know I can just disable the cache before the invoke, but I also want the new response to replace the old cached entry.
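To illustrate what I mean by disabling the cache, this is roughly what I do around the invoke (just a sketch; I’m assuming get_llm_cache is available in langchain_core.globals alongside set_llm_cache):

from langchain_core.globals import get_llm_cache, set_llm_cache

# Temporarily disable the global cache to force a fresh LLM call,
# then restore it. The stale entry in Redis is left untouched,
# which is exactly the part I want to change.
previous_cache = get_llm_cache()
set_llm_cache(None)
try:
    response = self._models[model_name].invoke(messages)
finally:
    set_llm_cache(previous_cache)

This works, but it toggles the cache globally and never touches the entry that is already stored in Redis.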
I tried playing around with:
BaseCache.update(prompt, llm_string, return_val)
But I’m having a hard time manually constructing prompt and llm_string so that they update the correct Redis key.
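This is roughly what I attempted. Note that dumps(messages), the private _get_llm_string() method, and wrapping the response in ChatGeneration are all guesses on my part about how LangChain builds the cache key and value, so this may well be the wrong approach:

from langchain_core.globals import get_llm_cache
from langchain_core.load import dumps
from langchain_core.outputs import ChatGeneration

model = self._models[model_name]

# My guess at reproducing the cache key: serialize the messages the way I
# think the cache lookup does, and ask the model for its llm_string
# (a private API, so this may not match the real key).
prompt = dumps(messages)
llm_string = model._get_llm_string()

# Fresh call (with the cache disabled as above), then overwrite the old entry.
response = model.invoke(messages)
get_llm_cache().update(prompt, llm_string, [ChatGeneration(message=response)])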
Is there an easy way of doing what I’m trying to do?
Any help is much appreciated.