I want to test completion choices, which are not supported by the common methods like invoke but only by the generate method. The latter is not fully "compatible" with the LangChain framework, e.g. with with_structured_output.
To stick to the general design pattern, I'm investigating the code below. The only remaining hurdle is: how do I pass the callbacks, e.g. for tracing?
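For context, a minimal sketch of what generate exposes that invoke does not (assuming an OpenAI chat model; the model name and n=2 are illustrative):

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model='gpt-4o-mini', n=2)  # n=2: two completion choices per prompt
result = llm.generate([[HumanMessage('Who is the best football player in the world?')]])
# result.generations holds one list per prompt, with one ChatGeneration per choice
for choice in result.generations[0]:
    print(choice.text)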
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda

# llm, parser and opik_tracer are defined elsewhere
prompt = ChatPromptTemplate.from_template(
    """
{query}
FORMAT INSTRUCTIONS:
{format_instructions}
""").partial(format_instructions=parser.get_format_instructions())

# how to forward the parent callbacks here?
generate_llm = RunnableLambda(lambda x: llm.generate([x.messages], callbacks=?????))
generate_parser = RunnableLambda(lambda x: [parser.parse(c.text) for cl in x.generations for c in cl])

chain = (prompt | generate_llm | generate_parser)
result = chain.invoke('Who is the best football player in the world?', config={'callbacks': [opik_tracer]})
You can access the parent callbacks in your RunnableLambda through the automatically passed config: RunnableLambda inspects your function's signature and injects the run's RunnableConfig whenever the function declares a config parameter:
from langchain_core.runnables import RunnableConfig, RunnableLambda

def generate_with_callbacks(x, config: RunnableConfig):
    # the injected config's 'callbacks' entry carries the parent callbacks
    return llm.generate([x.messages], callbacks=config.get('callbacks'))

def parse_with_callbacks(x, config: RunnableConfig):
    return [parser.parse(c.text) for cl in x.generations for c in cl]

generate_llm = RunnableLambda(generate_with_callbacks)
generate_parser = RunnableLambda(parse_with_callbacks)
Alternatively, create a custom Runnable subclass for cleaner config handling:
from langchain_core.runnables import Runnable

class GenerateRunnable(Runnable):
    def invoke(self, input, config=None, **kwargs):
        # pull the parent callbacks out of the run's config
        callbacks = config.get('callbacks', []) if config else []
        return llm.generate([input.messages], callbacks=callbacks)

generate_llm = GenerateRunnable()
Both approaches properly propagate the parent config and maintain composability within the LangChain framework.
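Either variant then drops into the original chain unchanged (reusing prompt, parser and opik_tracer from the question):

chain = prompt | generate_llm | generate_parser
result = chain.invoke('Who is the best football player in the world?', config={'callbacks': [opik_tracer]})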
Thanks @AbdulBasit. Somehow I don't get notified by this platform.
That's a way indeed. However, I just figured out a working, LangChain-"idiomatic" way, although it looks very similar to your first option. The same pattern is also used for e.g. nodes in LangGraph (see the sketch after the code below).
from langchain_core.runnables import RunnableConfig, RunnableLambda

def llm_process(x, config: RunnableConfig):
    # the injected config carries the parent callbacks (here: the Opik tracer)
    return llm.generate([x.messages], callbacks=config['callbacks'])

generate_llm = RunnableLambda(llm_process).with_config({'run_name': 'llm generate'})
generate_parser = RunnableLambda(lambda x: [parser.parse(c.text) for cl in x.generations for c in cl]).with_config({'run_name': 'output parser'})
chain = (prompt | generate_llm | generate_parser).with_config({'run_name': 'llm generate chain'})
r = chain.invoke('Who is the best football player in the world?', config={'callbacks': [opik_tracer]})
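And a minimal sketch of the same config-injection pattern in a LangGraph node (the State schema and graph wiring are illustrative; llm, prompt and parser are the objects from the snippets above):

from typing import TypedDict
from langchain_core.runnables import RunnableConfig
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    query: str
    answers: list

def generate_node(state: State, config: RunnableConfig):
    # LangGraph injects the run's config here as well, so the parent
    # callbacks propagate into llm.generate just like in the chain above
    messages = prompt.invoke({'query': state['query']}).messages
    result = llm.generate([messages], callbacks=config.get('callbacks'))
    return {'answers': [parser.parse(c.text) for cl in result.generations for c in cl]}

graph = StateGraph(State)
graph.add_node('generate', generate_node)
graph.add_edge(START, 'generate')
graph.add_edge('generate', END)
app = graph.compile()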