I’ve built an agent using create_react_agent with LangGraph and the Gemini 2.5 Flash model. While it works, sometimes the agent doesn’t fully follow the rules I’ve set in the system prompt.
I was wondering if there's a way to teach the agent additional knowledge, for example by adding embeddings alongside the model, or even by fine-tuning the model.
I saw that in langchain-google-genai there are ways to add embeddings and connect documents. What’s the typical workflow for this? Can I attach a specific document (like a “guidance book” for coding or domain rules) so the agent can reliably use it as part of its reasoning?
@eyurtsev @xuro-langchain @AbdulBasit Hi, can you guys help me with this? I had asked earlier but didn’t get any response. Please let me know if I need to add more details or clarify anything. Thanks!
Hi @Najiya, we don’t have instructions on fine-tuning models. I’d consult the model provider pages for this purpose.
If you’re trying to solve this with context engineering (rather than fine-tuning the model), you could explore adding few-shot examples to the system prompt. If you build a larger dataset of examples, you could even retrieve the most relevant ones for the given task using embeddings.
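To make that concrete, here is a minimal sketch of embedding-based few-shot selection. The `embed` function below is a toy bag-of-words stand-in so the sketch runs offline; in practice you would replace it with a real embedding model (e.g. `GoogleGenerativeAIEmbeddings` from `langchain-google-genai`, or LangChain's `SemanticSimilarityExampleSelector` over a vector store). The example tasks and outputs are made up for illustration.

```python
from collections import Counter
from math import sqrt


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding" so this sketch runs without API keys.
    # Swap in a real embedding model (e.g. GoogleGenerativeAIEmbeddings).
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# A small dataset of few-shot examples (hypothetical domain rules).
examples = [
    {"input": "rename a variable across the repo",
     "output": "use the refactor tool, then run the test suite"},
    {"input": "add a new API endpoint",
     "output": "follow the routing conventions in the guidance book"},
    {"input": "fix a flaky unit test",
     "output": "re-run the test before changing any code"},
]


def select_examples(query: str, k: int = 2) -> list[dict]:
    # Rank stored examples by similarity to the incoming task.
    ranked = sorted(
        examples,
        key=lambda ex: cosine(embed(query), embed(ex["input"])),
        reverse=True,
    )
    return ranked[:k]


# The selected examples would then be formatted into the system prompt
# before invoking the agent.
few_shot = select_examples("add an endpoint to the API")
prompt_suffix = "\n".join(
    f"Task: {ex['input']}\nApproach: {ex['output']}" for ex in few_shot
)
```

The same pattern works for a "guidance book": chunk the document, embed the chunks, and retrieve the most relevant chunks into the prompt per task instead of (or alongside) the examples.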
We don’t have guidelines for this since we haven’t benchmarked few-shot examples in the context of agents (i.e., where the agent has to perform a sequence of tasks rather than just a single tool call).
I would suggest that you first set up some evaluation criteria and benchmarks, and then try a few different approaches for optimization.
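One way to start on the evaluation side is a small harness that scores prompt variants against a fixed set of tasks. This is only a sketch: `run_agent` is a stub standing in for an actual `create_react_agent` invocation, and the test cases and checks are invented for illustration.

```python
# Minimal evaluation-harness sketch for comparing prompt variants.


def run_agent(system_prompt: str, task: str) -> str:
    # Stub: a real version would invoke the LangGraph agent built with
    # create_react_agent and return its final answer. Here we just echo
    # the task so the sketch is runnable.
    return f"answer for: {task}"


# Hypothetical benchmark: each case pairs a task with a pass/fail check
# on the agent's output (in practice, checks for rule compliance).
test_cases = [
    {"task": "add a new endpoint", "check": lambda out: "endpoint" in out},
    {"task": "rename a variable", "check": lambda out: "variable" in out},
]


def score(system_prompt: str) -> float:
    # Fraction of test cases whose check passes for this prompt.
    passed = sum(
        1 for case in test_cases
        if case["check"](run_agent(system_prompt, case["task"]))
    )
    return passed / len(test_cases)


# Compare a baseline prompt against a few-shot-augmented one.
variants = {
    "baseline": "You are a coding agent.",
    "few_shot": "You are a coding agent. Examples: ...",
}
results = {name: score(prompt) for name, prompt in variants.items()}
```

With scores per variant in hand, you can iterate on the system prompt, few-shot selection, or retrieval strategy and see which actually improves rule-following rather than guessing.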