Most evaluation platforms capture parameters for the experiment, such as prompt version and model configuration. This allows the user to more easily track how changes in these parameters affect performance.
Running experiments in the LangSmith UI, I can’t see anything that indicates which prompts and model parameters were used for an experiment. Are these accessible somewhere, please?
Here is how to view the parameters used:
After running the eval, there is a link to LangSmith where you will be able to see the input, output, reference output, and the evaluators.
If you hover over a given evaluator entry you will see an arrow. Click on that arrow to open a side panel where you can see the trace; from there you can click the model run and see any other information, for example the prompt used, etc.
Thanks for taking the time, but unfortunately, the above doesn’t capture model parameters such as temperature, token limits, and an array of other parameters that can be tuned and experimented with. Also, the prompt version isn’t captured as far as I can tell.
So looking at an experiment, I can’t tell what was actually tested.
Other platforms such as confident-ai capture these, I was wondering if Langsmith has similar features I am missing perhaps?
I found this frustrating as well. Quite a few of our experiments compared different models, and on the main Experiments page there is no easy top-level way to view this.
I end up naming all my experiments according to the model/version used, which helps, but it seems like this should be available on the experiments viewer page.
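A workaround along those lines can be sketched in Python: build the experiment name from the model configuration before running the eval, so the tested parameters are at least visible in the Experiments list. The `experiment_prefix` and `metadata` parameters in the commented-out call are my assumption about the LangSmith `evaluate` SDK; check the docs before relying on them.

```python
def experiment_name(model: str, **params) -> str:
    """Encode the model name and tuned parameters into an experiment name.

    Parameters are sorted so the same config always yields the same name.
    """
    parts = [model] + [f"{k}={v}" for k, v in sorted(params.items())]
    return "_".join(parts)


config = {"temperature": 0.2, "max_tokens": 512}
name = experiment_name("gpt-4o-mini", **config)
print(name)  # gpt-4o-mini_max_tokens=512_temperature=0.2

# With the LangSmith SDK (assumed signature -- verify against the docs),
# the same info could go into both the experiment name and its metadata:
#
# from langsmith import evaluate
# evaluate(
#     target,
#     data="my-dataset",
#     evaluators=[...],
#     experiment_prefix=name,
#     metadata={"model": "gpt-4o-mini", **config},
# )
```

Putting the config into `metadata` as well (if supported) would make it filterable, rather than only readable from the name.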
It would be nice if you could set a custom name format for experiments, with the option to include the model name or parameters.
Add model name/parameters as columns to the Experiments table