How to use custom LLM API in Playground

I have set up a custom LLM API, but the Playground still prompts me to enter an OpenAI key.

Hello @kun1s2
Yes, you can use a custom LLM in the LangSmith Playground by adding a workspace model configuration that points to your provider (or an OpenAI/Anthropic-compatible endpoint).

You do this by adding a workspace secret for the provider API key, then creating a Model Configuration in Settings and making it available to the Playground. If your model’s API is wire-compatible with OpenAI or Anthropic, you can set the provider to OpenAI/Anthropic and point `base_url` at your endpoint; otherwise, use any supported provider integration. The Playground uses your workspace model configurations to populate the model dropdown and to run calls.
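For context, “wire-compatible with OpenAI” means your endpoint accepts the same chat-completions request shape and returns the same response shape that OpenAI’s API uses. A minimal illustrative sketch of that contract (the model identifier and contents here are placeholders, not values from your setup):

```python
# Request body an OpenAI-compatible endpoint is expected to accept
# at POST {base_url}/chat/completions.
request_body = {
    "model": "my-llm-v1",  # placeholder model identifier
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
}

# Response shape the endpoint is expected to return.
response_body = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hi there!"},
            "finish_reason": "stop",
        }
    ],
}

# Callers (including the Playground) read the assistant's reply
# from the first choice.
reply = response_body["choices"][0]["message"]["content"]
```

If your server honors this request/response shape, pointing an OpenAI-provider configuration at it should work without any code changes.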

Quick steps (UI):
1. In LangSmith: Settings → Secrets → Add secret
   - Key: e.g. `OPENAI_API_KEY` (or the env var name your provider expects)
   - Value: your API key

2. Settings → Model Configurations → + Create
   - Provider: select the provider (e.g., OpenAI / Anthropic / other)
   - Model: enter the model identifier (e.g., `my-llm-v1` or provider/model)
   - API Key Name: the secret name you added (`OPENAI_API_KEY`)
   - Provider config / Extra Parameters: set `base_url` or other params if needed
   - Save and enable the configuration for use in the Playground

3. Open Playground → Prompt Settings → choose the saved configuration from the Model dropdown
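Before relying on the saved configuration, it can help to smoke-test the endpoint outside of LangSmith. A stdlib-only sketch of the request the Playground will effectively make on your behalf (the base URL, model identifier, and env var name are placeholder assumptions; substitute your own values):

```python
import json
import os
import urllib.request

# Placeholder values -- replace with your endpoint and model.
BASE_URL = "http://localhost:8000/v1"
API_KEY = os.environ.get("OPENAI_API_KEY", "dummy-key")

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions POST against BASE_URL."""
    body = json.dumps({
        "model": "my-llm-v1",  # placeholder model identifier
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("ping")
# Uncomment to actually send the request once your server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If this call succeeds from your machine but the Playground still errors, the problem is usually the secret name or `base_url` in the model configuration rather than the endpoint itself.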


Relevant docs: