New init_chat_model format using LM Studio?

I’m starting a new project and want to make sure I’m using the latest and greatest approved way of doing things. I see there’s a new way to initialize models, init_chat_model, instead of constructing a new ChatOpenAI. I am using LM Studio to host the llama.cpp/OpenAI-compliant APIs. In the previous methodology, I could supply a base_url and api_key to the constructor.

How can I do this in the new format? Do I lose anything using the older format? Thanks!

Yes, you can use init_chat_model() with LM Studio, since it exposes an OpenAI-compatible API (usually at http://localhost:1234/v1). To make this work, pass model_provider="openai" so LangChain uses the correct internal logic.

Here’s how to set it up:

from langchain.chat_models import init_chat_model

model = init_chat_model(
    model="your-model-id",  # e.g. "gpt-3.5-turbo" or "lmstudio-llama2"
    model_provider="openai",  # because LM Studio mimics OpenAI's API
    base_url="http://localhost:1234/v1",
    api_key="not-needed"  # LM Studio accepts any string here
)

The init_chat_model() function is just a higher-level wrapper introduced to standardize model initialization across providers (OpenAI, Anthropic, HuggingFace, etc.). You’re not losing anything by sticking with the classic ChatOpenAI constructor; in fact, for LM Studio and other self-hosted setups, using ChatOpenAI directly may give you more explicit control and transparency.
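For comparison, the direct-constructor version of the same setup would look roughly like this (same placeholder model ID and dummy key as above):

from langchain_openai import ChatOpenAI

# Classic constructor -- base_url and api_key go straight in, just like before
model = ChatOpenAI(
    model="your-model-id",
    base_url="http://localhost:1234/v1",
    api_key="not-needed",  # LM Studio accepts any string here
)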

So both are valid; use what feels cleanest for your setup.

Thanks so much, that’s super helpful. I really appreciate it.

Just to make sure I understand: just about anywhere that’s OpenAI-based, I can supply my own base_url and api_key? If I remember correctly, using OpenAI embeddings with those parameters didn’t work. I need to double-check.

@AbdulBasit - Okay, I found the spot where I think the OpenAI compatibility may not be working: OpenAIEmbeddings. In the code below, if I switch to the Ollama variant, everything works fine. The OpenAI variant gives me a bad request error from LM Studio.

from uuid import uuid4

from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

# embeddings = OllamaEmbeddings(model="granite-embedding-30m-english-Q8_0:latest")
embeddings = OpenAIEmbeddings(
    model="text-embedding-granite-embedding-30m-english",
    base_url="http://127.0.0.1:1234/v1",
    api_key="lms-key",
)
vector_store = Chroma(
    collection_name="gws",
    embedding_function=embeddings,
    persist_directory="./chroma_langchain_db_test",  # Where to save data locally; remove if not necessary
)

uuids = [str(uuid4()) for _ in range(len(docs))]
vector_store.add_documents(documents=docs, ids=uuids)

My documents are valid:

from langchain_text_splitters import RecursiveJsonSplitter

splitter = RecursiveJsonSplitter(
    max_chunk_size=5000,
)
docs = splitter.create_documents(
    texts=all_docs,
    metadatas=[
        {
            "source": key["application"],
            "event_type": key["event_type"],
            "application": key["application"],
        }
        for key in all_docs
    ],
)

Here is my log from LM Studio’s dev console. I can see the model is loaded via JIT, so the request is coming in.

2025-07-13 16:48:52 [INFO] [LM STUDIO SERVER] Success! HTTP server listening on port 1234
2025-07-13 16:48:52 [INFO] [LM STUDIO SERVER] Supported endpoints:
2025-07-13 16:48:52 [INFO] [LM STUDIO SERVER] -> GET  http://localhost:1234/v1/models
2025-07-13 16:48:52 [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/chat/completions
2025-07-13 16:48:52 [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/completions
2025-07-13 16:48:52 [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/embeddings
2025-07-13 16:48:52 [INFO] [LM STUDIO SERVER] Logs are saved into /Users/akoblentz/.cache/lm-studio/server-logs
2025-07-13 16:48:52 [INFO] Server started.
2025-07-13 16:48:52 [INFO] Just-in-time model loading active.
2025-07-13 16:48:53 [INFO] [Plugin(lmstudio/js-code-sandbox)] stdout: [Tools Prvdr.] Register with LM Studio
2025-07-13 16:48:53 [INFO] [Plugin(lmstudio/rag-v1)] stdout: [PromptPreprocessor] Register with LM Studio
2025-07-14 09:08:02 [INFO] [JIT] Requested model (text-embedding-granite-embedding-30m-english) is not loaded. Loading "lmstudio-community/granite-embedding-30m-english-GGUF/granite-embedding-30m-english-Q8_0.gguf" now...
2025-07-14 09:08:03 [ERROR] 'input' field must be a string or an array of strings

I tried different models too - I still get the same “input” field error.
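For anyone else debugging this, one sanity check (my own sketch, not from the thread) is to POST a plain string straight to the /v1/embeddings endpoint listed in the server log above. If that succeeds, the 400 is coming from the request body the client library builds, not from the model:

import requests

# Send the embeddings endpoint exactly what the error message demands:
# a plain string in the "input" field.
resp = requests.post(
    "http://127.0.0.1:1234/v1/embeddings",
    json={
        "model": "text-embedding-granite-embedding-30m-english",
        "input": "hello world",
    },
    timeout=30,
)
print(resp.status_code)
print(resp.json())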

I found the fix on some random page from Google: adding check_embedding_ctx_length=False to the embedding constructor resolves the error. Leaving this as a note for future readers. :slight_smile:
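For context (my understanding of why this works): with check_embedding_ctx_length left at its default of True, OpenAIEmbeddings pre-tokenizes the text with tiktoken and sends arrays of token IDs, which OpenAI’s API accepts but LM Studio rejects, hence the “‘input’ field must be a string” error. Setting it to False sends the raw strings instead. The working constructor would look like this:

from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(
    model="text-embedding-granite-embedding-30m-english",
    base_url="http://127.0.0.1:1234/v1",
    api_key="lms-key",
    # Skip tiktoken pre-tokenization so raw strings are sent,
    # which LM Studio's /v1/embeddings endpoint expects.
    check_embedding_ctx_length=False,
)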