Anthropic Support for AzureAIChatCompletionsModel

I have been unsuccessful in using Anthropic’s Claude Opus 4.6, deployed within my Azure AI Foundry project, with AzureAIChatCompletionsModel. I’ve tried many different combinations of credentials and endpoints, but to no avail.

Code snippet:

from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

chat_completion_llm = AzureAIChatCompletionsModel(
    model="<claude opus 4.6 deployment name>",
    cache=False,
    max_tokens=2048,
    temperature=0.7,
    endpoint="https://{my_resource}.services.ai.azure.com/anthropic/v1/messages",
    credential="<MY_PROJECT_API_KEY>",
)
response = chat_completion_llm.invoke(user_question)

Error:

File "/Users/jacobmontgomery/Documents/ADO Repos/eai-3541975-agent-graph/agent-graph-api/src/routes/chat.py", line 131, in test_new_llm_service
    response = chat_completion_llm.invoke(user_question)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jacobmontgomery/Documents/ADO Repos/eai-3541975-agent-graph/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 402, in invoke
    self.generate_prompt(
  File "/Users/jacobmontgomery/Documents/ADO Repos/eai-3541975-agent-graph/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1121, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jacobmontgomery/Documents/ADO Repos/eai-3541975-agent-graph/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 931, in generate
    self._generate_with_cache(
  File "/Users/jacobmontgomery/Documents/ADO Repos/eai-3541975-agent-graph/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1233, in _generate_with_cache
    result = self._generate(
             ^^^^^^^^^^^^^^^
  File "/Users/jacobmontgomery/Documents/ADO Repos/eai-3541975-agent-graph/.venv/lib/python3.12/site-packages/langchain_azure_ai/chat_models/inference.py", line 546, in _generate
    response = self._client.complete(
               ^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jacobmontgomery/Documents/ADO Repos/eai-3541975-agent-graph/.venv/lib/python3.12/site-packages/azure/ai/inference/_patch.py", line 737, in complete
    map_error(status_code=response.status_code, response=response, error_map=error_map)
  File "/Users/jacobmontgomery/Documents/ADO Repos/eai-3541975-agent-graph/.venv/lib/python3.12/site-packages/azure/core/exceptions.py", line 163, in map_error
    raise error
azure.core.exceptions.ResourceNotFoundError: (api_not_supported) Requested API is currently not supported
Code: api_not_supported
Message: Requested API is currently not supported

Does support not exist for Anthropic at this time with their Messages API? Any help would be appreciated.

Hi @jacobreesmontgomery,

Would this be helpful: How to use ChatAnthropic with azure_ad_token_provider? - #4 by pawel-twardziak?

Hi @jacobreesmontgomery ,

This isn’t a Claude support issue; it’s an endpoint mismatch.

You’re pointing to:

/anthropic/v1/messages

But AzureAIChatCompletionsModel doesn’t work with Anthropic’s native Messages API route. Azure exposes Claude through its Chat Completions API surface, not the /v1/messages path.

That’s why you’re getting:

api_not_supported

What to change

Just use the base endpoint:

endpoint="https://{my_resource}.services.ai.azure.com/"

And keep your deployment name as the model value.

No /anthropic/v1/messages.


So yes — Azure doesn’t currently let you call Claude using Anthropic’s native Messages API when using the Azure inference SDK. You have to go through Azure’s Chat Completions abstraction.
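For reference, here is roughly what the corrected call would look like. This is only a sketch: the resource name, deployment name, and key are placeholders from your snippet, and I’m assuming the same langchain_azure_ai package that appears in your traceback.

```python
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# Sketch only -- base Foundry endpoint, no /anthropic/v1/messages path.
# {my_resource}, the deployment name, and the key are placeholders.
chat_completion_llm = AzureAIChatCompletionsModel(
    model="<claude opus 4.6 deployment name>",  # Foundry deployment name as the model value
    endpoint="https://{my_resource}.services.ai.azure.com/",  # base endpoint only
    credential="<MY_PROJECT_API_KEY>",
    max_tokens=2048,
    temperature=0.7,
)

response = chat_completion_llm.invoke("Hello")
```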

I had tried that as well, but it didn’t work either. No matter what api_version I set, I get this error:

  File "/Users/jacobmontgomery/Documents/ADO Repos/eai-3541975-agent-graph/.venv/lib/python3.12/site-packages/langchain_azure_ai/chat_models/inference.py", line 546, in _generate
    response = self._client.complete(
               ^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jacobmontgomery/Documents/ADO Repos/eai-3541975-agent-graph/.venv/lib/python3.12/site-packages/azure/ai/inference/_patch.py", line 738, in complete
    raise HttpResponseError(response=response)
azure.core.exceptions.HttpResponseError: (BadRequest) API version not supported
Code: BadRequest
Message: API version not supported

I get this error whether I provide an api_version parameter or not. I’ve tried a lot of API versions, but none work. The above error came from trying 2025-03-01-preview, but when I don’t provide one, I also see it defaulting to 2024-05-01-preview and failing on that.

@jacobreesmontgomery have you checked that post?

Yes. Will AzureAIChatCompletionsModel not suffice for working with Claude models in a Foundry project? I would prefer to use existing integrations rather than introduce custom, dirtier logic to support the Claude models.

The good news is that the ChatAnthropic wrapper you provided did work with my Foundry Claude model. However, I’d rather not have to keep this custom logic local to my codebase. Is there any reason it doesn’t reside within the LangChain library itself? It seems worthwhile, given the evident lack of flexibility of AzureAIChatCompletionsModel.
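For anyone who lands on this thread later, the working approach boils down to something like the sketch below. This is an assumption-laden outline, not the exact wrapper from the linked post: I’m pointing langchain_anthropic’s ChatAnthropic at the Foundry resource’s /anthropic route with an API key, whereas the linked post shows an azure_ad_token_provider variant for Entra ID auth. Resource name, deployment name, and key are placeholders.

```python
from langchain_anthropic import ChatAnthropic

# Sketch only -- the /anthropic base path is my assumption (the Anthropic
# client appends /v1/messages to base_url itself). {my_resource}, the
# deployment name, and the key are placeholders.
claude_llm = ChatAnthropic(
    model="<claude opus 4.6 deployment name>",
    base_url="https://{my_resource}.services.ai.azure.com/anthropic",
    api_key="<MY_PROJECT_API_KEY>",
)

response = claude_llm.invoke("Hello")
```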