Conversation
```python
# `deployment_id` for Azure OpenAI, so we handle it directly.
import json

deployment_id = CHATLAS_CHAT_PROVIDER_MODEL.split("/", 1)[1]
```
Just for my understanding: is someone required to enter the model in this way?
`CHATLAS_CHAT_PROVIDER_MODEL: azure-openai/{deployment_id}` (e.g., `azure-openai/gpt-4.1-mini`)
It looks like we only want the latter half, so could the customer just pass in `gpt-4.1-mini`?
Hmm, great point. I was using the existing flow and honestly had tunnel vision for that pattern. That said, in the current implementation we need the prefix to determine whether it's an Azure OpenAI model:

`IS_AZURE_OPENAI = (CHATLAS_CHAT_PROVIDER_MODEL or "").startswith("azure-openai/")`

I think it makes sense to leave it as is so it matches the current pattern, and we need a way to call out that it's Azure OpenAI anyway. What do you think @mconflitti-pbc? I also need to clean up some comments from Claude; I disagree with the note and want to make it clearer.
Is `CHATLAS_CHAT_ARGS` deprecated now? I think that would be the mechanism for `ChatAuto` to assign the proper args when starting up; you'd just need the Azure OpenAI ones specifically set. Based on what I'm seeing here: https://posit-dev.github.io/chatlas/reference/ChatAuto.html, it should support what you're trying to do.
As I tried to update the docs to demonstrate how to connect to Azure OpenAI, I ran into multiple errors. Digging into it more, I believe the problem was the distinction between the model string being passed through to `ChatAuto` and Azure OpenAI expecting a deployment ID. I added some logic to check whether the app is trying to use Azure OpenAI and, if so, call a different function. This is similar to how we call AWS Bedrock through `ChatBedrockAnthropic`.
This approach allowed me to run it successfully on a Connect server.