fix: Separate out config for AZURE_OPENAI_MODEL_NAME and AZURE_OPENAI_DEPLOYMENT_NAME - Fixes #35 #44
Purpose
This PR addresses an issue I found while troubleshooting #35.
Originally, it was assumed that the Azure OpenAI model name and deployment name were the same, because this is how the accelerator deploys its resources. However, when doing a local deployment, it may be desirable to use an existing Azure OpenAI resource and model deployment due to resource and quota constraints.
In my case, my model name (`gpt-4o`) was not the same as my model deployment name (`gpt4o`, no dash). This was resulting in 400 errors when trying to connect to the model.

This PR introduces a new optional configuration value, `AZURE_OPENAI_MODEL_NAME`, to `config.py`. It will default to using `AZURE_OPENAI_DEPLOYMENT_NAME` if not present.
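The fallback behavior in `config.py` might be sketched as follows; the exact variable names and structure in the accelerator's real `config.py` may differ, so treat this as an illustration rather than the actual diff:

```python
import os

# Existing setting (read from the environment, as the accelerator already does).
AZURE_OPENAI_DEPLOYMENT_NAME = os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME", "")

# New optional setting: fall back to the deployment name when the model name
# is not set, preserving the previous behavior where both values were equal.
AZURE_OPENAI_MODEL_NAME = os.getenv(
    "AZURE_OPENAI_MODEL_NAME", AZURE_OPENAI_DEPLOYMENT_NAME
)
```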
Does this introduce a breaking change?

How to Test

- Test the code against an Azure OpenAI resource where the model deployment name (e.g., `gpt-4o-testing`) and model name (e.g., `gpt-4o`) do not match.
- Update `.env` to include `AZURE_OPENAI_MODEL_NAME` (see the example after this list).
- Run the `app.py` server.
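The relevant `.env` entries for this scenario might look like the following, reusing the example names from above; all other settings stay as they are:

```
AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4o-testing
AZURE_OPENAI_MODEL_NAME=gpt-4o
```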
What to Check

Verify that the following are valid:

- Requests to the Azure OpenAI service go to the model deployment `gpt-4o-testing` rather than `gpt-4o`, and complete without 400 errors.
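As a manual sanity check of the deployment/model distinction, a standalone request against the same resource could be made with the `openai` Python SDK, assuming the common `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_API_KEY` environment variables are set (the accelerator itself may construct its client differently):

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-15-preview",
)

# Azure routes requests by *deployment* name, so this must be the deployment
# name "gpt-4o-testing", not the underlying model name "gpt-4o".
response = client.chat.completions.create(
    model="gpt-4o-testing",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```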