
Commit a6e886a

Merge pull request #1280 from santiagxf/santiagxf-patch-1
Update llama-index.md
2 parents 0f17eac + f6f65be commit a6e886a


articles/ai-studio/how-to/develop/llama-index.md

Lines changed: 19 additions & 6 deletions
@@ -83,7 +83,7 @@ llm = AzureAICompletionsModel(
 ```
 
 > [!TIP]
-> If your model is an OpenAI model deployed to Azure OpenAI service or AI services resource, configure the client as indicated at [Azure OpenAI models and Azure AI model inference service](#azure-openai-models-and-azure-ai-model-infernece-service).
+> If your model deployment is hosted in Azure OpenAI service or Azure AI Services resource, configure the client as indicated at [Azure OpenAI models and Azure AI model inference service](#azure-openai-models-and-azure-ai-model-inference-service).
 
 If your endpoint is serving more than one model, like with the [Azure AI model inference service](../../ai-services/model-inference.md) or [GitHub Models](https://github.com/marketplace/models), you have to indicate `model_name` parameter:
 
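(For reference beyond this hunk: a minimal sketch of passing `model_name` against such a multi-model endpoint, reusing the `AZURE_INFERENCE_ENDPOINT` and `AZURE_INFERENCE_CREDENTIAL` environment variables used elsewhere in the article; the `mistral-large-2407` deployment name is only illustrative.)

```python
import os

from llama_index.llms.azure_inference import AzureAICompletionsModel

# Endpoint that serves more than one model (for example, the Azure AI model
# inference service or GitHub Models); model_name selects the deployment
# that handles the request.
llm = AzureAICompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model_name="mistral-large-2407",  # illustrative model name
)
```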
@@ -128,23 +128,36 @@ llm = AzureAICompletionsModel(
 )
 ```
 
-### Azure OpenAI models and Azure AI model infernece service
+### Azure OpenAI models and Azure AI model inference service
 
-If you are using Azure OpenAI models or [Azure AI model inference service](../../ai-services/model-inference.md), ensure you have at least version `0.2.4` of the LlamaIndex integration. Use `api_version` parameter in case you need to select a specific `api_version`. For the [Azure AI model inference service](../../ai-services/model-inference.md), you need to pass `model_name` parameter:
+If you are using Azure OpenAI service or [Azure AI model inference service](../../ai-services/model-inference.md), ensure you have at least version `0.2.4` of the LlamaIndex integration. Use `api_version` parameter in case you need to select a specific `api_version`.
+
+For the [Azure AI model inference service](../../ai-services/model-inference.md), you need to pass `model_name` parameter:
 
 ```python
 from llama_index.llms.azure_inference import AzureAICompletionsModel
 
 llm = AzureAICompletionsModel(
-    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
+    endpoint="https://<resource>.services.ai.azure.com/models",
+    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
+    model_name="mistral-large-2407",
+)
+```
+
+For Azure OpenAI service:
+
+```python
+from llama_index.llms.azure_inference import AzureAICompletionsModel
+
+llm = AzureAICompletionsModel(
+    endpoint="https://<resource>.openai.azure.com/openai/deployments/<deployment-name>",
     credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model_name="gpt-4o",
     api_version="2024-05-01-preview",
 )
 ```
 
 > [!TIP]
-> Using a wrong `api_version` or one not supported by the model results in a `ResourceNotFound` exception.
+> Check which is the API version that your deployment is using. Using a wrong `api_version` or one not supported by the model results in a `ResourceNotFound` exception.
 
 ### Inference parameters
 