Once configured, create a client to connect to the endpoint. The parameter `model_name` in the constructor is not required for endpoints serving a single model, like serverless endpoints.
```python
import os
from llama_index.llms.azure_inference import AzureAICompletionsModel

llm = AzureAICompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    # Example model name; match the model deployed at your endpoint.
    model_name="mistral-large",
)
```
> [!TIP]
> If your model is an OpenAI model deployed to Azure OpenAI service or AI services resource, configure the client as indicated at [Azure OpenAI models](#azure-openai-models).
Alternatively, if your endpoint supports Microsoft Entra ID, you can use a token credential to create the client.
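A minimal sketch, assuming the `azure-identity` package and the same `AZURE_INFERENCE_ENDPOINT` environment variable as in the previous example:

```python
import os
from azure.identity import DefaultAzureCredential
from llama_index.llms.azure_inference import AzureAICompletionsModel

llm = AzureAICompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    # A token credential replaces the key when the endpoint
    # supports Microsoft Entra ID authentication.
    credential=DefaultAzureCredential(),
)
```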
If you are using Azure OpenAI models with key-based authentication, you need to pass the authentication key in the `api-key` header, which is the header expected by the Azure OpenAI service and Azure AI Services. This configuration is not required if you are using Microsoft Entra ID (formerly known as Azure AD). The following example shows how to configure the client:
```python
import os
from llama_index.llms.azure_inference import AzureAICompletionsModel

llm = AzureAICompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    # `credential` is required by the constructor, so pass an empty string;
    # the key goes in the `api-key` header instead.
    credential="",
    # `client_kwargs` forwards extra options, including headers, to the
    # underlying client. The environment variable name is illustrative.
    client_kwargs={"headers": {"api-key": os.environ["AZURE_INFERENCE_CREDENTIAL"]}},
)
```
Notice that `credential` is still being passed with an empty value since it's a required parameter.
### Inference parameters
You can configure how inference is performed for all the operations that use this client by setting extra parameters. This helps avoid specifying them on each call you make to the model.
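For example, a minimal sketch assuming the `temperature` and `model_kwargs` arguments of `AzureAICompletionsModel` (the values shown are illustrative):

```python
import os
from llama_index.llms.azure_inference import AzureAICompletionsModel

llm = AzureAICompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    temperature=0.0,              # applied to every request made with this client
    model_kwargs={"top_p": 1.0},  # extra parameters passed through to the model
)
```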