
Commit c9406eb

Merge pull request #2495 from MicrosoftDocs/main

1/24/2025 AM Publish

2 parents 7c9f431 + 73e1ba1

File tree: 13 files changed, +472 -22 lines changed
Lines changed: 32 additions & 0 deletions

@@ -0,0 +1,32 @@
---
title: Configure key-less authentication with Microsoft Entra ID
titleSuffix: Azure AI Foundry
description: Learn how to configure key-less authorization to use Azure AI model inference with Microsoft Entra ID.
ms.service: azure-ai-model-inference
ms.topic: how-to
ms.date: 10/01/2024
ms.custom: ignite-2024, github-universe-2024
manager: nitinme
author: mrbullwinkle
ms.author: fasantia
recommendations: false
zone_pivot_groups: azure-ai-models-deployment
---

# Configure key-less authentication with Microsoft Entra ID

::: zone pivot="ai-foundry-portal"
[!INCLUDE [portal](../includes/configure-entra-id/portal.md)]
::: zone-end

::: zone pivot="programming-language-cli"
[!INCLUDE [cli](../includes/configure-entra-id/cli.md)]
::: zone-end

::: zone pivot="programming-language-bicep"
[!INCLUDE [bicep](../includes/configure-entra-id/bicep.md)]
::: zone-end

## Next steps

* [Develop applications using Azure AI model inference service in Azure AI services](../supported-languages.md)

articles/ai-foundry/model-inference/how-to/quickstart-ai-project.md

Lines changed: 12 additions & 11 deletions

@@ -14,17 +14,17 @@ recommendations: false

# Configure your AI project to use Azure AI model inference

- If you already have an AI project in an existing AI Hub, models via "Models as a Service" are by default deployed inside of your project as stand-alone endpoints. Each model deployment has its own set of URI and credentials to access it. Azure OpenAI models are deployed to Azure AI Services resource or to the Azure OpenAI Service resource.
+ If you already have an AI project in Azure AI Foundry, the model catalog deploys models from third-party model providers as stand-alone endpoints in your project by default. Each model deployment has its own URI and credentials to access it. Azure OpenAI models, on the other hand, are deployed to an Azure AI Services resource or to the Azure OpenAI Service resource.

- You can configure the AI project to connect with the Azure AI model inference in Azure AI services. Once configured, **deployments of Models as a Service models happen to the connected Azure AI Services resource** instead to the project itself, giving you a single set of endpoint and credential to access all the models deployed in Azure AI Foundry.
+ You can change this behavior and deploy both types of models to Azure AI Services resources using Azure AI model inference. Once configured, **deployments of Models as a Service models that support pay-as-you-go billing happen to the connected Azure AI Services resource** instead of to the project itself, giving you a single endpoint and credential to access all the models deployed in Azure AI Foundry. You can manage Azure OpenAI and third-party provider models in the same way.

Additionally, deploying models to Azure AI model inference brings the extra benefits of:

> [!div class="checklist"]
- > * [Routing capability](../concepts/endpoints.md#routing)
- > * [Custom content filters](../concepts/content-filter.md)
- > * Global capacity deployment
- > * Entra ID support and role-based access control
+ > * [Routing capability](../concepts/endpoints.md#routing).
+ > * [Custom content filters](../concepts/content-filter.md).
+ > * Global capacity deployment type.
+ > * [Key-less authentication](configure-entra-id.md) with role-based access control.

In this article, you learn how to configure your project to use models deployed in Azure AI model inference in Azure AI services.
@@ -104,7 +104,7 @@ For each model you want to deploy under Azure AI model inference, follow these steps:

6. You can configure the deployment settings at this time. By default, the deployment receives the name of the model you're deploying. The deployment name is used in the `model` parameter for requests to route to this particular model deployment. It allows you to configure specific names for your models when you attach specific configurations. For instance, `o1-preview-safe` for a model with a strict content safety content filter.

- 7. We automatically select an Azure AI Services connection depending on your project because you have turned on the feature **Deploy models to Azure AI model inference service**. Use the **Customize** option to change the connection based on your needs. If you're deploying under the **Standard** deployment type, the models need to be available in the region of the Azure AI Services resource.
+ 7. We automatically select an Azure AI Services connection depending on your project because you turned on the feature **Deploy models to Azure AI model inference service**. Use the **Customize** option to change the connection based on your needs. If you're deploying under the **Standard** deployment type, the models need to be available in the region of the Azure AI Services resource.

   :::image type="content" source="../media/add-model-deployments/models-deploy-customize.png" alt-text="Screenshot showing how to customize the deployment if needed." lightbox="../media/add-model-deployments/models-deploy-customize.png":::

@@ -152,7 +152,7 @@ Although you configured the project to use the Azure AI model inference, existing

### Upgrade your code with the new endpoint

- Once the models are deployed under Azure AI Services, you can upgrade your code to use the Azure AI model inference endpoint. The main difference between how Serverless API endpoints and Azure AI model inference works reside in the endpoint URL and model parameter. While Serverless API Endpoints have set of URI and key per each model deployment, Azure AI model inference has only one for all of them.
+ Once the models are deployed under Azure AI Services, you can upgrade your code to use the Azure AI model inference endpoint. The main difference between how Serverless API endpoints and Azure AI model inference work resides in the endpoint URL and the model parameter. While Serverless API endpoints have a URI and key per model deployment, Azure AI model inference has only one for all of them.

The following table summarizes the changes you have to introduce:
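To make the endpoint difference concrete, here's a minimal Python sketch, assuming the `azure-ai-inference` package; the endpoint URLs and environment variable names are placeholders:

```python
import os
from azure.ai.inference import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential

# Before: each Serverless API endpoint has its own URI and key.
mistral_client = ChatCompletionsClient(
    endpoint="https://<mistral-deployment>.<region>.models.ai.azure.com",
    credential=AzureKeyCredential(os.environ["MISTRAL_API_KEY"]),
)

# After: a single Azure AI model inference endpoint for all deployments.
# The `model` parameter routes each request to a specific deployment.
client = ChatCompletionsClient(
    endpoint="https://<resource-name>.services.ai.azure.com/models",
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_KEY"]),
)
response = client.complete(
    model="mistral-large-2407",  # the deployment name
    messages=[{"role": "user", "content": "Hello!"}],
)
```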

@@ -186,10 +186,11 @@ For each model deployed as Serverless API Endpoints, follow these steps:

## Limitations

- Azure AI model inference in Azure AI Services gives users access to flagship models in the Azure AI model catalog. However, only models supporting pay-as-you-go billing (Models as a Service) are available for deployment.
+ Consider the following limitations when configuring your project to use Azure AI model inference:

- Models requiring compute quota from your subscription (Managed Compute), including custom models, can only be deployed within a given project as Managed Online Endpoints and continue to be accessible using their own set of endpoint URI and credentials.
+ * Only models that support pay-as-you-go billing (Models as a Service) are available for deployment to Azure AI model inference. Models that require compute quota from your subscription (Managed Compute), including custom models, can only be deployed within a given project as Managed Online Endpoints, and they continue to be accessible using their own endpoint URI and credentials.
+ * Models available as both pay-as-you-go billing and managed compute offerings are, by default, deployed to Azure AI model inference in Azure AI services resources. The Azure AI Foundry portal doesn't offer a way to deploy them to Managed Online Endpoints. You have to turn off the feature mentioned in [Configure the project to use Azure AI model inference](#configure-the-project-to-use-azure-ai-model-inference), or use the Azure CLI, Azure ML SDK, or ARM templates to perform the deployment.

## Next steps

- * [Add more models](create-model-deployments.md) to your endpoint.
+ * [Add more models](create-model-deployments.md) to your endpoint.

articles/ai-foundry/model-inference/includes/code-create-chat-client-entra.md

Lines changed: 42 additions & 4 deletions

@@ -28,6 +28,7 @@ from azure.identity import DefaultAzureCredential
model = ChatCompletionsClient(
    endpoint=os.environ["AZUREAI_ENDPOINT_URL"],
    credential=DefaultAzureCredential(),
+    model="mistral-large-2407",
)
```
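For context, a short usage sketch of the client configured in the snippet above, assuming the `azure-ai-inference` Python package:

```python
from azure.ai.inference.models import SystemMessage, UserMessage

# Because `model="mistral-large-2407"` is set on the client, requests are
# routed to that deployment by default.
response = model.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Explain key-less authentication in one sentence."),
    ]
)
print(response.choices[0].message.content)
```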

@@ -48,7 +49,8 @@ import { DefaultAzureCredential } from "@azure/identity";

const client = new ModelClient(
    process.env.AZUREAI_ENDPOINT_URL,
-    new DefaultAzureCredential()
+    new DefaultAzureCredential(),
+    "mistral-large-2407"
);
```

@@ -79,13 +81,43 @@ Then, you can use the package to consume the model. The following example shows
```csharp
ChatCompletionsClient client = new ChatCompletionsClient(
    new Uri(Environment.GetEnvironmentVariable("AZURE_INFERENCE_ENDPOINT")),
-    new DefaultAzureCredential(includeInteractiveCredentials: true)
+    new DefaultAzureCredential(includeInteractiveCredentials: true),
+    "mistral-large-2407"
);
```

+ # [Java](#tab/java)
+
+ Add the package to your project:
+
+ ```xml
+ <dependency>
+     <groupId>com.azure</groupId>
+     <artifactId>azure-ai-inference</artifactId>
+     <version>1.0.0-beta.1</version>
+ </dependency>
+ <dependency>
+     <groupId>com.azure</groupId>
+     <artifactId>azure-identity</artifactId>
+     <version>1.13.3</version>
+ </dependency>
+ ```
+
+ Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions:
+
+ ```java
+ ChatCompletionsClient client = new ChatCompletionsClientBuilder()
+     .credential(new DefaultAzureCredentialBuilder().build())
+     .endpoint("{endpoint}")
+     .model("mistral-large-2407")
+     .buildClient();
+ ```
+
+ Explore our [samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/ai/azure-ai-inference/src/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/java/reference) to get started.

# [REST](#tab/rest)

- Use the reference section to explore the API design and which parameters are available and indicate authentication token in the header `Authorization`. For example, the reference section for [Chat completions](reference-model-inference-chat-completions.md) details how to use the route `/chat/completions` to generate predictions based on chat-formatted instructions. Notice that the path `/models` is included to the root of the URL:
+ Use the reference section to explore the API design and the available parameters, and indicate the authentication token in the `Authorization` header. For example, the reference section for [Chat completions](../../../ai-studio/reference/reference-model-inference-chat-completions.md) details how to use the route `/chat/completions` to generate predictions based on chat-formatted instructions. Notice that the path `/models` is appended to the root of the URL:

__Request__

@@ -94,4 +126,10 @@ POST models/chat/completions?api-version=2024-04-01-preview
Authorization: Bearer <bearer-token>
Content-Type: application/json
```
- ---
+
+ For testing purposes, the easiest way to get a valid token for your user account is to use the Azure CLI. In a console, run the following Azure CLI command:
+
+ ```azurecli
+ az account get-access-token --resource https://cognitiveservices.azure.com --query "accessToken" --output tsv
+ ```
+ ---
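As an end-to-end illustration of the REST flow, here's a minimal Python sketch that sends the request above with a token obtained from the Azure CLI; the `requests` package and the environment variable names are assumptions:

```python
import os
import requests  # pip install requests

endpoint = os.environ["AZUREAI_ENDPOINT_URL"]  # e.g. https://<resource>.services.ai.azure.com
token = os.environ["AZURE_BEARER_TOKEN"]       # output of the az command above

response = requests.post(
    f"{endpoint}/models/chat/completions?api-version=2024-04-01-preview",
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
    json={
        "model": "mistral-large-2407",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```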
Lines changed: 25 additions & 0 deletions

@@ -0,0 +1,25 @@
---
manager: nitinme
author: mrbullwinkle
ms.author: fasantia
ms.service: azure-ai-model-inference
ms.date: 01/23/2025
ms.topic: include
---

### Options for credential when using Microsoft Entra ID

`DefaultAzureCredential` is an opinionated, ordered sequence of mechanisms for authenticating to Microsoft Entra ID. Each authentication mechanism is a class derived from the `TokenCredential` class and is known as a credential. At runtime, `DefaultAzureCredential` attempts to authenticate using the first credential. If that credential fails to acquire an access token, the next credential in the sequence is attempted, and so on, until an access token is successfully obtained. In this way, your app can use different credentials in different environments without writing environment-specific code.

When the preceding code runs on your local development workstation, it looks in the environment variables for an application service principal or at locally installed developer tools, such as Visual Studio, for a set of developer credentials. Either approach can be used to authenticate the app to Azure resources during local development.

When deployed to Azure, this same code can also authenticate your app to other Azure resources. `DefaultAzureCredential` can retrieve environment settings and managed identity configurations to authenticate to other services automatically.

### Best practices

* Use deterministic credentials in production environments: strongly consider moving from `DefaultAzureCredential` to one of the following deterministic solutions in production:

  * A specific `TokenCredential` implementation, such as `ManagedIdentityCredential`. See the [derived list for options](/dotnet/api/azure.core.tokencredential#definition).
  * A pared-down `ChainedTokenCredential` implementation optimized for the Azure environment in which your app runs. `ChainedTokenCredential` essentially creates a specific allowlist of acceptable credential options, such as `ManagedIdentity` for production and `VisualStudioCredential` for development (see the sketch after this list).

* Configure system-assigned or user-assigned managed identities for the Azure resources where your code is running whenever possible, and configure Microsoft Entra ID access to those specific identities.
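For illustration, a minimal Python sketch of both patterns described above, assuming the `azure-identity` package (the chain members shown are examples, not prescriptions):

```python
from azure.identity import (
    AzureCliCredential,
    ChainedTokenCredential,
    DefaultAzureCredential,
    ManagedIdentityCredential,
)

# Development convenience: DefaultAzureCredential walks its chain
# (environment variables, managed identity, developer tools, ...) until
# one mechanism returns a token.
dev_credential = DefaultAzureCredential()

# Production best practice: a pared-down allowlist. Only managed identity
# is attempted in the deployed environment, with the Azure CLI login as a
# local-development fallback.
prod_credential = ChainedTokenCredential(
    ManagedIdentityCredential(),
    AzureCliCredential(),
)
```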
Lines changed: 107 additions & 0 deletions

@@ -0,0 +1,107 @@
---
manager: nitinme
author: mrbullwinkle
ms.author: fasantia
ms.service: azure-ai-model-inference
ms.date: 12/15/2024
ms.topic: include
zone_pivot_groups: azure-ai-models-deployment
---

[!INCLUDE [Header](intro.md)]

* Install the [Azure CLI](/cli/azure/).

* Identify the following information:

  * Your Azure subscription ID.

## About this tutorial

The example in this article is based on code samples contained in the [Azure-Samples/azureai-model-inference-bicep](https://github.com/Azure-Samples/azureai-model-inference-bicep) repository. To run the commands locally without having to copy or paste file content, use the following commands to clone the repository and go to the folder for your coding language:

```azurecli
git clone https://github.com/Azure-Samples/azureai-model-inference-bicep
```

The files for this example are in:

```azurecli
cd azureai-model-inference-bicep/infra
```

## Understand the resources

The tutorial helps you create:

> [!div class="checklist"]
> * An Azure AI Services resource with key access disabled. For simplicity, this template doesn't deploy models.
> * A role assignment for a given security principal with the role **Cognitive Services User**.

You use the following assets to create those resources:

1. Use the template `modules/ai-services-template.bicep` to describe your Azure AI Services resource:

   __modules/ai-services-template.bicep__

   :::code language="bicep" source="~/azureai-model-inference-bicep/infra/modules/ai-services-template.bicep":::

   > [!TIP]
   > Notice that this template can take the parameter `allowKeys`, which, when `false`, disables the use of keys in the resource. This configuration is optional.

2. Use the template `modules/role-assignment-template.bicep` to describe a role assignment in Azure:

   __modules/role-assignment-template.bicep__

   :::code language="bicep" source="~/azureai-model-inference-bicep/infra/modules/role-assignment-template.bicep":::

## Create the resources

In your console, follow these steps:

1. Define the main deployment:

   __deploy-entra-id.bicep__

   :::code language="bicep" source="~/azureai-model-inference-bicep/infra/deploy-entra-id.bicep":::

2. Log in to Azure:

   ```azurecli
   az login
   ```

3. Ensure you're in the right subscription:

   ```azurecli
   az account set --subscription "<subscription-id>"
   ```

4. Run the deployment:

   ```azurecli
   RESOURCE_GROUP="<resource-group-name>"
   SECURITY_PRINCIPAL_ID="<your-security-principal-id>"

   az deployment group create \
       --resource-group $RESOURCE_GROUP \
       --template-file deploy-entra-id.bicep \
       --parameters securityPrincipalId=$SECURITY_PRINCIPAL_ID
   ```

5. The template outputs the Azure AI model inference endpoint that you can use to consume any of the model deployments you've created.

## Use Microsoft Entra ID in your code

Once you've configured Microsoft Entra ID in your resource, update your code to use it when consuming the inference endpoint. The following example shows how to use a chat completions model:

[!INCLUDE [code](../code-create-chat-client-entra.md)]

[!INCLUDE [about-credentials](about-credentials.md)]

## Disable key-based authentication in the resource

Disabling key-based authentication is advisable once you've implemented Microsoft Entra ID and fully addressed compatibility or fallback concerns in all the applications that consume the service.
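As a hedged sketch of that step, assuming the `azure-mgmt-cognitiveservices` package (resource names are placeholders; the same `disableLocalAuth` property can also be set declaratively in the Bicep template):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient
from azure.mgmt.cognitiveservices.models import Account, AccountProperties

client = CognitiveServicesManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
)

# disable_local_auth=True turns off API keys; only Microsoft Entra ID
# authentication remains available on the resource.
client.accounts.begin_update(
    resource_group_name="<resource-group-name>",
    account_name="<ai-services-resource-name>",
    account=Account(properties=AccountProperties(disable_local_auth=True)),
).result()
```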
Lines changed: 92 additions & 0 deletions

@@ -0,0 +1,92 @@
---
manager: nitinme
author: mrbullwinkle
ms.author: fasantia
ms.service: azure-ai-model-inference
ms.date: 12/15/2024
ms.topic: include
zone_pivot_groups: azure-ai-models-deployment
---

[!INCLUDE [Header](intro.md)]

* Install the [Azure CLI](/cli/azure/).

* Identify the following information:

  * Your Azure subscription ID.

  * Your Azure AI Services resource name.

  * The resource group where the Azure AI Services resource is deployed.

## Configure Microsoft Entra ID for inference

Follow these steps to configure Microsoft Entra ID for inference in your Azure AI Services resource:

1. Log in to your Azure subscription:

   ```azurecli
   az login
   ```

2. If you have more than one subscription, select the subscription where your resource is located:

   ```azurecli
   az account set --subscription "<subscription-id>"
   ```

3. Set the following environment variables with the name of the Azure AI Services resource you plan to use and its resource group:

   ```azurecli
   ACCOUNT_NAME="<ai-services-resource-name>"
   RESOURCE_GROUP="<resource-group>"
   ```

4. Get the full resource ID of your resource:

   ```azurecli
   RESOURCE_ID=$(az resource show -g $RESOURCE_GROUP -n $ACCOUNT_NAME --resource-type "Microsoft.CognitiveServices/accounts" --query id --output tsv)
   ```

5. Get the object ID of the security principal you want to assign permissions to. The following examples show how to get the object ID associated with:

   __Your own logged-in account:__

   ```azurecli
   OBJECT_ID=$(az ad signed-in-user show --query id --output tsv)
   ```

   __A security group:__

   ```azurecli
   OBJECT_ID=$(az ad group show --group "<group-name>" --query id --output tsv)
   ```

   __A service principal:__

   ```azurecli
   OBJECT_ID=$(az ad sp show --id "<service-principal-guid>" --query id --output tsv)
   ```

6. Assign the **Cognitive Services User** role to the security principal, scoped to the resource. By assigning this role, you're granting the security principal access to the resource.

   ```azurecli
   az role assignment create --assignee-object-id $OBJECT_ID --role "Cognitive Services User" --scope $RESOURCE_ID
   ```

7. The selected user can now use Microsoft Entra ID for inference.

   > [!TIP]
   > Keep in mind that Azure role assignments may take up to five minutes to propagate. Adding or removing users from a security group propagates immediately.

## Use Microsoft Entra ID in your code

Once Microsoft Entra ID is configured in your resource, update your code to use it when consuming the inference endpoint. The following example shows how to use a chat completions model:

[!INCLUDE [code](../code-create-chat-client-entra.md)]

[!INCLUDE [about-credentials](about-credentials.md)]
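For a quick check that the role assignment works, here's a small Python sketch assuming the `azure-identity` package (the scope shown is the standard Cognitive Services audience):

```python
from azure.identity import DefaultAzureCredential

# If the signed-in identity holds the Cognitive Services User role, a token
# for the Cognitive Services audience can be acquired without API keys.
credential = DefaultAzureCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")
print("Token acquired; expires at:", token.expires_on)
```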
