
Commit 58e387c: Merge pull request #6854 from msakande/freshness-configure-entra-id

freshness entra id configuration

2 parents: e4f46a8 + bd62756

8 files changed (+100, -98 lines)

articles/ai-foundry/foundry-models/how-to/configure-entra-id.md (6 additions, 3 deletions)

@@ -1,18 +1,21 @@
  ---
  title: Configure key-less authentication with Microsoft Entra ID
  titleSuffix: Azure AI Foundry
- description: Learn how to configure key-less authorization to use Azure AI Foundry Models with Microsoft Entra ID.
+ description: Learn how to configure key-less authorization to use Azure AI Foundry Models with Microsoft Entra ID and enhance security.
  ms.service: azure-ai-foundry
  ms.subservice: azure-ai-foundry-model-inference
  ms.topic: how-to
- ms.date: 08/29/2025
+ ms.date: 09/26/2025
  ms.custom: ignite-2024, github-universe-2024
  author: msakande
  ms.author: mopeakande
  recommendations: false
  zone_pivot_groups: azure-ai-models-deployment
  ms.reviewer: fasantia
  reviewer: santiagxf
+ ai-usage: ai-assisted
+
+ #CustomerIntent: As a developer, I want to configure keyless authentication with Microsoft Entra ID for Azure AI Foundry Models so that I can secure my AI model deployments without relying on API keys and leverage role-based access control for better security and compliance.
  ---

  # Configure key-less authentication with Microsoft Entra ID

@@ -29,6 +32,6 @@ reviewer: santiagxf
  [!INCLUDE [bicep](../../foundry-models/includes/configure-entra-id/bicep.md)]
  ::: zone-end

- ## Next steps
+ ## Next step

  * [Develop applications using Azure AI Foundry Models](../../model-inference/supported-languages.md)

articles/ai-foundry/foundry-models/includes/code-create-chat-client-entra.md (10 additions, 10 deletions)

@@ -3,20 +3,20 @@ manager: nitinme
  ms.service: azure-ai-foundry
  ms.subservice: azure-ai-foundry-model-inference
  ms.topic: include
- ms.date: 1/21/2025
+ ms.date: 09/26/2025
  ms.author: fasantia
  author: santiagxf
  ---

  # [Python](#tab/python)

- Install the package `azure-ai-inference` using your package manager, like pip:
+ Install the `azure-ai-inference` package, using a package manager like pip:

  ```bash
  pip install azure-ai-inference
  ```

- Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions with Entra ID:
+ Then, use the package to consume the model. The following example shows how to create a client to consume chat completions with Microsoft Entra ID:

  ```python
  import os
@@ -32,13 +32,13 @@ client = ChatCompletionsClient(

  # [JavaScript](#tab/javascript)

- Install the package `@azure-rest/ai-inference` using npm:
+ Install the `@azure-rest/ai-inference` package with npm:

  ```bash
  npm install @azure-rest/ai-inference
  ```

- Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions with Entra ID:
+ Then, use the package to consume the model. The following example shows how to create a client to consume chat completions with Microsoft Entra ID:

  ```javascript
  import ModelClient from "@azure-rest/ai-inference";
@@ -76,7 +76,7 @@ using Azure.Identity;
  using Azure.AI.Inference;
  ```

- Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions with Entra ID:
+ Then, use the package to consume the model. The following example shows how to create a client to consume chat completions with Microsoft Entra ID:

  ```csharp
  TokenCredential credential = new DefaultAzureCredential();
@@ -108,7 +108,7 @@ Add the package to your project:
  </dependency>
  ```

- Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions:
+ Then, use the package to consume the model. The following example shows how to create a client to consume chat completions:

  ```java
  TokenCredential defaultCredential = new DefaultAzureCredentialBuilder().build();
@@ -118,11 +118,11 @@ ChatCompletionsClient client = new ChatCompletionsClientBuilder()
  .buildClient();
  ```

- Explore our [samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/ai/azure-ai-inference/src/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/java/reference) to get yourself started.
+ Explore our [samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/ai/azure-ai-inference/src/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/java/reference) to get started.

  # [REST](#tab/rest)

- Use the reference section to explore the API design and which parameters are available and indicate authentication token in the header `Authorization`. For example, the reference section for [Chat completions](../../model-inference/reference/reference-model-inference-chat-completions.md) details how to use the route `/chat/completions` to generate predictions based on chat-formatted instructions. Notice that the path `/models` is included to the root of the URL:
+ Use the reference section to explore the API design and see which parameters are available. Indicate the authentication token in the header `Authorization`. For example, the reference section for [Chat completions](../../model-inference/reference/reference-model-inference-chat-completions.md) details how to use the route `/chat/completions` to generate predictions based on chat-formatted instructions. The path `/models` is included in the root of the URL:

  __Request__

@@ -132,7 +132,7 @@ Authorization: Bearer <bearer-token>
  Content-Type: application/json
  ```

- Tokens have to be issued with scope `https://cognitiveservices.azure.com/.default`.
+ Tokens must be issued with scope `https://cognitiveservices.azure.com/.default`.

  For testing purposes, the easiest way to get a valid token for your user account is to use the Azure CLI. In a console, run the following Azure CLI command:

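As an aside to the REST guidance updated in this file, the request it describes can be sketched in plain Python. The endpoint and token values below are placeholders for illustration; only the `/models/chat/completions` route, the `Authorization` header, and the token scope come from the documentation itself.

```python
# Sketch of the chat-completions request described in the REST tab.
# Placeholder endpoint and token; real tokens must be issued with the
# scope https://cognitiveservices.azure.com/.default.
def build_chat_request(endpoint: str, bearer_token: str) -> dict:
    """Assemble the URL, headers, and body for a chat-completions call."""
    return {
        # The /models path sits at the root of the URL, before the route.
        "url": f"{endpoint}/models/chat/completions",
        "headers": {
            "Authorization": f"Bearer {bearer_token}",
            "Content-Type": "application/json",
        },
        "body": {"messages": [{"role": "user", "content": "Hello"}]},
    }
```

Passing the resulting pieces to any HTTP client (for example, `requests.post(req["url"], headers=req["headers"], json=req["body"])`) would complete the call.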
articles/ai-foundry/foundry-models/includes/configure-entra-id/about-credentials.md (3 additions, 3 deletions)

@@ -4,15 +4,15 @@ author: santiagxf
  ms.author: fasantia
  ms.service: azure-ai-foundry
  ms.subservice: azure-ai-foundry-model-inference
- ms.date: 01/23/2025
+ ms.date: 09/26/2025
  ms.topic: include
  ---

  ### Options for credential when using Microsoft Entra ID

- `DefaultAzureCredential` is an opinionated, ordered sequence of mechanisms for authenticating to Microsoft Entra ID. Each authentication mechanism is a class derived from the `TokenCredential` class and is known as a credential. At runtime, `DefaultAzureCredential` attempts to authenticate using the first credential. If that credential fails to acquire an access token, the next credential in the sequence is attempted, and so on, until an access token is successfully obtained. In this way, your app can use different credentials in different environments without writing environment-specific code.
+ `DefaultAzureCredential` is an opinionated, ordered sequence of mechanisms for authenticating to Microsoft Entra ID. Each authentication mechanism is a class derived from the `TokenCredential` class and is known as a credential. At runtime, `DefaultAzureCredential` attempts to authenticate by using the first credential. If that credential fails to acquire an access token, the next credential in the sequence is attempted, and so on, until an access token is successfully obtained. In this way, your app can use different credentials in different environments without writing environment-specific code.

- When the preceding code runs on your local development workstation, it looks in the environment variables for an application service principal or at locally installed developer tools, such as Visual Studio, for a set of developer credentials. Either approach can be used to authenticate the app to Azure resources during local development.
+ When the preceding code runs on your local development workstation, it looks in the environment variables for an application service principal or at locally installed developer tools, such as Visual Studio, for a set of developer credentials. You can use either approach to authenticate the app to Azure resources during local development.

  When deployed to Azure, this same code can also authenticate your app to other Azure resources. `DefaultAzureCredential` can retrieve environment settings and managed identity configurations to authenticate to other services automatically.

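The `DefaultAzureCredential` paragraph in this file describes an ordered fallback chain. The following is a minimal, library-free sketch of that pattern, using hypothetical stub credentials; it is not the actual azure-identity implementation.

```python
# Minimal illustration of the ordered-fallback pattern behind
# DefaultAzureCredential; not the real azure-identity code.
class CredentialUnavailableError(Exception):
    pass

class ChainedCredential:
    def __init__(self, *credentials):
        self._credentials = credentials

    def get_token(self, scope):
        failures = []
        for credential in self._credentials:
            try:
                # The first credential that can produce a token wins.
                return credential.get_token(scope)
            except CredentialUnavailableError as error:
                failures.append(str(error))
        raise CredentialUnavailableError("; ".join(failures))

# Stubs standing in for, e.g., environment-based and developer-tool credentials.
class EnvironmentStub:
    def get_token(self, scope):
        raise CredentialUnavailableError("no environment variables set")

class DeveloperToolStub:
    def get_token(self, scope):
        return f"token-for-{scope}"

chain = ChainedCredential(EnvironmentStub(), DeveloperToolStub())
token = chain.get_token("https://cognitiveservices.azure.com/.default")
```

Because the environment stub fails, the chain silently falls through to the developer-tool stub, which is exactly how the same app code works unchanged across local and deployed environments.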
articles/ai-foundry/foundry-models/includes/configure-entra-id/bicep.md (17 additions, 17 deletions)

@@ -4,7 +4,7 @@ author: santiagxf
  ms.author: fasantia
  ms.service: azure-ai-foundry
  ms.subservice: azure-ai-foundry-model-inference
- ms.date: 12/15/2024
+ ms.date: 09/26/2025
  ms.topic: include
  zone_pivot_groups: azure-ai-models-deployment
  ---
@@ -19,7 +19,7 @@ zone_pivot_groups: azure-ai-models-deployment

  ## About this tutorial

- The example in this article is based on code samples contained in the [Azure-Samples/azureai-model-inference-bicep](https://github.com/Azure-Samples/azureai-model-inference-bicep) repository. To run the commands locally without having to copy or paste file content, use the following commands to clone the repository and go to the folder for your coding language:
+ The example in this article is based on code samples in the [Azure-Samples/azureai-model-inference-bicep](https://github.com/Azure-Samples/azureai-model-inference-bicep) repository. To run the commands locally without copying or pasting file content, use the following commands to clone the repository and go to the folder for your coding language:

  ```azurecli
  git clone https://github.com/Azure-Samples/azureai-model-inference-bicep
@@ -33,24 +33,24 @@ cd azureai-model-inference-bicep/infra

  ## Understand the resources

- The tutorial helps you create:
+ In this tutorial, you create the following resources:

- > [!div class="checklist"]
- > * An Azure AI Foundry (formerly known Azure AI Services) resource with key access disabled. For simplicity, this template doesn't deploy models.
- > * A role-assignment for a given security principal with the role **Cognitive Services User**.

- You are using the following assets to create those resources:
+ * An Azure AI Foundry resource (formerly known as Azure AI Services resource) with key access disabled. For simplicity, this template doesn't deploy models.
+ * A role-assignment for a given security principal with the role **Cognitive Services User**.

- 1. Use the template `modules/ai-services-template.bicep` to describe your Azure AI Foundry (formerly known Azure AI Services) resource:
+ To create these resources, use the following assets:
+
+ 1. Use the template `modules/ai-services-template.bicep` to describe your Azure AI Foundry resource:

  __modules/ai-services-template.bicep__

  :::code language="bicep" source="~/azureai-model-inference-bicep/infra/modules/ai-services-template.bicep":::

  > [!TIP]
- > Notice that this template can take the parameter `allowKeys` which, when `false` will disable the use of keys in the resource. This configuration is optional.
+ > This template accepts the `allowKeys` parameter. Set it to `false` to disable key access in the resource. This configuration is optional.

- 2. Use the template `modules/role-assignment-template.bicep` to describe a role assignment in Azure:
+ 1. Use the template `modules/role-assignment-template.bicep` to describe a role assignment in Azure:

  __modules/role-assignment-template.bicep__

@@ -66,36 +66,36 @@ In your console, follow these steps:

  :::code language="bicep" source="~/azureai-model-inference-bicep/infra/deploy-entra-id.bicep":::

- 2. Log into Azure:
+ 1. Sign in to Azure:

  ```azurecli
  az login
  ```

- 3. Ensure you are in the right subscription:
+ 1. Make sure you're in the right subscription:

  ```azurecli
  az account set --subscription "<subscription-id>"
  ```

- 4. Run the deployment:
+ 1. Run the deployment:

  ```azurecli
  RESOURCE_GROUP="<resource-group-name>"
  SECURITY_PRINCIPAL_ID="<your-security-principal-id>"

  az deployment group create \
  --resource-group $RESOURCE_GROUP \
- --securityPrincipalId $SECURITY_PRINCIPAL_ID
+ --parameters securityPrincipalId=$SECURITY_PRINCIPAL_ID \
  --template-file deploy-entra-id.bicep
  ```

- 7. The template outputs the Azure AI Foundry Models endpoint that you can use to consume any of the model deployments you have created.
+ 1. The template outputs the Azure AI Foundry Models endpoint that you can use to consume any of the model deployments you created.

  ## Use Microsoft Entra ID in your code

- Once you configured Microsoft Entra ID in your resource, you need to update your code to use it when consuming the inference endpoint. The following example shows how to use a chat completions model:
+ After you configure Microsoft Entra ID in your resource, update your code to use it when consuming the inference endpoint. The following example shows how to use a chat completions model:

  [!INCLUDE [code](../code-create-chat-client-entra.md)]

@@ -107,7 +107,7 @@ Once you configured Microsoft Entra ID in your resource, you need to update your

  ## Disable key-based authentication in the resource

- Disabling key-based authentication is advisable when you implemented Microsoft Entra ID and fully addressed compatibility or fallback concerns in all the applications that consume the service. You can achieve it by changing the property `disableLocalAuth`:
+ Disable key-based authentication when you implement Microsoft Entra ID and fully address compatibility or fallback concerns in all the applications that consume the service. Change the `disableLocalAuth` property to disable key-based authentication:

  __modules/ai-services-template.bicep__

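A side note on the deployment fix in this file: `az deployment group create` has no `--securityPrincipalId` flag, so Bicep template parameters must be passed through `--parameters name=value`. A hypothetical helper that assembles the corrected argument list:

```python
# Hypothetical helper mirroring the corrected command from the diff.
# Bicep template parameters are passed via --parameters name=value;
# there is no per-parameter flag on `az deployment group create`.
def deployment_command(resource_group: str, principal_id: str,
                       template: str = "deploy-entra-id.bicep") -> list:
    return [
        "az", "deployment", "group", "create",
        "--resource-group", resource_group,
        "--parameters", f"securityPrincipalId={principal_id}",
        "--template-file", template,
    ]
```

Passing this list to `subprocess.run` would execute the same deployment shown in the azurecli block above.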
articles/ai-foundry/foundry-models/includes/configure-entra-id/cli.md (16 additions, 16 deletions)

@@ -4,7 +4,7 @@ author: santiagxf
  ms.author: fasantia
  ms.service: azure-ai-foundry
  ms.subservice: azure-ai-foundry-model-inference
- ms.date: 12/15/2024
+ ms.date: 09/26/2025
  ms.topic: include
  zone_pivot_groups: azure-ai-models-deployment
  ---
@@ -17,76 +17,76 @@ zone_pivot_groups: azure-ai-models-deployment

  * Your Azure subscription ID.

- * Your Azure AI Foundry (formerly known Azure AI Services) resource name.
+ * Your Azure AI Foundry resource (formerly known as Azure AI Services resource) name.

- * The resource group where the Azure AI Foundry (formerly known Azure AI Services) resource is deployed.
+ * The resource group where you deployed the Azure AI Foundry resource.

  ## Configure Microsoft Entra ID for inference

  Follow these steps to configure Microsoft Entra ID for inference:

- 1. Log in into your Azure subscription:
+ 1. Sign in to your Azure subscription.

  ```azurecli
  az login
  ```

- 2. If you have more than one subscription, select the subscription where your resource is located:
+ 1. If you have more than one subscription, select the subscription where your resource is located.

  ```azurecli
  az account set --subscription "<subscription-id>"
  ```

- 3. Set the following environment variables with the name of the Azure AI Foundry (formerly known Azure AI Services) resource you plan to use and resource group.
+ 1. Set the following environment variables with the name of the Azure AI Foundry resource you plan to use and resource group.

  ```azurecli
  ACCOUNT_NAME="<ai-services-resource-name>"
  RESOURCE_GROUP="<resource-group>"
  ```

- 4. Get the full name of your resource:
+ 1. Get the full name of your resource.

  ```azurecli
- RESOURCE_ID=$(az resource show -g $RESOURCE_GROUP -n $ACCOUNT_NAME --resource-type "Microsoft.CognitiveServices/accounts")
+ RESOURCE_ID=$(az resource show -g $RESOURCE_GROUP -n $ACCOUNT_NAME --resource-type "Microsoft.CognitiveServices/accounts" --query id --output tsv)
  ```

- 5. Get the object ID of the security principal you want to assign permissions to. The following example shows how to get the object ID associated with:
+ 1. Get the object ID of the security principal you want to assign permissions to. The following example shows how to get the object ID associated with:

- __Your own logged in account:__
+ **Your own signed in account:**

  ```azurecli
  OBJECT_ID=$(az ad signed-in-user show --query id --output tsv)
  ```

- __A security group:__
+ **A security group:**

  ```azurecli
  OBJECT_ID=$(az ad group show --group "<group-name>" --query id --output tsv)
  ```

- __A service principal:__
+ **A service principal:**

  ```azurecli
  OBJECT_ID=$(az ad sp show --id "<service-principal-guid>" --query id --output tsv)
  ```

- 6. Assign the **Cognitive Services User** role to the service principal (scoped to the resource). By assigning a role, you're granting service principal access to this resource.
+ 1. Assign the **Cognitive Services User** role to the service principal (scoped to the resource). By assigning a role, you grant the service principal access to this resource.

  ```azurecli
  az role assignment create --assignee-object-id $OBJECT_ID --role "Cognitive Services User" --scope $RESOURCE_ID
  ```

- 8. The selected user can now use Microsoft Entra ID for inference.
+ 1. The selected user can now use Microsoft Entra ID for inference.

  > [!TIP]
- > Keep in mind that Azure role assignments may take up to five minutes to propagate. Adding or removing users from a security group propagates immediately.
+ > Keep in mind that Azure role assignments can take up to five minutes to propagate. Adding or removing users from a security group propagates immediately.

  ## Use Microsoft Entra ID in your code

- Once Microsoft Entra ID is configured in your resource, you need to update your code to use it when consuming the inference endpoint. The following example shows how to use a chat completions model:
+ After you configure Microsoft Entra ID in your resource, update your code to use it when consuming the inference endpoint. The following example shows how to use a chat completions model:

  [!INCLUDE [code](../code-create-chat-client-entra.md)]

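A note on the `RESOURCE_ID` fix in this file: adding `--query id --output tsv` makes the variable hold the ARM resource ID string rather than the resource's full JSON description, which is what the role-assignment `--scope` expects. The ID for a Cognitive Services account has the following shape (illustrative values):

```python
# Illustrative: the ARM resource ID that `az resource show --query id`
# returns for an Azure AI Foundry (Cognitive Services) account.
def cognitive_services_resource_id(subscription_id: str,
                                   resource_group: str,
                                   account_name: str) -> str:
    return (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.CognitiveServices/accounts/{account_name}"
    )
```

Without the `--query id` projection, `$RESOURCE_ID` would contain the whole JSON object, and `az role assignment create --scope $RESOURCE_ID` would fail to parse it.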