Explore our [samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/ai/azure-ai-inference/src/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/java/reference) to get started.
# [REST](#tab/rest)
Use the reference section to explore the API design and the available parameters, and provide the authentication token in the `Authorization` header. For example, the reference section for [Chat completions](../../../ai-studio/reference/reference-model-inference-chat-completions.md) details how to use the route `/chat/completions` to generate predictions based on chat-formatted instructions. Notice that the path `/models` is appended to the root of the URL:
__Request__
```http
POST models/chat/completions?api-version=2024-04-01-preview
Authorization: Bearer <bearer-token>
Content-Type: application/json
```
---
For testing purposes, the easiest way to get a valid token for your user account is to use the Azure CLI. In a console, run the following Azure CLI command:
```azurecli
az account get-access-token --resource https://cognitiveservices.azure.com --query "accessToken" --output tsv
```
Models deployed to Azure AI model inference in Azure AI Services support key-less authorization using Microsoft Entra ID. It enhances security, simplifies the user experience, reduces operational complexity, and provides robust compliance support for modern development. This makes it a strong choice for organizations adopting secure and scalable identity management solutions. You can configure Microsoft Entra ID authorization in the resource and, optionally, **disable key-based authentication** to prevent anyone from using keys to access the service.
This article explains how to configure Microsoft Entra ID for inference in Azure AI model inference.
## Understand roles in the context of resources in Azure
Microsoft Entra ID uses role-based access control (RBAC) for authorization. Roles are central to managing access to your cloud resources. A role is essentially a collection of permissions that define which actions can be performed on specific Azure resources. By assigning roles to users, groups, service principals, or managed identities (collectively known as security principals), you control their access to specific resources within your Azure environment.
When you assign a role, you specify the security principal, the role definition, and the scope. This combination is known as a role assignment. Azure AI model inference is a capability of the Azure AI Services resource; hence, the roles assigned to that resource control access for inference.
There are two different types of access to the resources:
* **Administration access**: Actions related to the administration of the resource. These operations usually change the state of the resource and its configuration. In Azure, they're control-plane operations and can be executed using the Azure portal, the Azure CLI, or infrastructure as code. Examples include creating a new model deployment, changing content filtering configurations, changing the version of the model served, or changing the SKU of a deployment.
* **Developer access**: Actions related to the consumption of the resource. These operations consume the capabilities of the resource, for example, invoking the chat completions API. However, the user can't change the state of the resource or its configuration.
In Azure, administration operations are always performed using Microsoft Entra ID. Roles like **Cognitive Services Contributor** allow you to perform those operations. On the other hand, developer operations can be performed using either access keys or Microsoft Entra ID. Roles like **Cognitive Services User** allow you to perform those operations.
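As an illustration, granting a user developer access amounts to a role assignment at the resource scope. The following sketch uses the Azure CLI; the principal ID, subscription ID, resource group, and account name are placeholders you would replace with your own values:

```azurecli
az role assignment create \
    --assignee "<user-or-principal-id>" \
    --role "Cognitive Services User" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<account-name>"
```

Scoping the assignment to the individual resource (rather than the subscription or resource group) keeps the grant as narrow as possible.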
> [!IMPORTANT]
> Having administration access to a resource doesn't necessarily grant developer access to it. Explicit access by granting roles is still required. It's analogous to how database servers work: having administrator access to the database server doesn't mean you can read the data inside a database.
Follow these steps to configure developer access to Azure AI model inference in the Azure AI Services resource.
To complete this article, you need:
* An Azure AI services resource. For more information, see [Create an Azure AI Services resource](/articles/ai-foundry/model-inference/how-to/quickstart-create-resources.md).
* Administrator roles for the scope of the Azure AI Services resource or the resource group.
Follow these steps to configure Microsoft Entra ID for inference if you're using **projects or hubs** in Azure AI Foundry. If you're not using them, start from step 5 using the Azure portal.
1. Go to the [Azure portal](https://portal.azure.com) and locate the Azure AI Services resource you're using. If you're using Azure AI Foundry with projects or hubs, you can navigate to it by:
1. Go to [Azure AI Foundry portal](https://ai.azure.com).
2. On the landing page, select **Open management center**.
3. Go to the section **Connected resources** and select the connection to the Azure AI Services resource that you want to configure. If it isn't listed, select **View all** to see the full list.
4. On the **Connection details** section, under **Resource**, select the name of the Azure resource. A new page opens.
5. You're now in the [Azure portal](https://portal.azure.com), where you can manage all aspects of the resource itself.
2. On the left navigation bar, select **Access control (IAM)**.
Notice that key-based access is still possible for users that already have keys.
## Use Microsoft Entra ID in your code
Once you've configured Microsoft Entra ID in your resource, update your code to use it when consuming the inference endpoint. The following example shows how to use a chat completions model:
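As a minimal sketch using only the Python standard library, the request can be built with a Microsoft Entra ID access token instead of an API key. The endpoint URL and token below are placeholder assumptions; the token could come, for example, from the `az account get-access-token` command shown earlier:

```python
import json
import urllib.request

# Placeholders (assumptions): substitute your resource endpoint and a real
# Entra ID access token, e.g. one obtained with `az account get-access-token`.
ENDPOINT = "https://<resource-name>.services.ai.azure.com/models"
TOKEN = "<access-token>"

def make_chat_request(endpoint: str, token: str, messages: list) -> urllib.request.Request:
    """Build a chat completions request authorized with an Entra ID bearer token."""
    body = json.dumps({"messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url=f"{endpoint}/chat/completions?api-version=2024-04-01-preview",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",  # bearer token instead of an api-key header
            "Content-Type": "application/json",
        },
        method="POST",
    )

request = make_chat_request(ENDPOINT, TOKEN, [{"role": "user", "content": "Hello!"}])
print(request.full_url)
# Sending is omitted here; urllib.request.urlopen(request) would perform the call.
```

In production code, prefer the Azure SDK clients with a credential object so that token acquisition and refresh are handled for you.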
## Disable key-based authentication in the resource
Disabling key-based authentication is advisable when you've implemented Microsoft Entra ID and fully addressed compatibility or fallback concerns in all the applications that consume the service.
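As an illustrative sketch, key-based authentication can be turned off on the resource with a generic Azure CLI resource update that sets the `disableLocalAuth` property; the account name and resource group below are placeholders:

```azurecli
az resource update \
    --name "<account-name>" \
    --resource-group "<resource-group>" \
    --resource-type "Microsoft.CognitiveServices/accounts" \
    --set properties.disableLocalAuth=true
```

After this change, requests that authenticate with an access key are rejected, so verify that every client has switched to Microsoft Entra ID first.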