Commit 7979110

Merge pull request #5365 from MicrosoftDocs/main
6/4/2025 AM Publish
2 parents 2a91c44 + 3d496e9 commit 7979110

File tree

6 files changed: 25 additions, 18 deletions


articles/ai-foundry/concepts/rbac-azure-ai-foundry.md

Lines changed: 5 additions & 5 deletions

@@ -9,7 +9,7 @@ ms.custom:
 - build-2024
 - ignite-2024
 ms.topic: conceptual
-ms.date: 03/04/2025
+ms.date: 06/04/2025
 ms.reviewer: deeikele
 ms.author: larryfr
 author: Blackmist
@@ -222,10 +222,10 @@ For example, if you're trying to consume a new Blob storage, you need to ensure
 
 If you're an owner of a Foundry account resource, you can add and remove roles for Azure AI Foundry. From the **Home** page in [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs), select your Foundry resource. Then select **Users** to add and remove users for the hub. You can also manage permissions from the [Azure portal](https://portal.azure.com) under **Access Control (IAM)** or through the Azure CLI.
 
-For example, use the Azure CLI to assign the Azure AI User role to `[email protected]` for resource group `this-rg` with the following command:
+For example, the following command assigns the Azure AI User role to `[email protected]` for resource group `this-rg` in the subscription with an ID of `00000000-0000-0000-0000-000000000000`:
 
 ```azurecli
-az role assignment create --role "Azure AI User" --assignee "[email protected]" --resource-group this-rg
+az role assignment create --role "Azure AI User" --assignee "[email protected]" --scope /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/this-rg
 ```
 
 ## Create custom roles
@@ -461,10 +461,10 @@ For example, if you're trying to consume a new Blob storage, you need to ensure
 
 ## Manage access with roles
 
-If you're an owner of a hub, you can add and remove roles for Azure AI Foundry. Go to the **Home** page in [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs) and select your hub. Then select **Users** to add and remove users for the hub. You can also manage permissions from the Azure portal under **Access Control (IAM)** or through the Azure CLI. For example, use the [Azure CLI](/cli/azure/) to assign the Azure AI Developer role to "[email protected]" for resource group "this-rg" with the following command:
+If you're an owner of a hub, you can add and remove roles for Azure AI Foundry. Go to the **Home** page in [Azure AI Foundry](https://ai.azure.com/?cid=learnDocs) and select your hub. Then select **Users** to add and remove users for the hub. You can also manage permissions from the Azure portal under **Access Control (IAM)** or through the Azure CLI. For example, to assign the Azure AI Developer role to "[email protected]" for resource group "this-rg" in the subscription with an ID of `00000000-0000-0000-0000-000000000000`, you can use the following Azure CLI command:
 
 ```azurecli-interactive
-az role assignment create --role "Azure AI Developer" --assignee "[email protected]" --resource-group this-rg
+az role assignment create --role "Azure AI Developer" --assignee "[email protected]" --scope /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/this-rg
 ```
 
 ## Create custom roles
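Not part of this commit, but as a hedged aside: after running either `az role assignment create` command above, you can confirm the assignment landed at the expected scope. A minimal sketch, assuming the Azure CLI and placeholder values for the assignee, subscription ID, and resource group:

```azurecli
# Verify the role assignment at the resource group scope.
# Placeholder values: replace the assignee UPN, subscription ID, and resource group.
az role assignment list \
    --assignee "user@contoso.com" \
    --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/this-rg" \
    --output table
```

The explicit `--scope` form mirrors the change in this diff, which replaced `--resource-group` with a subscription-qualified resource group path.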

articles/ai-foundry/how-to/develop/sdk-overview.md

Lines changed: 3 additions & 1 deletion

@@ -40,7 +40,6 @@ The Azure AI Foundry SDK is a set of client libraries and services designed to w
 az login
 ```
 
-[!INCLUDE [feature-preview](../../includes/feature-preview.md)]
 
 ## Unified Projects client library
 
@@ -69,6 +68,7 @@ The Azure AI Foundry Projects client library is a unified library that enables y
 
 ::: zone pivot="programming-language-java"
 
+[!INCLUDE [feature-preview](../../includes/feature-preview.md)]
 
 * Add these packages to your installation (preview):
   * `com.azure.ai.projects`
@@ -95,6 +95,8 @@ The Azure AI Foundry Projects client library is a unified library that enables y
 
 ::: zone pivot="programming-language-javascript"
 
+[!INCLUDE [feature-preview](../../includes/feature-preview.md)]
+
 * Install dependencies (preview):
 
 ```bash

articles/machine-learning/concept-customer-managed-keys.md

Lines changed: 3 additions & 4 deletions

@@ -54,13 +54,12 @@ When you use a customer-managed key, there are two possible configurations:
 
 ## Service-side encryption of metadata
 
-A new architecture for the customer-managed key encryption workspace is available in preview, reducing cost compared to the current architecture and mitigating likelihood of Azure policy conflicts. In this configuration, encrypted data is stored service-side on Microsoft-managed resources instead of in your subscription.
+In this configuration, encrypted data is stored service-side on Microsoft-managed resources instead of in your subscription. Using service-side encryption reduces costs compared to the subscription-side encryption, and mitigates the likelihood of Azure policy conflicts.
 
-Data that previously was stored in Azure Cosmos DB in your subscription, is stored in multitenant Microsoft-managed resources with document-level encryption using your encryption key. Search indices that were previously stored in Azure AI Search in your subscription, are stored on Microsoft-managed resources that are provisioned dedicated for you per workspace. The cost of the Azure AI search instance is charged under your Azure Machine Learning workspace in Microsoft Cost Management.
+Data is stored in multitenant Microsoft-managed resources with document-level encryption using your encryption key. Search indices are stored on Microsoft-managed resources that are provisioned dedicated for you per workspace. The cost of the Azure AI search instance is charged under your Azure Machine Learning workspace in Microsoft Cost Management.
 
-Pipelines metadata that previously was stored in a storage account in a managed resource group, is now stored on the storage account in your subscription that is associated to the Azure Machine Learning workspace. Since this Azure Storage resource is managed separately in your subscription, you're responsible to configure encryption settings on it.
+Pipelines metadata is stored on the storage account in your subscription that is associated to the Azure Machine Learning workspace. Since this Azure Storage resource is managed separately in your subscription, you're responsible to configure encryption settings on it.
 
-To opt in for this preview, set the `enableServiceSideCMKEncryption` on a REST API or in your Bicep or Resource Manager template. You can also use Azure portal.
 
 :::image type="content" source="./media/concept-customer-managed-keys/cmk-service-side-encryption.png" alt-text="Screenshot of the encryption tab with the option for server side encryption selected." lightbox="./media/concept-customer-managed-keys/cmk-service-side-encryption.png":::
 
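Since the text above leaves you responsible for the encryption settings on the workspace-associated storage account in your subscription, here is a minimal, hedged sketch of pointing that account at a customer-managed key with the Azure CLI. All resource names are placeholders, and it assumes the key already exists and the storage account's identity can access it:

```azurecli
# Placeholder names throughout; assumes the storage account identity already has
# get/wrapKey/unwrapKey permissions on the key vault key.
az storage account update \
    --name mystorageaccount \
    --resource-group this-rg \
    --encryption-key-source Microsoft.Keyvault \
    --encryption-key-vault "https://my-key-vault.vault.azure.net" \
    --encryption-key-name my-cmk-key
```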

articles/machine-learning/how-to-manage-workspace.md

Lines changed: 1 addition & 1 deletion

@@ -238,7 +238,7 @@ ml_client.workspaces.begin_create(ws)
 
 # [Portal](#tab/azure-portal)
 
-1. Select **Encrypt data using a ustomer-managed key**, and then select **Click to select key**. This configuration creates Azure resources used to encrypt data in your Azure subscription. Alternatively, select **Use service-side encryption (preview)** to use service-side resources for encryption. For more information, see [Customer-managed keys](concept-customer-managed-keys.md).
+1. Select **Encrypt data using a ustomer-managed key**, and then select **Click to select key**. This configuration creates Azure resources used to encrypt data in your Azure subscription. Alternatively, select **Use service-side encryption** to use service-side resources for encryption. For more information, see [Customer-managed keys](concept-customer-managed-keys.md).
 
 :::image type="content" source="media/how-to-manage-workspace/advanced-workspace.png" alt-text="Screenshot of the customer-managed keys.":::
 
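As a hedged aside to the portal step above: if you script workspace creation instead of using the key picker, the value you supply corresponds to a Key Vault key identifier, which you can read with the Azure CLI. Vault and key names below are placeholders:

```azurecli
# Print the full key identifier (URI including version) for a key vault key.
# Placeholder vault and key names.
az keyvault key show \
    --vault-name my-key-vault \
    --name my-cmk-key \
    --query key.kid \
    --output tsv
```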

(Binary file changed: 8.98 KB)

articles/machine-learning/reference-managed-online-endpoints-vm-sku-list.md

Lines changed: 13 additions & 7 deletions

@@ -86,10 +86,10 @@ The following table shows the virtual machine (VM) stock keeping units (SKUs) th
 | standardNCADSA100v4Family | STANDARD_NC24ADS_A100_V4 | - | NvidiaGpu | 1 | 24 | Yes |
 | standardNCADSA100v4Family | STANDARD_NC48ADS_A100_V4 | - | NvidiaGpu | 2 | 48 | Yes |
 | standardNCADSA100v4Family | STANDARD_NC96ADS_A100_V4 | - | NvidiaGpu | 4 | 96 | Yes |
-| Standard NCASv3_T4 Family | STANDARD_NC4AS_T4_V3 | - | NvidiaGpu | 1 | 4 | - |
-| Standard NCASv3_T4 Family | STANDARD_NC8AS_T4_V3 | - | NvidiaGpu | 1 | 8 | - |
-| Standard NCASv3_T4 Family | STANDARD_NC16AS_T4_V3 | - | NvidiaGpu | 1 | 16 | - |
-| Standard NCASv3_T4 Family | STANDARD_NC64AS_T4_V3 | - | NvidiaGpu | 4 | 64 | - |
+| standard NCASv3_T4 Family | STANDARD_NC4AS_T4_V3 | - | NvidiaGpu | 1 | 4 | - |
+| standard NCASv3_T4 Family | STANDARD_NC8AS_T4_V3 | - | NvidiaGpu | 1 | 8 | - |
+| standard NCASv3_T4 Family | STANDARD_NC16AS_T4_V3 | - | NvidiaGpu | 1 | 16 | - |
+| standard NCASv3_T4 Family | STANDARD_NC64AS_T4_V3 | - | NvidiaGpu | 4 | 64 | - |
 | standardNCSv2Family | STANDARD_NC6S_V2 | - | NvidiaGpu | 1 | 6 | - |
 | standardNCSv2Family | STANDARD_NC12S_V2 | - | NvidiaGpu | 2 | 12 | - |
 | standardNCSv2Family | STANDARD_NC24S_V2 | - | NvidiaGpu | 4 | 24 | - |
@@ -99,10 +99,16 @@ The following table shows the virtual machine (VM) stock keeping units (SKUs) th
 | standardNCADSH100v5Family | STANDARD_NC40ADS_H100_V5 | - | NvidiaGpu | 1 | 40 | Yes |
 | standardNCADSH100v5Family | STANDARD_NC80ADIS_H100_V5 | - | NvidiaGpu | 2 | 80 | Yes |
 | standard NDAMSv4_A100Family | STANDARD_ND96AMSR_A100_V4 | Yes | NvidiaGpu | 8 | 96 | Yes |
-| Standard NDASv4_A100 Family | STANDARD_ND96ASR_V4 | Yes | NvidiaGpu | 8 | 96 | Yes |
+| standard NDASv4_A100 Family | STANDARD_ND96ASR_V4 | Yes | NvidiaGpu | 8 | 96 | Yes |
 | standardNDSv2Family | STANDARD_ND40RS_V2 | Yes | NvidiaGpu | 8 | 40 | Yes |
-| standardNDv5H100Family | STANDARD_ND96IS_H100_v5 | - | NvidiaGpu | 8 | 96 | Yes |
-| standardNDv5H100Family | STANDARD_ND96ISR_H100_v5 | Yes | NvidiaGpu | 8 | 96 | Yes |
+| standardNDv5H100Family | STANDARD_ND96IS_H100_V5 | - | NvidiaGpu | 8 | 96 | Yes |
+| standardNDv5H100Family | STANDARD_ND96ISR_H100_V5 | Yes | NvidiaGpu | 8 | 96 | Yes |
+| standardNVADSA10v5Family | STANDARD_NV6ADS_A10_V5 | - | NvidiaGpu | 1/6 | 6 | - |
+| standardNVADSA10v5Family | STANDARD_NV12ADS_A10_V5 | - | NvidiaGpu | 1/3 | 12 | - |
+| standardNVADSA10v5Family | STANDARD_NV18ADS_A10_V5 | - | NvidiaGpu | 1/2 | 18 | - |
+| standardNVADSA10v5Family | STANDARD_NV36ADS_A10_V5 | - | NvidiaGpu | 1 | 36 | - |
+| standardNVADSA10v5Family | STANDARD_NV36ADMS_A10_V5 | - | NvidiaGpu | 1 | 36 | - |
+| standardNVADSA10v5Family | STANDARD_NV72ADS_A10_V5 | - | NvidiaGpu | 2 | 72 | - |
 
 > [!CAUTION]
 > Small VM SKUs such as `Standard_DS1_v2` and `Standard_F2s_v2` may be too small for bigger models and may lead to container termination due to insufficient memory, not enough space on the disk, or probe failure as it takes too long to initiate the container. If you face [OutOfQuota errors](how-to-troubleshoot-online-endpoints.md?tabs=cli#error-outofquota) or [ResourceNotReady errors](how-to-troubleshoot-online-endpoints.md?tabs=cli#error-resourcenotready), try bigger VM SKUs. If you want to reduce the cost of deploying multiple models with managed online endpoint, see [Deployment for several local models](concept-online-deployment-model-specification.md#deployment-for-several-local-models).
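Related to the caution above, and as a hedged sketch rather than part of this diff: before moving to a bigger SKU, it can help to check which sizes and how much VM-family quota your workspace's region offers. This assumes the Azure ML CLI v2 extension (`az ml`) is installed and uses placeholder resource names; exact parameters may vary by extension version:

```azurecli
# List VM sizes supported for the workspace's region, then check usage against
# quota per VM family. Placeholder resource group and workspace names.
az ml compute list-sizes --resource-group this-rg --workspace-name my-workspace --output table
az ml compute list-usage --resource-group this-rg --workspace-name my-workspace --output table
```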
