
Commit 8433d00

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into fwfreshness1
2 parents: 06bc29a + 61f592b

37 files changed: +113 -302 lines

articles/container-apps/gpu-image-generation.md

Lines changed: 2 additions & 2 deletions
@@ -1,5 +1,5 @@
 ---
-title: "Tutorial: Generate images using serverless GPUs in Azure Container Apps (preview)"
+title: "Tutorial: Generate images using serverless GPUs in Azure Container Apps"
 description: Learn how to generate images powered by serverless GPUs in Azure Container Apps.
 services: container-apps
 author: craigshoemaker
@@ -11,7 +11,7 @@ ms.date: 11/06/2024
 ms.author: cshoe
 ---
 
-# Tutorial: Generate images using serverless GPUs in Azure Container Apps (preview)
+# Tutorial: Generate images using serverless GPUs in Azure Container Apps
 
 In this article, you learn how to create a container app that uses [serverless GPUs](gpu-serverless-overview.md) to power an AI application.
 
articles/container-apps/gpu-serverless-overview.md

Lines changed: 3 additions & 3 deletions
@@ -1,17 +1,17 @@
 ---
-title: Using serverless GPUs in Azure Container Apps (preview)
+title: Using serverless GPUs in Azure Container Apps
 description: Learn how to use GPUs with apps and jobs in Azure Container Apps.
 services: container-apps
 author: craigshoemaker
 ms.service: azure-container-apps
 ms.custom:
 - ignite-2024
 ms.topic: how-to
-ms.date: 11/06/2024
+ms.date: 03/17/2025
 ms.author: cshoe
 ---
 
-# Using serverless GPUs in Azure Container Apps (preview)
+# Using serverless GPUs in Azure Container Apps
 
 Azure Container Apps provides access to GPUs on-demand without you having to manage the underlying infrastructure. As a serverless feature, you only pay for GPUs in use. When enabled, the number of GPUs used for your app rises and falls to meet the load demands of your application. Serverless GPUs enable you to seamlessly run your workloads with automatic scaling, optimized cold start, per-second billing with scale down to zero when not in use, and reduced operational overhead.
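
As orientation (not from the commit itself): serverless GPUs are exposed through Consumption GPU workload profiles, so a minimal sketch of enabling one on an existing environment looks roughly like the following. The resource names are placeholders, and the exact `az containerapp env workload-profile add` flags are worth confirming with `--help`.

```azurecli
# Add a scale-to-zero serverless GPU (T4) profile to an existing
# Container Apps environment; all names here are placeholders.
az containerapp env workload-profile add \
  --resource-group my-resource-group \
  --name my-environment \
  --workload-profile-name gpu-t4 \
  --workload-profile-type Consumption-GPU-NC8as-T4
```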

articles/container-apps/workload-profiles-overview.md

Lines changed: 3 additions & 3 deletions
@@ -22,7 +22,7 @@ Profiles are configured to fit the different needs of your applications.
 | Profile type | Description | Potential use |
 |--|--|--|
 | Consumption | Automatically added to any new environment. | Apps that don't require specific hardware requirements |
-| Consumption GPU (preview) | Scale-to-zero serverless GPUs are available in West US 3, Australia East, and Sweden Central regions. | Apps that require GPU |
+| Consumption GPU | Scale-to-zero serverless GPUs are available in West US 3, Australia East, and Sweden Central regions. | Apps that require GPU |
 | Dedicated (General purpose) | Balance of memory and compute resources | Apps that require larger amounts of CPU and/or memory |
 | Dedicated (Memory optimized) | Increased memory resources | Apps that need access to large in-memory data, in-memory machine learning models, or other high memory requirements |
 | Dedicated (GPU enabled) (preview) | GPU enabled with increased memory and compute resources available in West US 3 and North Europe regions. | Apps that require GPU |
@@ -55,8 +55,8 @@ There are different types and sizes of workload profiles available by region. By
 | Display name | Name | vCPU | Memory (GiB) | GPU | Category | Allocation | Quota name |
 |---|---|---|---|---|---|---|---|
 | Consumption | Consumption | 4 | 8 | - | Consumption | per replica | Managed Environment Consumption Cores |
-| Consumption-GPU-NC24-A100 (preview) | Consumption-GPU-NC24-A100 | 24 | 220 | 1 | Consumption GPU | per replica | Subscription Consumption NCA 100 Gpus |
-| Consumption-GPU-NC8as-T4 (preview) | Consumption-GPU-NC8as-T4 | 8 | 56 | 1 | Consumption GPU | per replica | Subscription Consumption T 4 Gpus |
+| Consumption-GPU-NC24-A100 | Consumption-GPU-NC24-A100 | 24 | 220 | 1 | Consumption GPU | per replica | Subscription Consumption NCA 100 Gpus |
+| Consumption-GPU-NC8as-T4 | Consumption-GPU-NC8as-T4 | 8 | 56 | 1 | Consumption GPU | per replica | Subscription Consumption T 4 Gpus |
 | Dedicated-D4 | D4 | 4 | 16 | - | General purpose | per node | Managed Environment General Purpose Cores |
 | Dedicated-D8 | D8 | 8 | 32 | - | General purpose | per node | Managed Environment General Purpose Cores |
 | Dedicated-D16 | D16 | 16 | 64 | - | General purpose | per node | Managed Environment General Purpose Cores |
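
To see which of the profile types above (including the Consumption GPU ones) are actually offered in a given region, a sketch along these lines should work; the region value is a placeholder, and the syntax is worth verifying with `az containerapp env workload-profile list-supported --help`.

```azurecli
# List the workload profile types supported in a region.
az containerapp env workload-profile list-supported \
  --location westus3 \
  --output table
```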

articles/data-factory/connector-sap-hana.md

Lines changed: 2 additions & 2 deletions
@@ -259,7 +259,7 @@ You are suggested to enable parallel copy with data partitioning especially when
 | Scenario | Suggested settings |
 | -------------------------------------------------- | ------------------------------------------------------------ |
 | Full load from large table. | **Partition option**: Physical partitions of table. <br><br/>During execution, the service automatically detects the physical partition type of the specified SAP HANA table, and chooses the corresponding partition strategy:<br>- **Range Partitioning**: Get the partition column and partition ranges defined for the table, then copy the data by range. <br>- **Hash Partitioning**: Use hash partition key as partition column, then partition and copy the data based on ranges calculated by the service. <br>- **Round-Robin Partitioning** or **No Partition**: Use primary key as partition column, then partition and copy the data based on ranges calculated by the service. |
-| Load large amount of data by using a custom query. | **Partition option**: Dynamic range partition.<br>**Query**: `SELECT * FROM <TABLENAME> WHERE ?AdfHanaDynamicRangePartitionCondition AND <your_additional_where_clause>`.<br>**Partition column**: Specify the column used to apply dynamic range partition. <br><br>During execution, the service first calculates the value ranges of the specified partition column by evenly distributing the rows into a number of buckets according to the number of distinct partition column values and the parallel copy setting, then replaces `?AdfHanaDynamicRangePartitionCondition` with a filter on the partition column value range for each partition, and sends it to SAP HANA.<br><br>If you want to use multiple columns as the partition column, you can concatenate the values of each column as one column in the query and specify it as the partition column, like `SELECT * FROM (SELECT *, CONCAT(<KeyColumn1>, <KeyColumn2>) AS PARTITIONCOLUMN FROM <TABLENAME>) WHERE ?AdfHanaDynamicRangePartitionCondition`. |
+| Load large amount of data by using a custom query. | **Partition option**: Dynamic range partition.<br>**Query**: `SELECT * FROM <TABLENAME> WHERE (?AdfHanaDynamicRangePartitionCondition) AND <your_additional_where_clause>`.<br>**Partition column**: Specify the column used to apply dynamic range partition. <br><br>During execution, the service first calculates the value ranges of the specified partition column by evenly distributing the rows into a number of buckets according to the number of distinct partition column values and the parallel copy setting, then replaces `?AdfHanaDynamicRangePartitionCondition` with a filter on the partition column value range for each partition, and sends it to SAP HANA.<br><br>If you want to use multiple columns as the partition column, you can concatenate the values of each column as one column in the query and specify it as the partition column, like `SELECT * FROM (SELECT *, CONCAT(<KeyColumn1>, <KeyColumn2>) AS PARTITIONCOLUMN FROM <TABLENAME>) WHERE (?AdfHanaDynamicRangePartitionCondition)`. |
 
 **Example: query with physical partitions of a table**
 
@@ -275,7 +275,7 @@ You are suggested to enable parallel copy with data partitioning especially when
 ```json
 "source": {
     "type": "SapHanaSource",
-    "query": "SELECT * FROM <TABLENAME> WHERE ?AdfHanaDynamicRangePartitionCondition AND <your_additional_where_clause>",
+    "query": "SELECT * FROM <TABLENAME> WHERE (?AdfHanaDynamicRangePartitionCondition) AND <your_additional_where_clause>",
     "partitionOption": "SapHanaDynamicRange",
     "partitionSettings": {
         "partitionColumnName": "<Partition_column_name>"

articles/dev-box/how-to-configure-customization-imaging.md

Lines changed: 2 additions & 0 deletions
@@ -119,6 +119,8 @@ In order to generate an image, you need to assign the DevCenter service the requ
 
 1. Under that resource group, navigate to Access Control, and give the **Windows 365** and **Project Fidalgo** applications the roles **Storage Account Contributor**, **Storage Blob Data Contributor**, and **Reader**.
 
+During the process of building an image, Dev Box creates a temporary storage account in your subscription to store a snapshot, from which Dev Box generates an image. This storage account does not allow anonymous blob access and can only be accessed by identities with the Storage Blob Reader role. The storage account must be accessible from public networks so that the Dev Box service can export your snapshot to it. If you have Azure policies that block the creation of storage accounts with public network access, create an exception for the subscription your DevCenter project is in.
+
 ### Build the image
 
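
Not part of the commit, but as a sketch of that last step: one way to carve out the subscription is an Azure Policy exemption on the blocking assignment. `az policy exemption create` is the relevant command; the assignment ID below is a placeholder, and the exemption category should be validated against your tenant's policy setup.

```azurecli
# Exempt the DevCenter project's subscription from the policy assignment
# that denies storage accounts with public network access.
# <subscription-id> and <assignment-name> are placeholders.
az policy exemption create \
  --name devbox-imaging-storage \
  --policy-assignment "/subscriptions/<subscription-id>/providers/Microsoft.Authorization/policyAssignments/<assignment-name>" \
  --exemption-category Waiver \
  --description "Allow Dev Box imaging to create its temporary storage account"
```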

articles/digital-twins/concepts-3d-scenes-studio.md

Lines changed: 1 addition & 1 deletion
@@ -49,7 +49,7 @@ To share your scenes with someone else, the recipient will need at least *Reader
 ## Set up
 
 To work with 3D Scenes Studio, you'll need the following resources:
-* An [Azure Digital Twins instance](how-to-set-up-instance-cli.md)
+* An [Azure Digital Twins instance](how-to-set-up-instance-portal.md)
   * You'll need *Azure Digital Twins Data Owner* or *Azure Digital Twins Data Reader* access to the instance
   * The instance should be populated with [models](concepts-models.md) and [twins](concepts-twins-graph.md)

articles/digital-twins/concepts-cli.md

Lines changed: 2 additions & 0 deletions
@@ -30,6 +30,8 @@ Some of the actions you can do using the command set include:
 
 The command set is called `az dt`, and is part of the [Azure IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension). You can view the full list of commands and their usage as part of the reference documentation for the `az iot` command set: [az dt command reference](/cli/azure/dt).
 
+[!INCLUDE [digital-twins-cli-issue](../../includes/digital-twins-cli-issue.md)]
+
 ## Uses (deploy and validate)
 
 Apart from generally managing your instance, the CLI is also a useful tool for deployment and validation.
articles/digital-twins/how-to-create-app-registration.md

Lines changed: 4 additions & 0 deletions
@@ -19,6 +19,8 @@ ms.service: azure-digital-twins
 
 This article describes how to create a [Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md) *app registration* that can access Azure Digital Twins. This article includes steps for the [Azure portal](https://portal.azure.com) and the [Azure CLI](/cli/azure/what-is-azure-cli).
 
+[!INCLUDE [digital-twins-cli-issue](../../includes/digital-twins-cli-issue.md)]
+
 When working with Azure Digital Twins, it's common to interact with your instance through client applications. Those applications need to authenticate with Azure Digital Twins, and some of the [authentication mechanisms](how-to-authenticate-client.md) that apps can use involve an app registration.
 
 The app registration isn't required for all authentication scenarios. However, if you're using an authentication strategy or code sample that does require an app registration, this article shows you how to set one up and grant it permissions to the Azure Digital Twins APIs. It also covers how to collect important values that you need to use the app registration when authenticating.
@@ -240,6 +242,8 @@ The app registration should show up in the list along with the role you assigned
 
 # [CLI](#tab/cli)
 
+[!INCLUDE [digital-twins-cli-issue](../../includes/digital-twins-cli-issue.md)]
+
 Use the [az dt role-assignment create](/cli/azure/dt/role-assignment#az-dt-role-assignment-create) command to assign the role (you must have [sufficient permissions](how-to-set-up-instance-cli.md#prerequisites-permission-requirements) in the Azure subscription). The command requires you to pass in the name of the role you want to assign, the name of your Azure Digital Twins instance, and either the name or the object ID of the app registration.
 
 ```azurecli-interactive
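
The command itself falls outside the diff context; a sketch of such a role assignment under current `az dt` syntax, with placeholder values:

```azurecli
# Grant the app registration data-plane access to the instance.
az dt role-assignment create \
  --dt-name my-instance \
  --assignee "<app-registration-name-or-object-id>" \
  --role "Azure Digital Twins Data Owner"
```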

articles/digital-twins/how-to-create-data-history-connection.md

Lines changed: 2 additions & 0 deletions
@@ -171,6 +171,8 @@ This command also creates three tables in your Azure Data Explorer database to s
 
 # [CLI](#tab/cli)
 
+[!INCLUDE [digital-twins-cli-issue](../../includes/digital-twins-cli-issue.md)]
+
 Use the command in this section to create a data history connection and the tables in Azure Data Explorer. The command always creates a table for historized twin property updates, and it includes parameters to create the tables for relationship lifecycle and twin lifecycle events.
 
 >[!NOTE]
articles/digital-twins/how-to-create-endpoints.md

Lines changed: 4 additions & 0 deletions
@@ -20,6 +20,8 @@ This article explains how to create an *endpoint* for Azure Digital Twin events
 
 Routing [event notifications](concepts-event-notifications.md) from Azure Digital Twins to downstream services or connected compute resources is a two-step process: create endpoints, then create event routes that send data to those endpoints. This article covers the first step, setting up endpoints that can receive the events. Later, you can create [event routes](how-to-create-routes.md) that specify which events generated by Azure Digital Twins are delivered to which endpoints.
 
+[!INCLUDE [digital-twins-cli-issue](../../includes/digital-twins-cli-issue.md)]
+
 ## Prerequisites
 
 * An Azure account, which you can [set up for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
@@ -86,6 +88,8 @@ Now the Event Grid topic, event hub, or Service Bus topic is available as an end
 
 # [CLI](#tab/cli)
 
+[!INCLUDE [digital-twins-cli-issue](../../includes/digital-twins-cli-issue.md)]
+
 The following examples show how to create endpoints using the [az dt endpoint create](/cli/azure/dt/endpoint/create) command for the [Azure Digital Twins CLI](/cli/azure/dt). Replace the placeholders in the commands with the details of your own resources.
 
 To create an Event Grid endpoint:
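
The command is cut off by the diff context; a sketch of an Event Grid endpoint creation under current `az dt endpoint create` syntax, with all resource names as placeholders:

```azurecli
# Register an existing Event Grid topic as an endpoint on the instance.
az dt endpoint create eventgrid \
  --dt-name my-instance \
  --endpoint-name my-eventgrid-endpoint \
  --eventgrid-resource-group my-resource-group \
  --eventgrid-topic my-eventgrid-topic
```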
