Commit adece06
Merge pull request #293827 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents: 5acdd1e + 135e404

24 files changed: +74 −74 lines changed

articles/azure-compute-fleet/attribute-based-vm-selection.md

Lines changed: 1 addition & 1 deletion

@@ -236,7 +236,7 @@ The following list of VM attributes are supported and provide examples of config

 - Optional
 - The `acceleratorManufacturers` is specified as a list
-- Valid values are *AMD*, *Nvidia*, and *Xilinx*
+- Valid values are *AMD*, *NVIDIA*, and *Xilinx*
 - `acceleratorSupport` should be set to *Included* or *Required* to use this VM attribute
 - If `acceleratorSupport` is set to *Excluded*, this VM attribute can't be used
 - The default for `acceleratorManufacturers`, if not specified, is *ANY* of the valid values
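For orientation, a minimal sketch of how these attributes might be expressed in a Compute Fleet profile. The `vmAttributes` wrapper and the exact value casing are assumptions here, not confirmed by this diff; only the `acceleratorSupport` and `acceleratorManufacturers` property names and their allowed values come from the list above:

```json
{
  "vmAttributes": {
    "acceleratorSupport": "Required",
    "acceleratorManufacturers": [ "NVIDIA", "AMD" ]
  }
}
```

Per the rules above, `acceleratorSupport` is set to *Required* so that the manufacturer filter takes effect; omitting `acceleratorManufacturers` entirely would match any of the valid values.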

articles/bastion/kerberos-authentication-portal.md

Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@ ms.author: cherylmc

 # Configure Bastion for Kerberos authentication using the Azure portal

-This article shows you how to configure Azure Bastion to use Kerberos authentication. Kerberos authentication can be used with both the Basic and the Standard Bastion SKUs. For more information about Kerberos authentication, see the [Kerberos authentication overview](/windows-server/security/kerberos/kerberos-authentication-overview). For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
+This article shows you how to configure Azure Bastion to use Kerberos authentication. Kerberos authentication can be used with the Basic SKU tier or higher for Azure Bastion. For more information about Kerberos authentication, see the [Kerberos authentication overview](/windows-server/security/kerberos/kerberos-authentication-overview). For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)

 ## Considerations
1818

articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md

Lines changed: 3 additions & 3 deletions

@@ -34,21 +34,21 @@ When you exchange a reservation, you can change your term from one-year to three

 Not all reservations are eligible for exchange. For example, you can't exchange the following reservations:

-- Azure Databricks reserved capacity
+- Azure Databricks Pre-purchase plan
 - Azure OpenAI provisioned throughput
 - Synapse Analytics Pre-purchase plan
 - Red Hat plans
 - SUSE Linux plans
 - Microsoft Defender for Cloud Pre-Purchase Plan
 - Microsoft Sentinel Pre-Purchase Plan

-You can also refund reservations, but the sum total of all canceled reservation commitment in your billing scope (such as EA, Microsoft Customer Agreement, and Microsoft Partner Agreement) can't exceed USD 50,000 in a 12 month rolling window.
+You can also refund reservations, but the sum total of all canceled reservation commitments in your billing scope (such as EA, Microsoft Customer Agreement - Billing Profile, and Microsoft Partner Agreement - Customer) can't exceed USD 50,000 in a 12-month rolling window.

 *Microsoft is not currently charging early termination fees for reservation refunds. We might charge the fees for refunds made in the future. We currently don't have a date for enabling the fee.*

 The following reservations aren't eligible for refunds:

-- Azure Databricks reserved capacity
+- Azure Databricks Pre-purchase plan
 - Synapse Analytics Pre-purchase plan
 - Azure VMware solution by CloudSimple
 - Red Hat plans

articles/databox-online/TOC.yml

Lines changed: 1 addition & 1 deletion

@@ -404,7 +404,7 @@
       items:
         - name: IoT Edge on VM
           href: azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md
-        - name: Nvidia DeepStream module
+        - name: NVIDIA DeepStream module
           href: azure-stack-edge-deploy-nvidia-deepstream-module.md
         - name: Troubleshoot IoT Edge issues
           href: azure-stack-edge-gpu-troubleshoot-iot-edge.md

articles/databox-online/azure-stack-edge-deploy-nvidia-deepstream-module.md

Lines changed: 4 additions & 4 deletions

@@ -1,6 +1,6 @@
 ---
-title: Deploy the Nvidia DeepStream module on Ubuntu VM on Azure Stack Edge Pro with GPU | Microsoft Docs
-description: Learn how to deploy the Nvidia Deepstream module on an Ubuntu virtual machine that is running on your Azure Stack Edge Pro GPU device.
+title: Deploy the NVIDIA DeepStream module on Ubuntu VM on Azure Stack Edge Pro with GPU | Microsoft Docs
+description: Learn how to deploy the NVIDIA DeepStream module on an Ubuntu virtual machine that is running on your Azure Stack Edge Pro GPU device.
 services: databox
 author: alkohli

@@ -10,11 +10,11 @@ ms.date: 06/28/2022
 ms.author: alkohli
 ---

-# Deploy the Nvidia DeepStream module on Ubuntu VM on Azure Stack Edge Pro with GPU
+# Deploy the NVIDIA DeepStream module on Ubuntu VM on Azure Stack Edge Pro with GPU

 [!INCLUDE [applies-to-pro-gpu-and-pro-2-and-pro-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-pro-2-pro-r-sku.md)]

-This article walks you through deploying Nvidia’s DeepStream module on an Ubuntu VM running on your Azure Stack Edge device. The DeepStream module is supported only on GPU devices.
+This article walks you through deploying NVIDIA’s DeepStream module on an Ubuntu VM running on your Azure Stack Edge device. The DeepStream module is supported only on GPU devices.

 ## Prerequisites

articles/databox-online/azure-stack-edge-gpu-configure-gpu-modules.md

Lines changed: 4 additions & 4 deletions

@@ -18,7 +18,7 @@ ms.author: alkohli

 Your Azure Stack Edge Pro device contains one or more Graphics Processing Unit (GPU). GPUs are a popular choice for AI computations as they offer parallel processing capabilities and are faster at image rendering than Central Processing Units (CPUs). For more information on the GPU contained in your Azure Stack Edge Pro device, go to [Azure Stack Edge Pro device technical specifications](azure-stack-edge-gpu-technical-specifications-compliance.md).

-This article describes how to configure and run a module on the GPU on your Azure Stack Edge Pro device. In this article, you will use a publicly available container module **Digits** written for Nvidia T4 GPUs. This procedure can be used to configure any other modules published by Nvidia for these GPUs.
+This article describes how to configure and run a module on the GPU on your Azure Stack Edge Pro device. In this article, you will use a publicly available container module **Digits** written for NVIDIA T4 GPUs. This procedure can be used to configure any other modules published by NVIDIA for these GPUs.

 ## Prerequisites

@@ -81,7 +81,7 @@ To configure a module to use the GPU on your Azure Stack Edge Pro device to run

 10. In the **Add IoT Edge Module** tab:

-    1. Provide the **Image URI**. You will use the publicly available Nvidia module **Digits** here.
+    1. Provide the **Image URI**. You will use the publicly available NVIDIA module **Digits** here.

     2. Set **Restart policy** to **always**.

@@ -97,7 +97,7 @@ To configure a module to use the GPU on your Azure Stack Edge Pro device to run

     ![Configure module to use GPU 11](media/azure-stack-edge-gpu-configure-gpu-modules/configure-gpu-7.png)

-    For more information on environment variables that you can use with the Nvidia GPU, go to [nVidia container runtime](https://github.com/NVIDIA/nvidia-container-runtime#environment-variables-oci-spec).
+    For more information on environment variables that you can use with the NVIDIA GPU, go to [NVIDIA container runtime](https://github.com/NVIDIA/nvidia-container-runtime#environment-variables-oci-spec).

 > [!NOTE]
 > A module can use one, both or no GPUs.

@@ -125,4 +125,4 @@ To configure a module to use the GPU on your Azure Stack Edge Pro device to run

 ## Next steps

-- Learn more about [Environment variables that you can use with the Nvidia GPU](https://github.com/NVIDIA/nvidia-container-runtime#environment-variables-oci-spec).
+- Learn more about [Environment variables that you can use with the NVIDIA GPU](https://github.com/NVIDIA/nvidia-container-runtime#environment-variables-oci-spec).
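As a hedged illustration of those environment variables: in an IoT Edge deployment manifest, a module can be pinned to a single GPU with `NVIDIA_VISIBLE_DEVICES`, which is documented at the NVIDIA container runtime link above. The module name and image tag in this sketch are hypothetical:

```json
{
  "digits": {
    "type": "docker",
    "status": "running",
    "restartPolicy": "always",
    "settings": {
      "image": "nvcr.io/nvidia/digits:21.12"
    },
    "env": {
      "NVIDIA_VISIBLE_DEVICES": { "value": "0" }
    }
  }
}
```

Setting the value to `0,1` would expose both GPUs to the module, consistent with the note that a module can use one, both, or no GPUs.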

articles/databox-online/azure-stack-edge-gpu-connect-powershell-interface.md

Lines changed: 1 addition & 1 deletion

@@ -62,7 +62,7 @@ If the compute role is configured on your device, you can also get the GPU drive

 ## Enable Multi-Process Service (MPS)

-A Multi-Process Service (MPS) on Nvidia GPUs provides a mechanism where GPUs can be shared by multiple jobs, where each job is allocated some percentage of the GPU's resources. MPS is a preview feature on your Azure Stack Edge Pro GPU device. To enable MPS on your device, follow these steps:
+A Multi-Process Service (MPS) on NVIDIA GPUs provides a mechanism where GPUs can be shared by multiple jobs, where each job is allocated some percentage of the GPU's resources. MPS is a preview feature on your Azure Stack Edge Pro GPU device. To enable MPS on your device, follow these steps:

 [!INCLUDE [Enable MPS](../../includes/azure-stack-edge-gateway-enable-mps.md)]
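To make "some percentage of the GPU's resources" concrete: MPS clients honor the `CUDA_MPS_ACTIVE_THREAD_PERCENTAGE` environment variable, so once MPS is enabled on the device, a workload's environment might cap it at half the GPU. A sketch of passing that variable to an IoT Edge module, where the module name and the value are illustrative only and the per-module behavior on this device is an assumption:

```json
{
  "cuda-sample1": {
    "env": {
      "CUDA_MPS_ACTIVE_THREAD_PERCENTAGE": { "value": "50" }
    }
  }
}
```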

articles/databox-online/azure-stack-edge-gpu-deploy-compute-acceleration.md

Lines changed: 2 additions & 2 deletions

@@ -25,8 +25,8 @@ Compute acceleration is a term used specifically for Azure Stack Edge devices wh

 The article will discuss compute acceleration only using GPU or VPU for the following devices:

-- **Azure Stack Edge Pro GPU** - These devices can have 1 or 2 Nvidia T4 Tensor Core GPU. For more information, see [NVIDIA T4](https://www.nvidia.com/en-us/data-center/tesla-t4/).
-- **Azure Stack Edge Pro R** - These devices have 1 Nvidia T4 Tensor Core GPU. For more information, see [NVIDIA T4](https://www.nvidia.com/en-us/data-center/tesla-t4/).
+- **Azure Stack Edge Pro GPU** - These devices can have 1 or 2 NVIDIA T4 Tensor Core GPUs. For more information, see [NVIDIA T4](https://www.nvidia.com/en-us/data-center/tesla-t4/).
+- **Azure Stack Edge Pro R** - These devices have 1 NVIDIA T4 Tensor Core GPU. For more information, see [NVIDIA T4](https://www.nvidia.com/en-us/data-center/tesla-t4/).
 - **Azure Stack Edge Mini R** - These devices have 1 Intel Movidius Myriad X VPU. For more information, see [Intel Movidius Myriad X VPU](https://www.movidius.com/MyriadX).

articles/databox-online/azure-stack-edge-gpu-deploy-gpu-virtual-machine.md

Lines changed: 2 additions & 2 deletions

@@ -38,7 +38,7 @@ Follow these steps when deploying GPU VMs on your device via the Azure portal:

 1. To create GPU VMs, follow all the steps in [Deploy VM on your Azure Stack Edge using Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md), with these configuration requirements:

-    - On the **Basics** tab, select a [VM size from N-series, optimized for GPUs](azure-stack-edge-gpu-virtual-machine-sizes.md#n-series-gpu-optimized). Based on the GPU model on your device, Nvidia T4 or Nvidia A2, the dropdown list will display the corresponding supported GPU VM sizes.
+    - On the **Basics** tab, select a [VM size from N-series, optimized for GPUs](azure-stack-edge-gpu-virtual-machine-sizes.md#n-series-gpu-optimized). Based on the GPU model on your device, NVIDIA T4 or NVIDIA A2, the dropdown list will display the corresponding supported GPU VM sizes.

    ![Screenshot of Basics tab for "Add a virtual machine" in Azure Stack Edge. Size option, with a supported VM size for GPU VMs, is highlighted.](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/basics-vm-size-for-gpu.png)

@@ -97,7 +97,7 @@ After the VM is created, you can [deploy the GPU extension using the extension t

 ## Install GPU extension after deployment

-To take advantage of the GPU capabilities of Azure N-series VMs, Nvidia GPU drivers must be installed. From the Azure portal, you can install the GPU extension during or after VM deployment. If you're using templates, you'll install the GPU extension after you create the VM.
+To take advantage of the GPU capabilities of Azure N-series VMs, NVIDIA GPU drivers must be installed. From the Azure portal, you can install the GPU extension during or after VM deployment. If you're using templates, you'll install the GPU extension after you create the VM.

 ---
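If you're deploying by template, a sketch of what the GPU extension resource might look like follows. The publisher `Microsoft.HpcCompute` and type `NvidiaGpuDriverLinux` are the identifiers Azure uses for the N-series NVIDIA driver extension, but the VM name, API version, and handler version below are assumptions, not values from this diff:

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "apiVersion": "2021-07-01",
  "name": "myGpuVm/NvidiaGpuDriverLinux",
  "location": "[resourceGroup().location]",
  "properties": {
    "publisher": "Microsoft.HpcCompute",
    "type": "NvidiaGpuDriverLinux",
    "typeHandlerVersion": "1.3",
    "autoUpgradeMinorVersion": true,
    "settings": {}
  }
}
```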

articles/databox-online/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing.md

Lines changed: 8 additions & 8 deletions

@@ -21,7 +21,7 @@ Before you begin, make sure that:

 1. You've access to a client system with a [Supported operating system](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device). If using a Windows client, the system should run PowerShell 5.0 or later to access the device.

-1. Save the following deployment `json` on your local system. You'll use information from this file to run the IoT Edge deployment. This deployment is based on [Simple CUDA containers](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#running-simple-containers) that are publicly available from Nvidia.
+1. Save the following deployment `json` on your local system. You'll use information from this file to run the IoT Edge deployment. This deployment is based on [Simple CUDA containers](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#running-simple-containers) that are publicly available from NVIDIA.

    ```json
    {

@@ -118,7 +118,7 @@ The first step is to verify that your device is running required GPU driver and

    `Get-HcsGpuNvidiaSmi`

-1. In the Nvidia smi output, make a note of the GPU version and the CUDA version on your device. If you are running Azure Stack Edge 2102 software, this version would correspond to the following driver versions:
+1. In the NVIDIA smi output, make a note of the GPU version and the CUDA version on your device. If you are running Azure Stack Edge 2102 software, this version would correspond to the following driver versions:

    - GPU driver version: 460.32.03
    - CUDA version: 11.2

@@ -152,7 +152,7 @@ The first step is to verify that your device is running required GPU driver and
    [10.100.10.10]: PS>
    ```

-1. Keep this session open as you will use it to view the Nvidia smi output throughout the article.
+1. Keep this session open as you will use it to view the NVIDIA smi output throughout the article.

 ## Deploy without context-sharing

@@ -216,7 +216,7 @@ For detailed instructions, see [Connect to and manage a Kubernetes cluster via k

 ### Deploy modules via portal

-Deploy IoT Edge modules via the Azure portal. You'll deploy publicly available Nvidia CUDA sample modules that run n-body simulation.
+Deploy IoT Edge modules via the Azure portal. You'll deploy publicly available NVIDIA CUDA sample modules that run n-body simulation.

 1. Make sure that the IoT Edge service is running on your device.

@@ -316,7 +316,7 @@ Deploy IoT Edge modules via the Azure portal. You'll deploy publicly available N
    ```
    There are two pods, `cuda-sample1-97c494d7f-lnmns` and `cuda-sample2-d9f6c4688-2rld9`, running on your device.

-1. While both the containers are running the n-body simulation, view the GPU utilization from the Nvidia smi output. Go to the PowerShell interface of the device and run `Get-HcsGpuNvidiaSmi`.
+1. While both the containers are running the n-body simulation, view the GPU utilization from the NVIDIA smi output. Go to the PowerShell interface of the device and run `Get-HcsGpuNvidiaSmi`.

    Here is an example output when both the containers are running the n-body simulation:

@@ -349,7 +349,7 @@ Deploy IoT Edge modules via the Azure portal. You'll deploy publicly available N
    ```
    As you can see, there are two containers running with n-body simulation on GPU 0. You can also view their corresponding memory usage.

-1. Once the simulation has completed, the Nvidia smi output will show that there are no processes running on the device.
+1. Once the simulation has completed, the NVIDIA smi output will show that there are no processes running on the device.

    ```powershell
    [10.100.10.10]: PS>Get-HcsGpuNvidiaSmi

@@ -460,7 +460,7 @@ You can now deploy the n-body simulation on two CUDA containers when MPS is runn
    Created nvidia-mps.service
    [10.100.10.10]: PS>
    ```
-1. Get the Nvidia smi output from the PowerShell interface of the device. You can see the `nvidia-cuda-mps-server` process or the MPS service is running on the device.
+1. Get the NVIDIA smi output from the PowerShell interface of the device. You can see that the `nvidia-cuda-mps-server` process, that is, the MPS service, is running on the device.

    Here is an example output:

@@ -548,7 +548,7 @@ You can now deploy the n-body simulation on two CUDA containers when MPS is runn
    PS C:\WINDOWS\system32>
    ```

-1. Get the Nvidia smi output from the PowerShell interface of the device when both the containers are running the n-body simulation. Here is an example output. There are three processes, the `nvidia-cuda-mps-server` process (type C) corresponds to the MPS service and the `/tmp/nbody` processes (type M + C) correspond to the n-body workloads deployed by the modules.
+1. Get the NVIDIA smi output from the PowerShell interface of the device when both the containers are running the n-body simulation. Here is an example output. There are three processes: the `nvidia-cuda-mps-server` process (type C) corresponds to the MPS service, and the `/tmp/nbody` processes (type M + C) correspond to the n-body workloads deployed by the modules.

    ```powershell
    [10.100.10.10]: PS>Get-HcsGpuNvidiaSmi
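For orientation, a hedged fragment of what one of the two CUDA sample modules in the deployment `json` referenced above might look like. The image tag, entrypoint, and arguments are assumptions; the `IpcMode: host` setting and the `/tmp/nvidia-mps` bind mount reflect the usual requirements for a container to reach the MPS daemon, and `/tmp/nbody` matches the process name seen in the `Get-HcsGpuNvidiaSmi` output:

```json
{
  "cuda-sample1": {
    "type": "docker",
    "status": "running",
    "restartPolicy": "never",
    "settings": {
      "image": "nvidia/samples:nbody",
      "createOptions": "{\"Entrypoint\":[\"/tmp/nbody\"],\"Cmd\":[\"-benchmark\",\"-i=10000\"],\"HostConfig\":{\"IpcMode\":\"host\",\"Binds\":[\"/tmp/nvidia-mps:/tmp/nvidia-mps\"]}}"
    }
  }
}
```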
