
Commit 437c303
Manika comments and Acrolinx feedback
1 parent bcb3de1

File tree: 4 files changed, +28 −28 lines changed
Lines changed: 2 additions & 2 deletions
@@ -1,10 +1,10 @@
---
author: alkohli
ms.topic: include
-ms.date: 03/17/2025
+ms.date: 03/21/2025
ms.author: alkohli

---

> [!NOTE]
-> The recommended way to create and manage VMs on Azure Local is using [Azure Local VM management](../manage/azure-arc-vm-management-overview.md). However, since the functionality described in this article isn't yet provided by Azure Local VM management, you can use Windows Admin Center or PowerShell as described below. Note however that Azure Local VMs created this way aren't enabled by Azure Arc. They have limited manageability from the Azure Local VM management plane and fewer Azure Hybrid Benefits, such as no free use of Azure Update Manager.
+> The recommended way to create and manage VMs on Azure Local is using [Azure Local VM management](../manage/azure-arc-vm-management-overview.md). However, since the functionality described in this article isn't yet provided by Azure Local VM management, you can use Windows Admin Center or PowerShell as described below. Note that VMs created this way aren't enabled by Azure Arc; they have limited manageability from the Azure Arc control plane and don't receive some Azure Hybrid Benefits, such as use of Azure Update Manager at no extra cost. For more information, see [Compare management capabilities of VMs on Azure Local](../concepts/compare-vm-management-capabilities.md) and [Supported operations for Azure Local VMs](../manage/virtual-machine-operations.md).

azure-local/includes/hci-arc-vm.md

Lines changed: 2 additions & 2 deletions
@@ -1,12 +1,12 @@
---
author: alkohli
ms.topic: include
-ms.date: 03/17/2025
+ms.date: 03/21/2025
ms.author: alkohli

---

<!--- Link must remain site-relative to prevent build issues with incoming includes from the windowsserverdocs repo --->

> [!NOTE]
-> The recommended way to create and manage VMs on Azure Local is using [Azure Local VM management](/azure-stack/hci/manage/azure-arc-vm-management-overview). Use the mechanism described below to manage your VMs only if you need functionality that is not available in Azure Local VMs enabled by Azure Arc.
+> The recommended way to create and manage VMs on Azure Local is using [Azure Local VM management](/azure-stack/hci/manage/azure-arc-vm-management-overview). Use the method in this article to manage your VMs only if you need functionality that isn't available in Azure Local VM management.

azure-local/manage/attach-gpu-to-linux-vm.md

Lines changed: 22 additions & 22 deletions
@@ -6,7 +6,7 @@ ms.author: alkohli
ms.topic: how-to
ms.service: azure-local
ms.custom: linux-related-content
-ms.date: 03/17/2025
+ms.date: 03/21/2025
---

# Attaching a GPU to an Ubuntu Linux VM on Azure Local
@@ -24,7 +24,7 @@ This topic provides step-by-step instructions on how to install and configure an
3. Sign in using an account with administrative privileges to the machine with the NVIDIA GPU installed.
4. Open **Device Manager** and navigate to the *other devices* section. You should see a device listed as "3D Video Controller."
5. Right-click on "3D Video Controller" to bring up the **Properties** page. Click **Details**. From the dropdown under **Property**, select "Location paths."
-6. Note the value with string PCIRoot as highlighted in the screenshot below. Right-click on **Value** and copy/save it.
+6. Note the value with the string PCIRoot as highlighted in the screenshot. Right-click on **Value** and copy/save it.

:::image type="content" source="media/attach-gpu-to-linux-vm/pciroot.png" alt-text="Location Path Screenshot." lightbox="media/attach-gpu-to-linux-vm/pciroot.png":::

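To script the lookup in step 6 instead of using Device Manager, a minimal PowerShell sketch follows (it assumes the GPU is the only device reporting the "3D Video Controller" name; adjust the filter for your hardware):

```PowerShell
# Find the unclassified GPU device on the host
$gpu = Get-PnpDevice -FriendlyName "3D Video Controller"

# Read its location paths; the PCIROOT(...) entry is the value
# to save for the DDA cmdlets used later
(Get-PnpDeviceProperty -InstanceId $gpu.InstanceId -KeyName "DEVPKEY_Device_LocationPaths").Data |
    Where-Object { $_ -like "PCIROOT*" }
```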
@@ -41,9 +41,9 @@ This topic provides step-by-step instructions on how to install and configure an
1. Download [Ubuntu desktop release 18.04.02 ISO](http://old-releases.ubuntu.com/releases/18.04.2/).
2. Open **Hyper-V Manager** on the machine in your Azure local instance with the GPU installed.
> [!NOTE]
-> [DDA doesn't support failover](/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment). This is a VM limitation with DDA. Therefore, we recommend using **Hyper-V Manager** to deploy the VM on the machine instead of **Failover Cluster Manager**. Use of **Failover Cluster Manager** with DDA will fail with an error message indicating that the VM has a device that doesn't support high availability.
-3. Using the Ubuntu ISO downloaded in step 1, create a new VM using the **New Virtual Machine Wizard** in **Hyper-V Manager** to create an Ubuntu Generation 1 VM with 2GB of memory and a network card attached to it.
-4. In PowerShell, assign the Dismounted GPU device to the VM using the cmdlets below, replacing the *LocationPath* value with the value for your device.
+> [DDA doesn't support failover](/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment). This is a VM limitation with DDA. Therefore, we recommend using **Hyper-V Manager** to deploy the VM on the machine instead of **Failover Cluster Manager**. Use of **Failover Cluster Manager** with DDA fails with an error message indicating that the VM has a device that doesn't support high availability.
+3. Using the Ubuntu ISO downloaded in step 1, use the **New Virtual Machine Wizard** in **Hyper-V Manager** to create an Ubuntu Generation 1 VM with 2 GB of memory and a network card attached to it.
+4. In PowerShell, assign the dismounted GPU device to the VM using the following cmdlets, replacing the *LocationPath* value with the value for your device.
```PowerShell
# Confirm that there are no DDA devices assigned to the VM
Get-VMAssignableDevice -VMName Ubuntu
@@ -55,11 +55,11 @@ This topic provides step-by-step instructions on how to install and configure an
Get-VMAssignableDevice -VMName Ubuntu
```

-Successful assignment of the GPU to the VM will show the output below:
+Successful assignment of the GPU to the VM shows this output:

:::image type="content" source="media/attach-gpu-to-linux-vm/assign-gpu.png" alt-text="Assign GPU Screenshot." lightbox="media/attach-gpu-to-linux-vm/assign-gpu.png":::

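For context, step 4 assumes the GPU was already dismounted from the host. A minimal sketch of that full sequence follows (the $locationPath value is hypothetical; substitute the PCIROOT path you saved from Device Manager):

```PowerShell
# Hypothetical location path; replace with the value saved earlier
$locationPath = "PCIROOT(0)#PCI(0300)#PCI(0000)"

# Disable the device on the host, then release it from the host for DDA
Disable-PnpDevice -InstanceId (Get-PnpDevice -FriendlyName "3D Video Controller").InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath

# Assign the dismounted GPU to the VM
Add-VMAssignableDevice -LocationPath $locationPath -VMName Ubuntu
```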
-Configure additional values following GPU documentation [here](/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda):
+Configure other values by following the GPU documentation [here](/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda):

```PowerShell
# Enable Write-Combining on the CPU
@@ -69,15 +69,15 @@ This topic provides step-by-step instructions on how to install and configure an
Set-VM -LowMemoryMappedIoSpace 3Gb -VMName VMName

# Configure greater than 32 bit MMIO space
Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName VMName
```

> [!NOTE]
> The value 33280Mb should suffice for most GPUs, but should be replaced with a value greater than your GPU memory.

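As a worked example of that note (hypothetical numbers): for a GPU with 16 GB of memory, reserving 32 GB of high MMIO space leaves comfortable headroom:

```PowerShell
# Example sizing only: 32768Mb (32 GB) is greater than a 16 GB GPU's memory
Set-VM -HighMemoryMappedIoSpace 32768Mb -VMName VMName
```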
5. Using Hyper-V Manager, connect to the VM and start the Ubuntu OS install. Choose the defaults to install the Ubuntu OS on the VM.

-6. After the installation is complete, use **Hyper-V Manager** to shut down the VM and configure the **Automatic Stop Action** for the VM to shut down the guest operating system as in the screenshot below:
+6. After the installation is complete, use **Hyper-V Manager** to shut down the VM and configure the **Automatic Stop Action** for the VM to shut down the guest operating system, as shown in the screenshot:

:::image type="content" source="media/attach-gpu-to-linux-vm/guest-shutdown.png" alt-text="Guest OS Shutdown Screenshot." lightbox="media/attach-gpu-to-linux-vm/guest-shutdown.png":::

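If you prefer to script step 6, the same setting can be applied with PowerShell (a sketch assuming the VM is named Ubuntu):

```PowerShell
# Equivalent of the Automatic Stop Action setting in Hyper-V Manager:
# shut down the guest operating system when the host stops the VM
Set-VM -Name Ubuntu -AutomaticStopAction ShutDown
```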
@@ -94,13 +94,13 @@ This topic provides step-by-step instructions on how to install and configure an
10. Upon login through the SSH client, issue the command **lspci** and validate that the NVIDIA GPU is listed as "3D controller."

> [!IMPORTANT]
-> If The NVIDIA GPU is not seen as "3D controller," please do not proceed further. Please ensure that the steps above are followed before proceeding.
+> If the NVIDIA GPU isn't seen as "3D controller," don't proceed further. Ensure that the preceding steps are followed before proceeding.

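A quick way to run the check in step 10 noninteractively (a sketch; the exact device string varies by GPU model):

```shell
# Expect a "3D controller" entry for the NVIDIA GPU
lspci | grep -i nvidia
```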
11. Within the VM, search for and open **Software & Updates**. Navigate to **Additional Drivers**, then choose the latest NVIDIA GPU drivers listed. Complete the driver install by clicking the **Apply Changes** button.

:::image type="content" source="media/attach-gpu-to-linux-vm/driver-install.png" alt-text="Driver Install Screenshot." lightbox="media/attach-gpu-to-linux-vm/driver-install.png":::

-12. Restart the Ubuntu VM after the driver installation completes. Once the VM starts, connect through the SSH client and issue the command **nvidia-smi** to verify that the NVIDIA GPU driver installation completed successfully. The output should be similar to the screenshot below:
+12. Restart the Ubuntu VM after the driver installation completes. Once the VM starts, connect through the SSH client and issue the command **nvidia-smi** to verify that the NVIDIA GPU driver installation completed successfully. The output should be similar to the screenshot:

:::image type="content" source="media/attach-gpu-to-linux-vm/nvidia-smi.png" alt-text="Screenshot that shows the output from the nvidia-smi command." lightbox="media/attach-gpu-to-linux-vm/nvidia-smi.png":::

@@ -160,7 +160,7 @@ This topic provides step-by-step instructions on how to install and configure an

## Configure Azure IoT Edge

-To prepare for this configuration, please review the FAQ contained in the [NVIDIA-Deepstream-Azure-IoT-Edge-on-a-NVIDIA-Jetson-Nano](https://github.com/Azure-Samples/NVIDIA-Deepstream-Azure-IoT-Edge-on-a-NVIDIA-Jetson-Nano) GitHub repo, which explains the need to install Docker instead of Moby. After reviewing, proceed to the steps below.
+To prepare for this configuration, review the FAQ in the [NVIDIA-Deepstream-Azure-IoT-Edge-on-a-NVIDIA-Jetson-Nano](https://github.com/Azure-Samples/NVIDIA-Deepstream-Azure-IoT-Edge-on-a-NVIDIA-Jetson-Nano) GitHub repo, which explains the need to install Docker instead of Moby. After reviewing, proceed with the following steps.

### Install NVIDIA Docker

@@ -196,7 +196,7 @@ To prepare for this configuration, please review the FAQ contained in the [NVIDI
sudo docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi
```

-Successful installation will look like the output in the screenshot below:
+Successful installation looks like the output in the screenshot:

:::image type="content" source="media/attach-gpu-to-linux-vm/docker.png" alt-text="Successful Docker Install Screenshot." lightbox="media/attach-gpu-to-linux-vm/docker.png":::

@@ -263,13 +263,13 @@ To prepare for this configuration, please review the FAQ contained in the [NVIDI
wget -O cars-streams.tar.gz --no-check-certificate https://onedrive.live.com/download?cid=0C0A4A69A0CDCB4C&resid=0C0A4A69A0CDCB4C%21588371&authkey=AAavgrxG95v9gu0
```

-Un-compress the video files:
+Uncompress the video files:

```shell
tar -xzvf cars-streams.tar.gz
```

-The contents of the directory /var/deepstream/custom_streams should be similar to the screenshot below:
+The contents of the directory /var/deepstream/custom_streams should be similar to the screenshot:

:::image type="content" source="media/attach-gpu-to-linux-vm/custom-streams.png" alt-text="Custom Streams Screenshot." lightbox="media/attach-gpu-to-linux-vm/custom-streams.png":::

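To verify the same contents from the shell rather than the screenshot (assuming the directory above):

```shell
# List the extracted video streams
ls -l /var/deepstream/custom_streams
```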
@@ -332,7 +332,7 @@ To prepare for this configuration, please review the FAQ contained in the [NVIDI
codec=1
sync=0
bitrate=4000000
-# set below properties in case of RTSPStreaming
+# set the following properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

@@ -385,7 +385,7 @@ To prepare for this configuration, please review the FAQ contained in the [NVIDI
live-source=0
batch-size=4
##time out in usec, to wait after the first buffer is available
-##to push the batch even if the complete batch is not formed
+##to push the batch even if the complete batch isn't formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
@@ -432,7 +432,7 @@ To prepare for this configuration, please review the FAQ contained in the [NVIDI

:::image type="content" source="media/attach-gpu-to-linux-vm/iot-edge.png" alt-text="Automatic Device Management Screenshot." lightbox="media/attach-gpu-to-linux-vm/iot-edge.png":::

-13. In the right-hand pane, select the device identity whose device connection string was used above. Click on set modules:
+13. In the right-hand pane, select the device identity whose device connection string was used earlier. Click **Set modules**:

:::image type="content" source="media/attach-gpu-to-linux-vm/set-modules.png" alt-text="Set Modules Screenshot." lightbox="media/attach-gpu-to-linux-vm/set-modules.png":::

@@ -464,7 +464,7 @@ To prepare for this configuration, please review the FAQ contained in the [NVIDI

:::image type="content" source="media/attach-gpu-to-linux-vm/container-create-options.png" alt-text="Container Create Options Screenshot." lightbox="media/attach-gpu-to-linux-vm/container-create-options.png":::

-Replace the configuration above with the configuration below:
+Replace the configuration above with the following configuration:

```shell
{
@@ -498,7 +498,7 @@ To prepare for this configuration, please review the FAQ contained in the [NVIDI
}
```

-18. Click **Review and Create**, and on the next page click **Create**. You should now see the three modules listed below for your IoT Edge device in the Azure portal:
+18. Click **Review and Create**, and on the next page click **Create**. You should now see the three modules listed for your IoT Edge device in the Azure portal:

:::image type="content" source="media/attach-gpu-to-linux-vm/edge-hub-connections.png" alt-text="Modules and IoT Edge Hub Connections Screenshot." lightbox="media/attach-gpu-to-linux-vm/edge-hub-connections.png":::

@@ -517,9 +517,9 @@ To prepare for this configuration, please review the FAQ contained in the [NVIDI
:::image type="content" source="media/attach-gpu-to-linux-vm/verify-modules-nvidia-smi.png" alt-text="nvidia-smi screenshot." lightbox="media/attach-gpu-to-linux-vm/verify-modules-nvidia-smi.png":::

> [!NOTE]
-> It will take a few minutes for the NvidiaDeepstream Container to be downloaded. You can validate the download using the command "journalctl -u iotedge --no-pager --no-full" to look at the iotedge daemon logs.
+> It takes a few minutes for the NvidiaDeepstream container to be downloaded. You can validate the download using the command `journalctl -u iotedge --no-pager --no-full` to look at the iotedge daemon logs.

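The same check as a copyable snippet (this assumes the iotedge daemon from IoT Edge 1.1; later releases name the service aziot-edged):

```shell
# Follow the iotedge daemon logs while the container image downloads
journalctl -u iotedge --no-pager --no-full
```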
-20. Confirm that the NvdiaDeepStreem Container is operational. The command output in the screenshots below indicates success.
+20. Confirm that the NvdiaDeepStreem container is operational. The command output in the screenshots indicates success.

```shell
sudo iotedge list

azure-local/manage/gpu-manage-via-device.md

Lines changed: 2 additions & 2 deletions
ms.author: alkohli
ms.topic: how-to
ms.service: azure-local
-ms.date: 03/17/2025
+ms.date: 03/21/2025
---

# Manage GPUs via Discrete Device Assignment (preview)
@@ -28,7 +28,7 @@ Before you begin, satisfy the following prerequisites:

## Attach a GPU during Azure Local VM creation

-Follow the steps outlined in [Create virtual machines on Azure Local](create-arc-virtual-machines.md?tabs=azurecli) and utilize the additional hardware profile details to add GPU to your create process.
+Follow the steps outlined in [Create Azure Local VMs enabled by Azure Arc](create-arc-virtual-machines.md?tabs=azurecli) and use the additional hardware profile details to add a GPU to your create process.

```azurecli
az stack-hci-vm create --name $vmName --resource-group $resource_group --admin-username $userName --admin-password $password --computer-name $computerName --image $imageName --location $location --authentication-type all --nics $nicName --custom-location $customLocationID --hardware-profile memory-mb="8192" processors="4" --storage-path-id $storagePathId --gpus GpuDDA
```

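Before running the create command, you can optionally confirm on the host that a GPU has been dismounted and is available for DDA (a hypothetical spot check, not part of the documented flow):

```PowerShell
# List devices currently dismounted from the host and assignable via DDA
Get-VMHostAssignableDevice
```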