- > The recommended way to create and manage VMs on Azure Local is using [Azure Local VM management](../manage/azure-arc-vm-management-overview.md). However, since the functionality described in this article isn't yet provided by Azure Local VM management, you can use Windows Admin Center or PowerShell as described below. Note however that Azure Local VMs created this way aren't enabled by Azure Arc. They have limited manageability from the Azure Local VM management plane and fewer Azure Hybrid Benefits, such as no free use of Azure Update Manager.
+ > The recommended way to create and manage VMs on Azure Local is using [Azure Local VM management](../manage/azure-arc-vm-management-overview.md). However, since the functionality described in this article isn't yet provided by Azure Local VM management, you can use Windows Admin Center or PowerShell as described below. Note that VMs created this way aren't enabled by Azure Arc, have limited manageability from the Azure Arc control plane, and have fewer Azure Hybrid Benefits, including usage of Azure Update Manager at no extra cost. For more information, see [Compare management capabilities of VMs on Azure Local](../concepts/compare-vm-management-capabilities.md) and [Supported operations for Azure Local VMs](../manage/virtual-machine-operations.md).
<!--- Link must remain site-relative to prevent build issues with incoming includes from the windowsserverdocs repo --->

> [!NOTE]
- > The recommended way to create and manage VMs on Azure Local is using [Azure Local VM management](/azure-stack/hci/manage/azure-arc-vm-management-overview). Use the mechanism described below to manage your VMs only if you need functionality that is not available in Azure Local VMs enabled by Azure Arc.
+ > The recommended way to create and manage VMs on Azure Local is using [Azure Local VM management](/azure-stack/hci/manage/azure-arc-vm-management-overview). Use the method in this article to manage your VMs only if you need functionality that isn't available in Azure Local VM management.
azure-local/manage/attach-gpu-to-linux-vm.md (22 additions & 22 deletions)
@@ -6,7 +6,7 @@ ms.author: alkohli
ms.topic: how-to
ms.service: azure-local
ms.custom: linux-related-content
- ms.date: 03/17/2025
+ ms.date: 03/21/2025
---

# Attaching a GPU to an Ubuntu Linux VM on Azure Local
@@ -24,7 +24,7 @@ This topic provides step-by-step instructions on how to install and configure an
3. Sign in using an account with administrative privileges to the machine with the NVIDIA GPU installed.
4. Open **Device Manager** and navigate to the *other devices* section. You should see a device listed as "3D Video Controller."
5. Right-click on "3D Video Controller" to bring up the **Properties** page. Click **Details**. From the dropdown under **Property**, select "Location paths."
- 6. Note the value with string PCIRoot as highlighted in the screenshot below. Right-click on **Value** and copy/save it.
+ 6. Note the value with the string PCIRoot as highlighted in the screenshot. Right-click on **Value** and copy/save it.
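If you'd rather read the location path without the Device Manager UI, the following is a minimal PowerShell sketch; it assumes the GPU still reports the generic friendly name "3D Video Controller":

```PowerShell
# Sketch: read the PCIRoot location path for the GPU from PowerShell.
# Assumes the device still shows the generic "3D Video Controller" friendly name.
$gpu = Get-PnpDevice -FriendlyName "3D Video Controller"
($gpu | Get-PnpDeviceProperty -KeyName "DEVPKEY_Device_LocationPaths").Data
```

The PCIROOT entry in `Data` is the value to save for the DDA steps later.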
2. Open **Hyper-V Manager** on the machine in your Azure Local instance with the GPU installed.

> [!NOTE]
- > [DDA doesn't support failover](/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment). This is a VM limitation with DDA. Therefore, we recommend using **Hyper-V Manager** to deploy the VM on the machine instead of **Failover Cluster Manager**. Use of **Failover Cluster Manager** with DDA will fail with an error message indicating that the VM has a device that doesn't support high availability.
- 3. Using the Ubuntu ISO downloaded in step 1, create a new VM using the **New Virtual Machine Wizard** in **Hyper-V Manager** to create an Ubuntu Generation 1 VM with 2GB of memory and a network card attached to it.
- 4. In PowerShell, assign the Dismounted GPU device to the VM using the cmdlets below, replacing the *LocationPath* value with the value for your device.
+ > [DDA doesn't support failover](/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment). This is a VM limitation with DDA. Therefore, we recommend using **Hyper-V Manager** to deploy the VM on the machine instead of **Failover Cluster Manager**. Use of **Failover Cluster Manager** with DDA fails with an error message indicating that the VM has a device that doesn't support high availability.
+ 3. Using the Ubuntu ISO downloaded in step 1, use the **New Virtual Machine Wizard** in **Hyper-V Manager** to create an Ubuntu Generation 1 VM with 2 GB of memory and a network card attached.
+ 4. In PowerShell, assign the dismounted GPU device to the VM using the following cmdlets, replacing the *LocationPath* value with the value for your device.

```PowerShell
# Confirm that there are no DDA devices assigned to the VM
Get-VMAssignableDevice -VMName Ubuntu
@@ -55,11 +55,11 @@ This topic provides step-by-step instructions on how to install and configure an
Get-VMAssignableDevice -VMName Ubuntu
```

- Successful assignment of the GPU to the VM will show the output below:
+ Successful assignment of the GPU to the VM shows the following output:

> The Value 33280Mb should suffice for most GPUs, but should be replaced with a value greater than your GPU memory.
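For reference, a typical DDA dismount-and-assign sequence looks roughly like the following sketch. The VM name `Ubuntu`, the example *LocationPath*, and the MMIO values are illustrative placeholders; substitute your own values:

```PowerShell
# Sketch of a typical DDA assignment sequence; all values are illustrative.
$locationPath = "PCIROOT(0)#PCI(0200)#PCI(0000)"   # replace with the PCIRoot value saved earlier

# Dismount the GPU from the host so it can be assigned to the VM
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force

# Reserve MMIO space for the GPU; keep HighMemoryMappedIoSpace larger than the GPU memory
Set-VM -Name Ubuntu -GuestControlledCacheTypes $true -LowMemoryMappedIoSpace 3Gb -HighMemoryMappedIoSpace 33280Mb

# Assign the dismounted GPU to the VM and confirm the assignment
Add-VMAssignableDevice -LocationPath $locationPath -VMName Ubuntu
Get-VMAssignableDevice -VMName Ubuntu
```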
5. Using Hyper-V Manager, connect to the VM and start the Ubuntu OS install. Choose the defaults to install the Ubuntu OS on the VM.

- 6. After the installation is complete, use **Hyper-V Manager** to shut down the VM and configure the **Automatic Stop Action** for the VM to shut down the guest operating system as in the screenshot below:
+ 6. After the installation is complete, use **Hyper-V Manager** to shut down the VM and configure the **Automatic Stop Action** for the VM to shut down the guest operating system as in the screenshot:

:::image type="content" source="media/attach-gpu-to-linux-vm/guest-shutdown.png" alt-text="Guest OS Shutdown Screenshot." lightbox="media/attach-gpu-to-linux-vm/guest-shutdown.png":::
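The same setting can also be applied from PowerShell; a one-line sketch, assuming the VM is named `Ubuntu`:

```PowerShell
# Sketch: configure the Automatic Stop Action to shut down the guest OS (VM name is illustrative)
Set-VM -Name Ubuntu -AutomaticStopAction ShutDown
```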
@@ -94,13 +94,13 @@ This topic provides step-by-step instructions on how to install and configure an
10. Upon login through the SSH client, issue the command **lspci** and validate that the NVIDIA GPU is listed as "3D controller."

> [!IMPORTANT]
- > If The NVIDIA GPU is not seen as "3D controller," please do not proceed further. Please ensure that the steps above are followed before proceeding.
+ > If the NVIDIA GPU isn't listed as "3D controller," don't proceed further. Make sure the preceding steps are complete before you continue.
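A quick way to run this check from the SSH session, assuming the standard Ubuntu `lspci` and `grep` tools are available:

```shell
# Sketch: filter lspci output for the NVIDIA device; it should be reported as a "3D controller"
lspci | grep -i nvidia
```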
11. Within the VM, search for and open **Software & Updates**. Navigate to **Additional Drivers**, then choose the latest NVIDIA GPU drivers listed. Complete the driver install by clicking the **Apply Changes** button.

- 12. Restart the Ubuntu VM after the driver installation completes. Once the VM starts, connect through the SSH client and issue the command **nvidia-smi** to verify that the NVIDIA GPU driver installation completed successfully. The output should be similar to the screenshot below:
+ 12. Restart the Ubuntu VM after the driver installation completes. Once the VM starts, connect through the SSH client and issue the command **nvidia-smi** to verify that the NVIDIA GPU driver installation completed successfully. The output should be similar to the screenshot:

:::image type="content" source="media/attach-gpu-to-linux-vm/nvidia-smi.png" alt-text="Screenshot that shows the output from the nvidia-smi command." lightbox="media/attach-gpu-to-linux-vm/nvidia-smi.png":::
@@ -160,7 +160,7 @@ This topic provides step-by-step instructions on how to install and configure an
## Configure Azure IoT Edge

- To prepare for this configuration, please review the FAQ contained in the [NVIDIA-Deepstream-Azure-IoT-Edge-on-a-NVIDIA-Jetson-Nano](https://github.com/Azure-Samples/NVIDIA-Deepstream-Azure-IoT-Edge-on-a-NVIDIA-Jetson-Nano) GitHub repo, which explains the need to install Docker instead of Moby. After reviewing, proceed to the steps below.
+ To prepare for this configuration, review the FAQ in the [NVIDIA-Deepstream-Azure-IoT-Edge-on-a-NVIDIA-Jetson-Nano](https://github.com/Azure-Samples/NVIDIA-Deepstream-Azure-IoT-Edge-on-a-NVIDIA-Jetson-Nano) GitHub repo, which explains the need to install Docker instead of Moby. After reviewing, proceed to the following steps.
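As a minimal sketch of installing Docker Engine (rather than Moby) on the Ubuntu VM, assuming Docker's convenience script is acceptable in your environment:

```shell
# Sketch: install Docker Engine on Ubuntu using Docker's convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```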
### Install NVIDIA Docker
@@ -196,7 +196,7 @@ To prepare for this configuration, please review the FAQ contained in the [NVIDI
```shell
sudo docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi
```

- Successful installation will look like the output in the screenshot below:
+ Successful installation looks like the output in the screenshot:
- Replace the configuration above with the configuration below:
+ Replace the configuration above with the following configuration:

```shell
{
@@ -498,7 +498,7 @@ To prepare for this configuration, please review the FAQ contained in the [NVIDI
}
```

- 18. Click **Review and Create**, and on the next page click **Create**. You should now see the three modules listed below for your IoT Edge device in the Azure portal:
+ 18. Click **Review and Create**, and on the next page click **Create**. You should now see the three modules listed for your IoT Edge device in the Azure portal:
- > It will take a few minutes for the NvidiaDeepstream Container to be downloaded. You can validate the download using the command "journalctl -u iotedge --no-pager --no-full" to look at the iotedge daemon logs.
+ > It takes a few minutes for the NvidiaDeepstream Container to be downloaded. You can validate the download using the command "journalctl -u iotedge --no-pager --no-full" to look at the iotedge daemon logs.

- 20. Confirm that the NvdiaDeepStreem Container is operational. The command output in the screenshots below indicates success.
+ 20. Confirm that the NvidiaDeepstream Container is operational. The command output in the screenshots indicates success.
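If you prefer to check from the VM's shell instead of the screenshots, the following sketch shows two commands that typically confirm the module is running; it assumes the `iotedge` CLI and Docker from the earlier steps are installed:

```shell
# Sketch: list IoT Edge modules and running containers to confirm the DeepStream module is up
sudo iotedge list
sudo docker ps
```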
azure-local/manage/gpu-manage-via-device.md (2 additions & 2 deletions)
@@ -5,7 +5,7 @@ author: alkohli
ms.author: alkohli
ms.topic: how-to
ms.service: azure-local
- ms.date: 03/17/2025
+ ms.date: 03/21/2025
---

# Manage GPUs via Discrete Device Assignment (preview)
@@ -28,7 +28,7 @@ Before you begin, satisfy the following prerequisites:
## Attach a GPU during Azure Local VM creation

- Follow the steps outlined in [Create virtual machines on Azure Local](create-arc-virtual-machines.md?tabs=azurecli) and utilize the additional hardware profile details to add GPU to your create process.
+ Follow the steps outlined in [Create Azure Local VMs enabled by Azure Arc](create-arc-virtual-machines.md?tabs=azurecli) and use the additional hardware profile details to add a GPU during the creation process.