<!--- Link must remain site-relative to prevent build issues with incoming includes from the windowsserverdocs repo --->

> [!NOTE]
> - The recommended way to create and manage VMs on Azure Local is to use the [Azure Arc control plane](../manage/azure-arc-vm-management-overview.md). However, because the functionality described in this article isn't yet provided by Azure Arc, you can use Windows Admin Center or PowerShell as described in this article. VMs created this way aren't enabled by Azure Arc, have limited manageability from the Azure Arc control plane, and don't receive all Azure Hybrid Benefits, such as use of Azure Update Manager at no extra cost.
>
> - For more information, see [Compare management capabilities of VMs on Azure Local](../concepts/compare-vm-management-capabilities.md) and [Supported operations for Azure Local VMs](../manage/virtual-machine-operations.md).

ms.author: alkohli
ms.topic: how-to
ms.service: azure-local
ms.custom: linux-related-content
ms.date: 03/28/2025
---

# Attaching a GPU to an Ubuntu Linux VM on Azure Local

3. Sign in using an account with administrative privileges to the machine with the NVIDIA GPU installed.
4. Open **Device Manager** and navigate to the *other devices* section. You should see a device listed as "3D Video Controller."
5. Right-click "3D Video Controller" to bring up the **Properties** page. Click **Details**. From the dropdown under **Property**, select "Location paths."
6. Note the value with the string PCIRoot as highlighted in the screenshot. Right-click **Value** and copy/save it.
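
If you prefer to get the location path from PowerShell instead of Device Manager, a minimal sketch (assuming the GPU still shows up as "3D Video Controller") looks like this:

```PowerShell
# Find the GPU that Device Manager lists as "3D Video Controller"
$gpu = Get-PnpDevice -FriendlyName "3D Video Controller"

# Read its location path (the PCIROOT value used later for DDA)
$locationPath = (Get-PnpDeviceProperty -InstanceId $gpu.InstanceId -KeyName "DEVPKEY_Device_LocationPaths").Data[0]
$locationPath
```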

2. Open **Hyper-V Manager** on the machine in your Azure Local instance with the GPU installed.

> [!NOTE]
> [DDA doesn't support failover](/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment). This is a VM limitation with DDA. Therefore, we recommend using **Hyper-V Manager** to deploy the VM on the machine instead of **Failover Cluster Manager**. Use of **Failover Cluster Manager** with DDA fails with an error message indicating that the VM has a device that doesn't support high availability.

3. Using the Ubuntu ISO downloaded in step 1, use the **New Virtual Machine Wizard** in **Hyper-V Manager** to create an Ubuntu Generation 1 VM with 2 GB of memory and a network card attached.

4. In PowerShell, assign the dismounted GPU device to the VM using the following cmdlets, replacing the *LocationPath* value with the value for your device.

```PowerShell
# Confirm that there are no DDA devices assigned to the VM
Get-VMAssignableDevice -VMName Ubuntu

# ...

Get-VMAssignableDevice -VMName Ubuntu
```
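
A minimal sketch of the typical dismount-and-assign sequence between those two checks, assuming a VM named *Ubuntu* and using a placeholder location path (replace it with the value you saved from Device Manager):

```PowerShell
# Placeholder location path; replace with the PCIROOT value saved earlier
$locationPath = "PCIROOT(0)#PCI(0300)#PCI(0000)"

# Disable the device on the host, then dismount it from the host
Disable-PnpDevice -InstanceId (Get-PnpDevice -FriendlyName "3D Video Controller").InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath

# Assign the dismounted GPU to the VM
Add-VMAssignableDevice -LocationPath $locationPath -VMName Ubuntu
```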

Here's an output from the successful assignment of the GPU to the VM:

Configure additional values by following the [GPU documentation for DDA](/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda):

```PowerShell
# Enable Write-Combining on the CPU
# ...
```
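
For reference, the full set of cmdlets in the DDA documentation is typically along these lines. This is a sketch, assuming a VM named *Ubuntu*; the MMIO values shown are the commonly documented ones, and the `33280Mb` value is discussed in the note that follows:

```PowerShell
# Enable Write-Combining on the CPU
Set-VM -GuestControlledCacheTypes $true -VMName Ubuntu

# Configure the 32-bit MMIO space
Set-VM -LowMemoryMappedIoSpace 3Gb -VMName Ubuntu

# Configure greater-than-32-bit MMIO space
Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName Ubuntu
```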

> [!NOTE]
> The value `33280Mb` should suffice for most GPUs, but replace it with a value greater than your GPU memory if needed.

5. Using Hyper-V Manager, connect to the VM and start the Ubuntu OS install. Choose the defaults to install the Ubuntu OS on the VM.

6. After the installation is complete, use **Hyper-V Manager** to shut down the VM and configure the **Automatic Stop Action** for the VM to shut down the guest operating system, as shown in the screenshot:

:::image type="content" source="media/attach-gpu-to-linux-vm/guest-shutdown.png" alt-text="Guest OS Shutdown Screenshot." lightbox="media/attach-gpu-to-linux-vm/guest-shutdown.png":::
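
If you'd rather set this from PowerShell than from Hyper-V Manager, an equivalent one-liner (assuming the VM is named *Ubuntu*) is:

```PowerShell
# Configure the VM to shut down the guest OS when the host stops the VM
Set-VM -Name Ubuntu -AutomaticStopAction ShutDown
```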

10. Upon login through the SSH client, issue the command **lspci** and validate that the NVIDIA GPU is listed as "3D controller."

> [!IMPORTANT]
> If the NVIDIA GPU isn't listed as "3D controller," don't proceed further. Make sure the preceding steps are complete before continuing.

11. Within the VM, search for and open **Software & Updates**. Navigate to **Additional Drivers**, then choose the latest NVIDIA GPU drivers listed. Complete the driver install by clicking the **Apply Changes** button.

12. Restart the Ubuntu VM after the driver installation completes. Once the VM starts, connect through the SSH client and issue the command **nvidia-smi** to verify that the NVIDIA GPU driver installation completed successfully. The output should be similar to the screenshot:

:::image type="content" source="media/attach-gpu-to-linux-vm/nvidia-smi.png" alt-text="Screenshot that shows the output from the nvidia-smi command." lightbox="media/attach-gpu-to-linux-vm/nvidia-smi.png":::

## Configure Azure IoT Edge

To prepare for this configuration, review the FAQ in the [NVIDIA-Deepstream-Azure-IoT-Edge-on-a-NVIDIA-Jetson-Nano](https://github.com/Azure-Samples/NVIDIA-Deepstream-Azure-IoT-Edge-on-a-NVIDIA-Jetson-Nano) GitHub repo, which explains the need to install Docker instead of Moby. After reviewing, proceed to the next steps.

### Install NVIDIA Docker

```shell
sudo docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi
```

Successful installation looks like the output in the screenshot:

Replace the preceding configuration with the following configuration:

```shell
{
  ...
}
```

18. Select **Review and Create**, and on the next page select **Create**. You should now see the three modules listed for your IoT Edge device in the Azure portal:

> It takes a few minutes for the NvidiaDeepstream Container to download. You can validate the download by running `journalctl -u iotedge --no-pager --no-full` to look at the iotedge daemon logs.

20. Confirm that the NvidiaDeepstream Container is operational. The command output in the screenshots indicates success.

This article describes how to manage GPU DDA with Azure Local VMs enabled by Azure Arc. For GPU DDA management on Azure Kubernetes Service (AKS) enabled by Azure Arc, see [Use GPUs for compute-intensive workloads](/azure/aks/hybrid/deploy-gpu-node-pool#create-a-new-workload-cluster-with-a-gpu-enabled-node-pool).

Discrete Device Assignment (DDA) allows you to dedicate a physical graphical processing unit (GPU) to your workload. In a DDA deployment, virtualized workloads run on the native driver and typically have full access to the GPU's functionality. DDA offers the highest level of app compatibility and potential performance.

Before you begin, satisfy the following prerequisites:

- Follow the setup instructions at [Prepare GPUs for Azure Local](./gpu-manage-via-device.md) to prepare your Azure Local VMs and to ensure that your GPUs are prepared for DDA.

## Attach a GPU during Azure Local VM creation

Follow the steps outlined in [Create Azure Local VMs enabled by Azure Arc](create-arc-virtual-machines.md?tabs=azurecli) and use the additional hardware profile details to add a GPU during the create process.

This article describes how to prepare graphical processing units (GPUs) on your Azure Local instance for computation-intensive workloads running on Azure Local VMs enabled by Azure Arc and Azure Kubernetes Service (AKS) enabled by Azure Arc. GPUs are used for computation-intensive workloads such as machine learning and deep learning.

## Attaching GPUs on Azure Local

NVIDIA supports their workloads separately with their virtual GPU software.

For AKS workloads, see [GPUs for AKS enabled by Azure Arc](/azure/aks/hybrid/deploy-gpu-node-pool#supported-gpu-models).

The following GPU models are supported using both DDA and GPU-P for Azure Local VM workloads:

- NVIDIA A2
- NVIDIA A16

These additional GPU models are supported for VM workloads using GPU-P only:

- NVIDIA A10
- NVIDIA A40

Follow these steps to configure the GPU partition count in PowerShell:
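
As an illustration of what such a configuration involves, here's a minimal sketch using the Hyper-V partitionable GPU cmdlets; it isn't the article's exact procedure, and the partition count of 4 is only an example value:

```PowerShell
# Look at the first partitionable GPU and the partition counts it supports
$gpu = Get-VMHostPartitionableGpu | Select-Object -First 1
$gpu | Select-Object Name, ValidPartitionCounts

# Configure the partition count (choose a value from ValidPartitionCounts)
Set-VMHostPartitionableGpu -Name $gpu.Name -PartitionCount 4
```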

## Guest requirements

GPU management is supported for the following VM workloads: