
Commit 9639cbb

Merge pull request #17360 from v-sissondan/rebrand-manage-vm
Azure Arc rebranding: Manage non-Arc VMs
2 parents: 6b0efcc + 177d803

File tree

5 files changed: +41, -37 lines changed

Lines changed: 4 additions & 2 deletions

@@ -1,10 +1,12 @@
 ---
 author: alkohli
 ms.topic: include
-ms.date: 01/08/2025
+ms.date: 03/31/2025
 ms.author: alkohli

 ---

 > [!NOTE]
-> The recommended way to create and manage VMs on Azure Local is using the [Azure Arc control plane](../manage/azure-arc-vm-management-overview.md). However, since the functionality described in this article is not yet provided by Azure Arc, you can use Windows Admin Center or PowerShell as described below. Note that VMs created this way aren't Arc-enabled VMs. They have limited manageability from the Azure Arc control plane and fewer Azure Hybrid Benefits, such as no free use of Azure Update Manager.
+> - The recommended way to create and manage VMs on Azure Local is using the [Azure Arc control plane](../manage/azure-arc-vm-management-overview.md). However, since the functionality described in this article isn't yet provided by Azure Arc, you can use Windows Admin Center or PowerShell as described in this article. The VMs created this way aren't enabled by Azure Arc, have limited manageability from the Azure Arc control plane, and fewer Azure Hybrid Benefits, including usage of Azure Update Manager at no extra cost.
+>
+> - For more information, see [Compare management capabilities of VMs on Azure Local](../concepts/compare-vm-management-capabilities.md) and [Supported operations for Azure Local VMs](../manage/virtual-machine-operations.md).

azure-local/includes/hci-arc-vm.md

Lines changed: 4 additions & 2 deletions

@@ -1,12 +1,14 @@
 ---
 author: alkohli
 ms.topic: include
-ms.date: 01/08/2025
+ms.date: 03/31/2025
 ms.author: alkohli

 ---

 <!--- Link must remain site-relative to prevent build issues with incoming includes from the windowsserverdocs repo --->

 > [!NOTE]
-> The recommended way to create and manage VMs on Azure Local is using the [Azure Arc control plane](/azure-stack/hci/manage/azure-arc-vm-management-overview). Use the mechanism described below to manage your VMs only if you need functionality that is not available in Azure Arc VMs.
+> - The recommended way to create and manage VMs on Azure Local is using the [Azure Arc control plane](../manage/azure-arc-vm-management-overview.md). However, since the functionality described in this article isn't yet provided by Azure Arc, you can use Windows Admin Center or PowerShell as described in this article. The VMs created this way aren't enabled by Azure Arc, have limited manageability from the Azure Arc control plane, and fewer Azure Hybrid Benefits, including usage of Azure Update Manager at no extra cost.
+>
+> - For more information, see [Compare management capabilities of VMs on Azure Local](../concepts/compare-vm-management-capabilities.md) and [Supported operations for Azure Local VMs](../manage/virtual-machine-operations.md).
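Both updated includes point readers to Windows Admin Center or PowerShell for VMs that aren't Arc-enabled. As a rough orientation for the PowerShell path, a minimal Hyper-V sketch might look like the following; the VM name, memory size, VHD path, and switch name are illustrative assumptions, not part of this commit:

```PowerShell
# Minimal sketch only: create and start a non-Arc VM with Hyper-V cmdlets.
# All names, sizes, and paths are placeholders - replace them with your own values.
New-VM -Name "ContosoVM01" `
    -Generation 2 `
    -MemoryStartupBytes 4GB `
    -NewVHDPath "C:\ClusterStorage\UserStorage_1\VMs\ContosoVM01.vhdx" `
    -NewVHDSizeBytes 64GB `
    -SwitchName "ConvergedSwitch"

Start-VM -Name "ContosoVM01"
```

A VM created this way shows up in Hyper-V Manager and Windows Admin Center but, as the note in the diff says, isn't Arc-enabled and has limited manageability from the Azure Arc control plane.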

azure-local/manage/attach-gpu-to-linux-vm.md

Lines changed: 22 additions & 22 deletions

@@ -6,7 +6,7 @@ ms.author: alkohli
 ms.topic: how-to
 ms.service: azure-local
 ms.custom: linux-related-content
-ms.date: 10/23/2024
+ms.date: 03/28/2025
 ---

 # Attaching a GPU to an Ubuntu Linux VM on Azure Local
@@ -24,7 +24,7 @@ This topic provides step-by-step instructions on how to install and configure an
 3. Sign in using an account with administrative privileges to the machine with the NVIDIA GPU installed.
 4. Open **Device Manager** and navigate to the *other devices* section. You should see a device listed as "3D Video Controller."
 5. Right-click on "3D Video Controller" to bring up the **Properties** page. Click **Details**. From the dropdown under **Property**, select "Location paths."
-6. Note the value with string PCIRoot as highlighted in the screenshot below. Right-click on **Value** and copy/save it.
+6. Note the value with string PCIRoot as highlighted in the screenshot. Right-click on **Value** and copy/save it.

 :::image type="content" source="media/attach-gpu-to-linux-vm/pciroot.png" alt-text="Location Path Screenshot." lightbox="media/attach-gpu-to-linux-vm/pciroot.png":::
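If you prefer to capture the location path from PowerShell instead of copying it out of Device Manager, a hypothetical equivalent of steps 4-6 could look like this sketch; it assumes the GPU still enumerates with the friendly name "3D Video Controller" mentioned above:

```PowerShell
# Hypothetical helper: read the PCIROOT location path of the device that
# Device Manager lists as "3D Video Controller" (friendly name assumed).
$gpu = Get-PnpDevice -FriendlyName "3D Video Controller"
($gpu | Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths).Data |
    Where-Object { $_ -like "PCIROOT*" }
```

Save the returned PCIROOT value; it's the *LocationPath* used when the GPU is assigned to the VM later in the article.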

@@ -41,9 +41,9 @@ This topic provides step-by-step instructions on how to install and configure an
 1. Download [Ubuntu desktop release 18.04.02 ISO](http://old-releases.ubuntu.com/releases/18.04.2/).
 2. Open **Hyper-V Manager** on the machine in your Azure local instance with the GPU installed.
 > [!NOTE]
-> [DDA doesn't support failover](/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment). This is a VM limitation with DDA. Therefore, we recommend using **Hyper-V Manager** to deploy the VM on the machine instead of **Failover Cluster Manager**. Use of **Failover Cluster Manager** with DDA will fail with an error message indicating that the VM has a device that doesn't support high availability.
-3. Using the Ubuntu ISO downloaded in step 1, create a new VM using the **New Virtual Machine Wizard** in **Hyper-V Manager** to create an Ubuntu Generation 1 VM with 2GB of memory and a network card attached to it.
-4. In PowerShell, assign the Dismounted GPU device to the VM using the cmdlets below, replacing the *LocationPath* value with the value for your device.
+> [DDA doesn't support failover](/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment). This is a VM limitation with DDA. Therefore, we recommend using **Hyper-V Manager** to deploy the VM on the machine instead of **Failover Cluster Manager**. Use of **Failover Cluster Manager** with DDA fails with an error message indicating that the VM has a device that doesn't support high availability.
+3. Using the Ubuntu ISO downloaded in step 1, create a new VM using the **New Virtual Machine Wizard** in **Hyper-V Manager** to create an Ubuntu Generation 1 VM with 2 GB of memory and a network card attached to it.
+4. In PowerShell, assign the Dismounted GPU device to the VM using the cmdlets, replacing the *LocationPath* value with the value for your device.
 ```PowerShell
 # Confirm that there are no DDA devices assigned to the VM
 Get-VMAssignableDevice -VMName Ubuntu
@@ -55,11 +55,11 @@ This topic provides step-by-step instructions on how to install and configure an
 Get-VMAssignableDevice -VMName Ubuntu
 ```

-Successful assignment of the GPU to the VM will show the output below:
+Here's an output from the successful assignment of the GPU to the VM:

 :::image type="content" source="media/attach-gpu-to-linux-vm/assign-gpu.png" alt-text="Assign GPU Screenshot." lightbox="media/attach-gpu-to-linux-vm/assign-gpu.png":::

-Configure additional values following GPU documentation [here](/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda):
+Configure other values following GPU documentation [here](/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda):

 ```PowerShell
 # Enable Write-Combining on the CPU
@@ -73,11 +73,11 @@ This topic provides step-by-step instructions on how to install and configure an
 ```

 > [!NOTE]
-> The Value 33280Mb should suffice for most GPUs, but should be replaced with a value greater than your GPU memory.
+> The value `33280Mb` should suffice for most GPUs, but should be replaced with a value greater than your GPU memory.

 5. Using Hyper-V Manager, connect to the VM and start the Ubuntu OS install. Choose the defaults to install the Ubuntu OS on the VM.

-6. After the installation is complete, use **Hyper-V Manager** to shut down the VM and configure the **Automatic Stop Action** for the VM to shut down the guest operating system as in the screenshot below:
+6. After the installation is complete, use **Hyper-V Manager** to shut down the VM and configure the **Automatic Stop Action** for the VM to shut down the guest operating system as in the screenshot:

 :::image type="content" source="media/attach-gpu-to-linux-vm/guest-shutdown.png" alt-text="Guest OS Shutdown Screenshot." lightbox="media/attach-gpu-to-linux-vm/guest-shutdown.png":::
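The hunks above only show the edges of the assignment and configuration blocks, so the middle of the script isn't visible in this diff. For orientation, the end-to-end sequence from the DDA guidance the article links to typically looks like the following sketch. The VM name `Ubuntu` and the `33280Mb` value match the article; the location path placeholder and the other MMIO sizes are assumptions to replace with values appropriate for your GPU:

```PowerShell
# Location path copied earlier from Device Manager (placeholder - use your own value)
$locationPath = "<PCIROOT location path from Device Manager>"

# Dismount the GPU from the host so it can be passed through to the VM
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath

# Assign the dismounted GPU to the VM
Add-VMAssignableDevice -LocationPath $locationPath -VMName Ubuntu

# Enable write-combining and reserve MMIO space before starting the VM
Set-VM -GuestControlledCacheTypes $true -VMName Ubuntu
Set-VM -LowMemoryMappedIoSpace 3Gb -VMName Ubuntu
Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName Ubuntu

# Confirm the device now shows as assigned to the VM
Get-VMAssignableDevice -VMName Ubuntu
```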

@@ -94,13 +94,13 @@ This topic provides step-by-step instructions on how to install and configure an
 10. Upon login through the SSH client, issue the command **lspci** and validate that the NVIDIA GPU is listed as "3D controller."

 > [!IMPORTANT]
-> If The NVIDIA GPU is not seen as "3D controller," please do not proceed further. Please ensure that the steps above are followed before proceeding.
+> If The NVIDIA GPU is not seen as "3D controller," don't proceed further. Please ensure that the steps above are followed before proceeding.

 11. Within the VM, search for and open **Software & Updates**. Navigate to **Additional Drivers**, then choose the latest NVIDIA GPU drivers listed. Complete the driver install by clicking the **Apply Changes** button.

 :::image type="content" source="media/attach-gpu-to-linux-vm/driver-install.png" alt-text="Driver Install Screenshot." lightbox="media/attach-gpu-to-linux-vm/driver-install.png":::

-12. Restart the Ubuntu VM after the driver installation completes. Once the VM starts, connect through the SSH client and issue the command **nvidia-smi** to verify that the NVIDIA GPU driver installation completed successfully. The output should be similar to the screenshot below:
+12. Restart the Ubuntu VM after the driver installation completes. Once the VM starts, connect through the SSH client and issue the command **nvidia-smi** to verify that the NVIDIA GPU driver installation completed successfully. The output should be similar to the screenshot:

 :::image type="content" source="media/attach-gpu-to-linux-vm/nvidia-smi.png" alt-text="Screenshot that shows the output from the nvidia-smi command." lightbox="media/attach-gpu-to-linux-vm/nvidia-smi.png":::

@@ -160,7 +160,7 @@ This topic provides step-by-step instructions on how to install and configure an

 ## Configure Azure IoT Edge

-To prepare for this configuration, please review the FAQ contained in the [NVIDIA-Deepstream-Azure-IoT-Edge-on-a-NVIDIA-Jetson-Nano](https://github.com/Azure-Samples/NVIDIA-Deepstream-Azure-IoT-Edge-on-a-NVIDIA-Jetson-Nano) GitHub repo, which explains the need to install Docker instead of Moby. After reviewing, proceed to the steps below.
+To prepare for this configuration, please review the FAQ contained in the [NVIDIA-Deepstream-Azure-IoT-Edge-on-a-NVIDIA-Jetson-Nano](https://github.com/Azure-Samples/NVIDIA-Deepstream-Azure-IoT-Edge-on-a-NVIDIA-Jetson-Nano) GitHub repo, which explains the need to install Docker instead of Moby. After reviewing, proceed to the next steps.

 ### Install NVIDIA Docker

@@ -196,7 +196,7 @@ To prepare for this configuration, please review the FAQ contained in the [NVIDI
 sudo docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi
 ```

-Successful installation will look like the output in the screenshot below:
+Here's an output from a successful installation:

 :::image type="content" source="media/attach-gpu-to-linux-vm/docker.png" alt-text="Successful Docker Install Screenshot." lightbox="media/attach-gpu-to-linux-vm/docker.png":::

@@ -263,13 +263,13 @@ To prepare for this configuration, please review the FAQ contained in the [NVIDI
 wget -O cars-streams.tar.gz --no-check-certificate https://onedrive.live.com/download?cid=0C0A4A69A0CDCB4C&resid=0C0A4A69A0CDCB4C%21588371&authkey=AAavgrxG95v9gu0
 ```

-Un-compress the video files:
+Uncompress the video files:

 ```shell
 tar -xzvf cars-streams.tar.gz
 ```

-The contents of the directory /var/deepstream/custom_streams should be similar to the screenshot below:
+The contents of the directory /var/deepstream/custom_streams should be similar to the screenshot:

 :::image type="content" source="media/attach-gpu-to-linux-vm/custom-streams.png" alt-text="Custom Streams Screenshot." lightbox="media/attach-gpu-to-linux-vm/custom-streams.png":::
@@ -332,7 +332,7 @@ To prepare for this configuration, please review the FAQ contained in the [NVIDI
 codec=1
 sync=0
 bitrate=4000000
-# set below properties in case of RTSPStreaming
+# set properties in case of RTSPStreaming
 rtsp-port=8554
 udp-port=5400

@@ -385,7 +385,7 @@ To prepare for this configuration, please review the FAQ contained in the [NVIDI
 live-source=0
 batch-size=4
 ##time out in usec, to wait after the first buffer is available
-##to push the batch even if the complete batch is not formed
+##to push the batch even if the complete batch isn't formed
 batched-push-timeout=40000
 ## Set muxer output width and height
 width=1920
@@ -432,7 +432,7 @@ To prepare for this configuration, please review the FAQ contained in the [NVIDI

 :::image type="content" source="media/attach-gpu-to-linux-vm/iot-edge.png" alt-text="Automatic Device Management Screenshot." lightbox="media/attach-gpu-to-linux-vm/iot-edge.png":::

-13. In the right-hand pane, select the device identity whose device connection string was used above. Click on set modules:
+13. In the right-hand pane, select the device identity whose device connection string was used. Click on set modules:

 :::image type="content" source="media/attach-gpu-to-linux-vm/set-modules.png" alt-text="Set Modules Screenshot." lightbox="media/attach-gpu-to-linux-vm/set-modules.png":::

@@ -464,7 +464,7 @@ To prepare for this configuration, please review the FAQ contained in the [NVIDI

 :::image type="content" source="media/attach-gpu-to-linux-vm/container-create-options.png" alt-text="Container Create Options Screenshot." lightbox="media/attach-gpu-to-linux-vm/container-create-options.png":::

-Replace the configuration above with the configuration below:
+Replace the configuration above with the configuration:

 ```shell
 {
@@ -498,7 +498,7 @@ To prepare for this configuration, please review the FAQ contained in the [NVIDI
 }
 ```

-18. Click **Review and Create**, and on the next page click **Create**. You should now see the three modules listed below for your IoT Edge device in the Azure portal:
+18. Select **Review and Create**, and on the next page click **Create**. You should now see the three modules listed for your IoT Edge device in the Azure portal:

 :::image type="content" source="media/attach-gpu-to-linux-vm/edge-hub-connections.png" alt-text="Modules and IoT Edge Hub Connections Screenshot." lightbox="media/attach-gpu-to-linux-vm/edge-hub-connections.png":::

@@ -517,9 +517,9 @@ To prepare for this configuration, please review the FAQ contained in the [NVIDI
 :::image type="content" source="media/attach-gpu-to-linux-vm/verify-modules-nvidia-smi.png" alt-text="nvidia-smi screenshot." lightbox="media/attach-gpu-to-linux-vm/verify-modules-nvidia-smi.png":::

 > [!NOTE]
-> It will take a few minutes for the NvidiaDeepstream Container to be downloaded. You can validate the download using the command "journalctl -u iotedge --no-pager --no-full" to look at the iotedge daemon logs.
+> It takes a few minutes for the NvidiaDeepstream Container to be downloaded. You can validate the download using the command `journalctl -u iotedge --no-pager --no-full` to look at the iotedge daemon logs.

-20. Confirm that the NvdiaDeepStreem Container is operational. The command output in the screenshots below indicates success.
+20. Confirm that the NvdiaDeepStreem Container is operational. The command output in the screenshots indicates success.

 ```shell
 sudo iotedge list

azure-local/manage/gpu-manage-via-device.md

Lines changed: 6 additions & 6 deletions

@@ -5,14 +5,14 @@ author: alkohli
 ms.author: alkohli
 ms.topic: how-to
 ms.service: azure-local
-ms.date: 10/21/2024
+ms.date: 03/28/2025
 ---

 # Manage GPUs via Discrete Device Assignment (preview)

 [!INCLUDE [applies-to](../includes/hci-applies-to-23h2.md)]

-This article describes how to manage GPU DDA with Arc virtual machines (VMs) on Azure Local. For GPU DDA management on AKS enabled by Azure Arc, see [Use GPUs for compute-intensive workloads](/azure/aks/hybrid/deploy-gpu-node-pool#create-a-new-workload-cluster-with-a-gpu-enabled-node-pool).
+This article describes how to manage GPU DDA with Azure Local VMs enabled by Azure Arc. For GPU DDA management on Azure Kubernetes Service (AKS) enabled by Azure Arc, see [Use GPUs for compute-intensive workloads](/azure/aks/hybrid/deploy-gpu-node-pool#create-a-new-workload-cluster-with-a-gpu-enabled-node-pool).

 Discrete Device Assignment (DDA) allows you to dedicate a physical graphical processing unit (GPU) to your workload. In a DDA deployment, virtualized workloads run on the native driver and typically have full access to the GPU's functionality. DDA offers the highest level of app compatibility and potential performance.
@@ -24,17 +24,17 @@ Discrete Device Assignment (DDA) allows you to dedicate a physical graphical pro

 Before you begin, satisfy the following prerequisites:

-- Follow the setup instructions found at [Prepare GPUs for Azure Local](./gpu-manage-via-device.md) to prepare your Azure Local and Arc VMs, and to ensure that your GPUs are prepared for DDA.
+- Follow the setup instructions found at [Prepare GPUs for Azure Local](./gpu-manage-via-device.md) to prepare your Azure Local VMs, and to ensure that your GPUs are prepared for DDA.

-## Attach a GPU during Arc VM creation
+## Attach a GPU during Azure Local VM creation

-Follow the steps outlined in [Create Arc virtual machines on Azure Local](create-arc-virtual-machines.md?tabs=azurecli) and utilize the additional hardware profile details to add GPU to your create process.
+Follow the steps outlined in [Create Azure Local VMs enabled by Azure Arc](create-arc-virtual-machines.md?tabs=azurecli) and utilize the additional hardware profile details to add GPU to your create process.

 ```azurecli
 az stack-hci-vm create --name $vmName --resource-group $resource_group --admin-username $userName --admin-password $password --computer-name $computerName --image $imageName --location $location --authentication-type all --nics $nicName --custom-location $customLocationID --hardware-profile memory-mb="8192" processors="4" --storage-path-id $storagePathId --gpus GpuDDA
 ```

-## Attach a GPU after Arc VM creation
+## Attach a GPU after VM creation

 Use the following CLI command to attach the GPU:
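For context, the `az stack-hci-vm create` line in this hunk assumes the surrounding variables are already populated. A minimal sketch of that setup is shown here; every value is an illustrative placeholder, not something taken from the commit:

```PowerShell
# Illustrative placeholders only - substitute values from your own environment
$vmName           = "gpu-vm-01"
$resource_group   = "rg-azure-local"
$userName         = "vmadmin"
$password         = "<admin password>"
$computerName     = "gpu-vm-01"
$imageName        = "<VM image name>"
$location         = "eastus"
$nicName          = "gpu-vm-01-nic"
$customLocationID = "<custom location resource ID>"
$storagePathId    = "<storage path resource ID>"
```

With those set, the create command shown in the diff adds the GPU through `--gpus GpuDDA` as part of the hardware profile.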

azure-local/manage/gpu-preparation.md

Lines changed: 5 additions & 5 deletions

@@ -4,15 +4,15 @@ description: Learn how to prepare GPUs for an Azure Local instance.
 author: alkohli
 ms.author: alkohli
 ms.topic: how-to
-ms.date: 03/03/2025
+ms.date: 03/28/2025
 ms.service: azure-local
 ---

 # Prepare GPUs for Azure Local

 [!INCLUDE [applies-to](../includes/hci-applies-to-23h2.md)]

-This article describes how to prepare graphical processing units (GPUs) on your Azure Local instance for computation-intensive workloads running on Arc virtual machines (VMs) and AKS enabled by Azure Arc. GPUs are used for computation-intensive workloads such as machine learning and deep learning.
+This article describes how to prepare graphical processing units (GPUs) on your Azure Local instance for computation-intensive workloads running on Azure Local VMs enabled by Azure Arc and Azure Kubernetes Service (AKS) enabled by Azure Arc. GPUs are used for computation-intensive workloads such as machine learning and deep learning.


 ## Attaching GPUs on Azure Local
@@ -41,12 +41,12 @@ NVIDIA supports their workloads separately with their virtual GPU software. For

 For AKS workloads, see [GPUs for AKS for Arc](/azure/aks/hybrid/deploy-gpu-node-pool#supported-gpu-models).

-The following GPU models are supported using both DDA and GPU-P for Arc VM workloads:
+The following GPU models are supported using both DDA and GPU-P for Azure Local VM workloads:

 - NVIDIA A2
 - NVIDIA A16

-These additional GPU models are supported using GPU-P (only) for Arc VM workloads:
+These additional GPU models are supported using GPU-P (only) for VM workloads:

 - NVIDIA A10
 - NVIDIA A40
@@ -263,7 +263,7 @@ Follow these steps to configure the GPU partition count in PowerShell:

 ## Guest requirements

-GPU management is supported for the following Arc VM workloads:
+GPU management is supported for the following VM workloads:

 - Generation 2 VMs

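The header of the last hunk references configuring the GPU partition count in PowerShell, although those steps fall outside the lines shown in this commit. For orientation, the Hyper-V partitioning cmdlets involved typically look like the following sketch; the partition count of 4 is an assumption and should be one of the values the GPU reports as valid:

```PowerShell
# Inspect the partitionable GPUs on the host and their supported partition counts
Get-VMHostPartitionableGpu | Format-List Name, ValidPartitionCounts, PartitionCount

# Set the partition count for a specific GPU (the name and count here are illustrative)
Set-VMHostPartitionableGpu -Name "<GPU device name from the previous command>" -PartitionCount 4
```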