articles/databox-online/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing.md
8 additions & 8 deletions
@@ -21,7 +21,7 @@ Before you begin, make sure that:
1. You have access to a client system with a [Supported operating system](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device). If you're using a Windows client, the system should run PowerShell 5.0 or later to access the device.

-1. Save the following deployment `json` on your local system. You'll use information from this file to run the IoT Edge deployment. This deployment is based on [Simple CUDA containers](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#running-simple-containers) that are publicly available from Nvidia.
+1. Save the following deployment `json` on your local system. You'll use information from this file to run the IoT Edge deployment. This deployment is based on [Simple CUDA containers](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#running-simple-containers) that are publicly available from NVIDIA.

    ```json
    {
@@ -118,7 +118,7 @@ The first step is to verify that your device is running required GPU driver and
    `Get-HcsGpuNvidiaSmi`

-1. In the Nvidia smi output, make a note of the GPU version and the CUDA version on your device. If you are running Azure Stack Edge 2102 software, this version would correspond to the following driver versions:
+1. In the NVIDIA smi output, make a note of the GPU version and the CUDA version on your device. If you are running Azure Stack Edge 2102 software, this version would correspond to the following driver versions:

    - GPU driver version: 460.32.03
    - CUDA version: 11.2
@@ -152,7 +152,7 @@ The first step is to verify that your device is running required GPU driver and
    [10.100.10.10]: PS>
    ```

-1. Keep this session open as you will use it to view the Nvidia smi output throughout the article.
+1. Keep this session open as you will use it to view the NVIDIA smi output throughout the article.
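The driver and CUDA versions noted above can also be checked programmatically rather than by eye. A minimal shell sketch, assuming the usual `nvidia-smi` banner layout; the banner line below is illustrative only, and on a real device the text would come from the `Get-HcsGpuNvidiaSmi` output:

```shell
# Illustrative nvidia-smi banner line (an assumption, not captured from a device);
# on the device, this text comes from Get-HcsGpuNvidiaSmi.
smi_header='| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |'

# Pull the version numbers out of the banner.
driver_version=$(echo "$smi_header" | sed -n 's/.*Driver Version: \([0-9.]*\).*/\1/p')
cuda_version=$(echo "$smi_header" | sed -n 's/.*CUDA Version: \([0-9.]*\).*/\1/p')

echo "driver=$driver_version cuda=$cuda_version"

# Compare against the versions the article lists for Azure Stack Edge 2102.
if [ "$driver_version" = "460.32.03" ] && [ "$cuda_version" = "11.2" ]; then
  echo "versions match Azure Stack Edge 2102"
fi
```

This is only a parsing sketch; newer device software will report different (equally valid) versions, so treat the comparison values as an example rather than a requirement.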
## Deploy without context-sharing
@@ -216,7 +216,7 @@ For detailed instructions, see [Connect to and manage a Kubernetes cluster via k
### Deploy modules via portal

-Deploy IoT Edge modules via the Azure portal. You'll deploy publicly available Nvidia CUDA sample modules that run n-body simulation.
+Deploy IoT Edge modules via the Azure portal. You'll deploy publicly available NVIDIA CUDA sample modules that run n-body simulation.

1. Make sure that the IoT Edge service is running on your device.
@@ -316,7 +316,7 @@ Deploy IoT Edge modules via the Azure portal. You'll deploy publicly available N
    ```
    There are two pods, `cuda-sample1-97c494d7f-lnmns` and `cuda-sample2-d9f6c4688-2rld9`, running on your device.

-1. While both the containers are running the n-body simulation, view the GPU utilization from the Nvidia smi output. Go to the PowerShell interface of the device and run `Get-HcsGpuNvidiaSmi`.
+1. While both the containers are running the n-body simulation, view the GPU utilization from the NVIDIA smi output. Go to the PowerShell interface of the device and run `Get-HcsGpuNvidiaSmi`.

    Here is an example output when both the containers are running the n-body simulation:
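Rather than scanning the pod listing by hand, the status of the two sample pods can be counted. A hedged sketch: the text below mimics typical `kubectl get pods` output for the two pods named above (column values are illustrative); on the device you would produce the real listing with `kubectl get pods` in the IoT Edge namespace:

```shell
# Illustrative kubectl output for the two CUDA sample pods (an assumption;
# the READY/RESTARTS/AGE values are made up for the sketch).
pods='cuda-sample1-97c494d7f-lnmns   1/1   Running   0   4m
cuda-sample2-d9f6c4688-2rld9   1/1   Running   0   4m'

# Count how many of the listed pods report a Running status.
running=$(echo "$pods" | grep -c 'Running')
echo "running pods: $running"
```

If the count is less than 2, inspect the failing pod with `kubectl describe pod <name>` before checking GPU utilization.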
@@ -349,7 +349,7 @@ Deploy IoT Edge modules via the Azure portal. You'll deploy publicly available N
    ```
    As you can see, there are two containers running the n-body simulation on GPU 0. You can also view their corresponding memory usage.

-1. Once the simulation has completed, the Nvidia smi output will show that there are no processes running on the device.
+1. Once the simulation has completed, the NVIDIA smi output will show that there are no processes running on the device.

    ```powershell
    [10.100.10.10]: PS>Get-HcsGpuNvidiaSmi
@@ -460,7 +460,7 @@ You can now deploy the n-body simulation on two CUDA containers when MPS is runn
    Created nvidia-mps.service
    [10.100.10.10]: PS>
    ```

-1. Get the Nvidia smi output from the PowerShell interface of the device. You can see the `nvidia-cuda-mps-server` process or the MPS service is running on the device.
+1. Get the NVIDIA smi output from the PowerShell interface of the device. You can see that the `nvidia-cuda-mps-server` process, that is, the MPS service, is running on the device.

    Here is an example output:
@@ -548,7 +548,7 @@ You can now deploy the n-body simulation on two CUDA containers when MPS is runn
    PS C:\WINDOWS\system32>
    ```

-1. Get the Nvidia smi output from the PowerShell interface of the device when both the containers are running the n-body simulation. Here is an example output. There are three processes, the `nvidia-cuda-mps-server` process (type C) corresponds to the MPS service and the `/tmp/nbody` processes (type M + C) correspond to the n-body workloads deployed by the modules.
+1. Get the NVIDIA smi output from the PowerShell interface of the device when both the containers are running the n-body simulation. Here is an example output. There are three processes: the `nvidia-cuda-mps-server` process (type C) corresponds to the MPS service, and the `/tmp/nbody` processes (type M + C) correspond to the n-body workloads deployed by the modules.
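The three-process breakdown described above can be tallied directly from the process table. A minimal sketch, assuming rows shaped like the `nvidia-smi` process table; the sample rows, PIDs, and memory figures below are illustrative, not captured from a device:

```shell
# Illustrative nvidia-smi process-table rows (an assumption for the sketch):
# two /tmp/nbody workloads of type M+C and one MPS server of type C.
procs='0   N/A  N/A   1829   M+C   /tmp/nbody               109MiB
0   N/A  N/A   1830   M+C   /tmp/nbody               109MiB
0   N/A  N/A   1610   C     nvidia-cuda-mps-server    25MiB'

# Count the n-body workloads and the MPS server entries separately.
nbody=$(echo "$procs" | grep -c '/tmp/nbody')
mps=$(echo "$procs" | grep -c 'nvidia-cuda-mps-server')
echo "nbody workloads: $nbody, MPS server: $mps"
```

Seeing exactly one `nvidia-cuda-mps-server` entry alongside the two workloads is what distinguishes this MPS run from the earlier deployment without context-sharing, where only the two `/tmp/nbody` processes appeared.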