Commit 8f7677d

Merge pull request #125627 from changeworld/patch-143

Fix typo

2 parents 3689760 + f52ccf6

File tree: 1 file changed (+8 −8 lines)


articles/databox-online/azure-stack-edge-gpu-deploy-iot-edge-gpu-sharing.md

Lines changed: 8 additions & 8 deletions
@@ -21,7 +21,7 @@ Before you begin, make sure that:
 
 1. You've access to a client system with a [Supported operating system](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device). If using a Windows client, the system should run PowerShell 5.0 or later to access the device.
 
-1. Save the following deployment `json` on your local system. You'll use information from this file to run the IoT Edge deployment. This deployment is based on [Simple CUDA containers](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#running-simple-containers) that are publicly available from Nvidia.
+1. Save the following deployment `json` on your local system. You'll use information from this file to run the IoT Edge deployment. This deployment is based on [Simple CUDA containers](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#running-simple-containers) that are publicly available from NVIDIA.
 
 ```json
 {
@@ -118,7 +118,7 @@ The first step is to verify that your device is running required GPU driver and
 
 `Get-HcsGpuNvidiaSmi`
 
-1. In the Nvidia smi output, make a note of the GPU version and the CUDA version on your device. If you are running Azure Stack Edge 2102 software, this version would correspond to the following driver versions:
+1. In the NVIDIA smi output, make a note of the GPU version and the CUDA version on your device. If you are running Azure Stack Edge 2102 software, this version would correspond to the following driver versions:
 
 - GPU driver version: 460.32.03
 - CUDA version: 11.2
@@ -152,7 +152,7 @@ The first step is to verify that your device is running required GPU driver and
 [10.100.10.10]: PS>
 ```
 
-1. Keep this session open as you will use it to view the Nvidia smi output throughout the article.
+1. Keep this session open as you will use it to view the NVIDIA smi output throughout the article.
 
 
 ## Deploy without context-sharing
@@ -216,7 +216,7 @@ For detailed instructions, see [Connect to and manage a Kubernetes cluster via k
 
 ### Deploy modules via portal
 
-Deploy IoT Edge modules via the Azure portal. You'll deploy publicly available Nvidia CUDA sample modules that run n-body simulation.
+Deploy IoT Edge modules via the Azure portal. You'll deploy publicly available NVIDIA CUDA sample modules that run n-body simulation.
 
 1. Make sure that the IoT Edge service is running on your device.
 
@@ -316,7 +316,7 @@ Deploy IoT Edge modules via the Azure portal. You'll deploy publicly available N
 ```
 There are two pods, `cuda-sample1-97c494d7f-lnmns` and `cuda-sample2-d9f6c4688-2rld9` running on your device.
 
-1. While both the containers are running the n-body simulation, view the GPU utilization from the Nvidia smi output. Go to the PowerShell interface of the device and run `Get-HcsGpuNvidiaSmi`.
+1. While both the containers are running the n-body simulation, view the GPU utilization from the NVIDIA smi output. Go to the PowerShell interface of the device and run `Get-HcsGpuNvidiaSmi`.
 
 Here is an example output when both the containers are running the n-body simulation:
 
@@ -349,7 +349,7 @@ Deploy IoT Edge modules via the Azure portal. You'll deploy publicly available N
 ```
 As you can see, there are two containers running with n-body simulation on GPU 0. You can also view their corresponding memory usage.
 
-1. Once the simulation has completed, the Nvidia smi output will show that there are no processes running on the device.
+1. Once the simulation has completed, the NVIDIA smi output will show that there are no processes running on the device.
 
 ```powershell
 [10.100.10.10]: PS>Get-HcsGpuNvidiaSmi
@@ -460,7 +460,7 @@ You can now deploy the n-body simulation on two CUDA containers when MPS is runn
 Created nvidia-mps.service
 [10.100.10.10]: PS>
 ```
-1. Get the Nvidia smi output from the PowerShell interface of the device. You can see the `nvidia-cuda-mps-server` process or the MPS service is running on the device.
+1. Get the NVIDIA smi output from the PowerShell interface of the device. You can see the `nvidia-cuda-mps-server` process or the MPS service is running on the device.
 
 Here is an example output:
 
@@ -548,7 +548,7 @@ You can now deploy the n-body simulation on two CUDA containers when MPS is runn
 PS C:\WINDOWS\system32>
 ```
 
-1. Get the Nvidia smi output from the PowerShell interface of the device when both the containers are running the n-body simulation. Here is an example output. There are three processes, the `nvidia-cuda-mps-server` process (type C) corresponds to the MPS service and the `/tmp/nbody` processes (type M + C) correspond to the n-body workloads deployed by the modules.
+1. Get the NVIDIA smi output from the PowerShell interface of the device when both the containers are running the n-body simulation. Here is an example output. There are three processes, the `nvidia-cuda-mps-server` process (type C) corresponds to the MPS service and the `/tmp/nbody` processes (type M + C) correspond to the n-body workloads deployed by the modules.
 
 ```powershell
 [10.100.10.10]: PS>Get-HcsGpuNvidiaSmi
