
Commit 98a6c78

Merge pull request #125629 from changeworld/patch-145
Fix typo
2 parents 51fd7ce + 4b17f43 commit 98a6c78

1 file changed: +7 −7 lines changed

articles/databox-online/azure-stack-edge-gpu-deploy-kubernetes-gpu-sharing.md

Lines changed: 7 additions & 7 deletions
@@ -23,7 +23,7 @@ Before you begin, make sure that:
 
 1. You have created a namespace and a user. You have also granted user the access to this namespace. You have the kubeconfig file of this namespace installed on the client system that you'll use to access your device. For detailed instructions, see [Connect to and manage a Kubernetes cluster via kubectl on your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-create-kubernetes-cluster.md#configure-cluster-access-via-kubernetes-rbac).
 
-1. Save the following deployment `yaml` on your local system. You'll use this file to run Kubernetes deployment. This deployment is based on [Simple CUDA containers](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#running-simple-containers) that are publicly available from Nvidia.
+1. Save the following deployment `yaml` on your local system. You'll use this file to run Kubernetes deployment. This deployment is based on [Simple CUDA containers](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#running-simple-containers) that are publicly available from NVIDIA.
 
 ```yml
 apiVersion: batch/v1
@@ -81,7 +81,7 @@ The first step is to verify that your device is running required GPU driver and
 Get-HcsGpuNvidiaSmi
 ```
 
-1. In the Nvidia smi output, make a note of the GPU version and the CUDA version on your device. If you are running Azure Stack Edge 2102 software, this version would correspond to the following driver versions:
+1. In the NVIDIA smi output, make a note of the GPU version and the CUDA version on your device. If you are running Azure Stack Edge 2102 software, this version would correspond to the following driver versions:
 
 - GPU driver version: 460.32.03
 - CUDA version: 11.2
@@ -115,7 +115,7 @@ The first step is to verify that your device is running required GPU driver and
 [10.100.10.10]: PS>
 ```
 
-1. Keep this session open as you will use it to view the Nvidia smi output throughout the article.
+1. Keep this session open as you will use it to view the NVIDIA smi output throughout the article.
 
 
 
@@ -248,7 +248,7 @@ You'll run the first job to deploy an application on your device in the namespac
 ```
 The output indicates that both the pods were successfully created by the job.
 
-1. While both the containers are running the n-body simulation, view the GPU utilization from the Nvidia smi output. Go to the PowerShell interface of the device and run `Get-HcsGpuNvidiaSmi`.
+1. While both the containers are running the n-body simulation, view the GPU utilization from the NVIDIA smi output. Go to the PowerShell interface of the device and run `Get-HcsGpuNvidiaSmi`.
 
 Here is an example output when both the containers are running the n-body simulation:
 
@@ -342,7 +342,7 @@ You'll run the first job to deploy an application on your device in the namespac
 = 1969.517 single-precision GFLOP/s at 20 flops per interaction
 PS C:\WINDOWS\system32>
 ```
-1. There should be no processes running on the GPU at this time. You can verify this by viewing the GPU utilization using the Nvidia smi output.
+1. There should be no processes running on the GPU at this time. You can verify this by viewing the GPU utilization using the NVIDIA smi output.
 
 ```powershell
 [10.100.10.10]: PS>Get-HcsGpuNvidiaSmi
@@ -473,7 +473,7 @@ You'll run the second job to deploy the n-body simulation on two CUDA containers
 PS C:\WINDOWS\system32>
 ```
 
-1. While the simulation is running, you can view the Nvidia smi output. The output shows processes corresponding to the cuda containers (M + C type) with n-body simulation and the MPS service (C type) as running. All these processes share GPU 0.
+1. While the simulation is running, you can view the NVIDIA smi output. The output shows processes corresponding to the cuda containers (M + C type) with n-body simulation and the MPS service (C type) as running. All these processes share GPU 0.
 
 ```powershell
 PS>Get-HcsGpuNvidiaSmi
@@ -540,7 +540,7 @@ You'll run the second job to deploy the n-body simulation on two CUDA containers
 = 2164.987 single-precision GFLOP/s at 20 flops per interaction
 PS C:\WINDOWS\system32>
 ```
-1. After the simulation is complete, you can view the Nvidia smi output again. Only the nvidia-cuda-mps-server process for the MPS service shows as running.
+1. After the simulation is complete, you can view the NVIDIA smi output again. Only the nvidia-cuda-mps-server process for the MPS service shows as running.
 
 ```powershell
 PS>Get-HcsGpuNvidiaSmi
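
For orientation, the hunks above come from a walkthrough in which a deployment `yaml` runs NVIDIA's publicly available CUDA n-body sample and multiple containers share GPU 0. The sketch below is a minimal, illustrative Job of that kind, not the article's actual manifest: the job name, namespace, image tag, and arguments are assumptions.

```yml
# Minimal sketch of a CUDA n-body Job, assuming the public NVIDIA sample image.
# Illustrative only: name, namespace, image tag, and args are not taken from the article.
apiVersion: batch/v1
kind: Job
metadata:
  name: cuda-nbody-sample        # hypothetical job name
  namespace: mynamesp1           # hypothetical namespace with Kubernetes RBAC access
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: cuda-nbody
          # Publicly available CUDA n-body benchmark container from NVIDIA's registry.
          image: nvcr.io/nvidia/k8s/cuda-sample:nbody
          args: ["nbody", "-gpu", "-benchmark"]
          env:
            # Pinning every container to GPU 0 is one way to let several
            # containers share the same physical GPU, as in the walkthrough.
            - name: NVIDIA_VISIBLE_DEVICES
              value: "0"
```

Running two such Jobs against the same device, both pinned to GPU 0, is the kind of setup whose shared-GPU `Get-HcsGpuNvidiaSmi` output the later hunks describe.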
