articles/databox-online/azure-stack-edge-gpu-deploy-kubernetes-gpu-sharing.md (7 additions, 7 deletions)
@@ -23,7 +23,7 @@ Before you begin, make sure that:
1. You have created a namespace and a user. You have also granted user the access to this namespace. You have the kubeconfig file of this namespace installed on the client system that you'll use to access your device. For detailed instructions, see [Connect to and manage a Kubernetes cluster via kubectl on your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-create-kubernetes-cluster.md#configure-cluster-access-via-kubernetes-rbac).
-1. Save the following deployment `yaml` on your local system. You'll use this file to run Kubernetes deployment. This deployment is based on [Simple CUDA containers](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#running-simple-containers) that are publicly available from Nvidia.
+1. Save the following deployment `yaml` on your local system. You'll use this file to run Kubernetes deployment. This deployment is based on [Simple CUDA containers](https://docs.nvidia.com/cuda/wsl-user-guide/index.html#running-simple-containers) that are publicly available from NVIDIA.
```yml
apiVersion: batch/v1
@@ -81,7 +81,7 @@ The first step is to verify that your device is running required GPU driver and
Get-HcsGpuNvidiaSmi
```
-1. In the Nvidia smi output, make a note of the GPU version and the CUDA version on your device. If you are running Azure Stack Edge 2102 software, this version would correspond to the following driver versions:
+1. In the NVIDIA smi output, make a note of the GPU version and the CUDA version on your device. If you are running Azure Stack Edge 2102 software, this version would correspond to the following driver versions:
- GPU driver version: 460.32.03
- CUDA version: 11.2
@@ -115,7 +115,7 @@ The first step is to verify that your device is running required GPU driver and
[10.100.10.10]: PS>
```
-1. Keep this session open as you will use it to view the Nvidia smi output throughout the article.
+1. Keep this session open as you will use it to view the NVIDIA smi output throughout the article.
@@ -248,7 +248,7 @@ You'll run the first job to deploy an application on your device in the namespac
```
The output indicates that both the pods were successfully created by the job.
-1. While both the containers are running the n-body simulation, view the GPU utilization from the Nvidia smi output. Go to the PowerShell interface of the device and run `Get-HcsGpuNvidiaSmi`.
+1. While both the containers are running the n-body simulation, view the GPU utilization from the NVIDIA smi output. Go to the PowerShell interface of the device and run `Get-HcsGpuNvidiaSmi`.
Here is an example output when both the containers are running the n-body simulation:
@@ -342,7 +342,7 @@ You'll run the first job to deploy an application on your device in the namespac
= 1969.517 single-precision GFLOP/s at 20 flops per interaction
PS C:\WINDOWS\system32>
```
-1. There should be no processes running on the GPU at this time. You can verify this by viewing the GPU utilization using the Nvidia smi output.
+1. There should be no processes running on the GPU at this time. You can verify this by viewing the GPU utilization using the NVIDIA smi output.
```powershell
[10.100.10.10]: PS>Get-HcsGpuNvidiaSmi
@@ -473,7 +473,7 @@ You'll run the second job to deploy the n-body simulation on two CUDA containers
PS C:\WINDOWS\system32>
```
-1. While the simulation is running, you can view the Nvidia smi output. The output shows processes corresponding to the cuda containers (M + C type) with n-body simulation and the MPS service (C type) as running. All these processes share GPU 0.
+1. While the simulation is running, you can view the NVIDIA smi output. The output shows processes corresponding to the cuda containers (M + C type) with n-body simulation and the MPS service (C type) as running. All these processes share GPU 0.
```powershell
PS>Get-HcsGpuNvidiaSmi
@@ -540,7 +540,7 @@ You'll run the second job to deploy the n-body simulation on two CUDA containers
= 2164.987 single-precision GFLOP/s at 20 flops per interaction
PS C:\WINDOWS\system32>
```
-1. After the simulation is complete, you can view the Nvidia smi output again. Only the nvidia-cuda-mps-server process for the MPS service shows as running.
+1. After the simulation is complete, you can view the NVIDIA smi output again. Only the nvidia-cuda-mps-server process for the MPS service shows as running.
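
For context, the deployment `yaml` referenced in the first hunk is based on NVIDIA's publicly available simple CUDA containers. Below is a minimal sketch of what such a `batch/v1` Job manifest might look like; the Job and namespace names, image tag, command, arguments, and resource request are assumptions for illustration, not the article's exact manifest.

```yml
# Illustrative sketch only -- not the exact manifest from the article.
# Names, image tag, command, and arguments below are assumptions; the article's
# deployment is based on NVIDIA's publicly available simple CUDA container samples.
apiVersion: batch/v1
kind: Job
metadata:
  name: cuda-nbody-job           # hypothetical Job name
  namespace: mynamesp1           # hypothetical namespace configured for kubectl access
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: cuda-nbody
          image: nvcr.io/nvidia/k8s/cuda-sample:nbody   # assumed CUDA n-body sample image
          command: ["nbody"]                            # assumed sample binary name
          args: ["-gpu", "-benchmark"]
          resources:
            limits:
              nvidia.com/gpu: 1                         # request one GPU via the NVIDIA device plugin
```

Applying such a file with `kubectl apply -f <file> -n <namespace>` and checking `kubectl get pods -n <namespace>` matches the workflow the later hunks describe: pods created by the job run the n-body simulation, and their GPU utilization is visible in the `Get-HcsGpuNvidiaSmi` output on the device.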