
Commit 3a3507f

Fix typo
Nvidia -> NVIDIA
1 parent 13b38f3 commit 3a3507f

File tree

1 file changed (+5, -5 lines)


articles/databox-online/azure-stack-edge-gpu-sharing.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -12,7 +12,7 @@ ms.author: alkohli
 
 # GPU sharing on your Azure Stack Edge Pro GPU device
 
-Graphics processing unit (GPU) is a specialized processor designed to accelerate graphics rendering. GPUs can process many pieces of data simultaneously, making them useful for machine learning, video editing, and gaming applications. In addition to CPU for general purpose compute, your Azure Stack Edge Pro GPU devices can contain one or two Nvidia Tesla T4 GPUs for compute-intensive workloads such as hardware accelerated inferencing. For more information, see [Nvidia's Tesla T4 GPU](https://www.nvidia.com/en-us/data-center/tesla-t4/).
+Graphics processing unit (GPU) is a specialized processor designed to accelerate graphics rendering. GPUs can process many pieces of data simultaneously, making them useful for machine learning, video editing, and gaming applications. In addition to CPU for general purpose compute, your Azure Stack Edge Pro GPU devices can contain one or two NVIDIA Tesla T4 GPUs for compute-intensive workloads such as hardware accelerated inferencing. For more information, see [NVIDIA's Tesla T4 GPU](https://www.nvidia.com/en-us/data-center/tesla-t4/).
 
 
 ## About GPU sharing
@@ -32,11 +32,11 @@ If you are deploying containerized workloads, a GPU can be shared in more than o
 
 - You can specify one or both or no GPUs with this method. It is not possible to specify fractional usage.
 - Multiple modules can map to one GPU but the same module cannot be mapped to more than one GPU.
-- With the Nvidia SMI output, you can see the overall GPU utilization including the memory utilization.
+- With the NVIDIA SMI output, you can see the overall GPU utilization including the memory utilization.
 
 For more information, see how to [Deploy an IoT Edge module that uses GPU](azure-stack-edge-gpu-configure-gpu-modules.md) on your device.
 
-- The second approach requires you to enable the Multi-Process Service on your Nvidia GPUs. MPS is a runtime service that lets multiple processes using CUDA to run concurrently on a single shared GPU. MPS allows overlapping of kernel and memcopy operations from different processes on the GPU to achieve maximum utilization. For more information, see [Multi-Process Service](https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf).
+- The second approach requires you to enable the Multi-Process Service on your NVIDIA GPUs. MPS is a runtime service that lets multiple processes using CUDA to run concurrently on a single shared GPU. MPS allows overlapping of kernel and memcopy operations from different processes on the GPU to achieve maximum utilization. For more information, see [Multi-Process Service](https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf).
 
 Consider the following caveats when using this approach:
 
@@ -53,9 +53,9 @@ If you are deploying containerized workloads, a GPU can be shared in more than o
 
 ## GPU utilization
 
-When you share GPU on containerized workloads deployed on your device, you can use the Nvidia System Management Interface (nvidia-smi). Nvidia-smi is a command-line utility that helps you manage and monitor Nvidia GPU devices. For more information, see [Nvidia System Management Interface](https://developer.nvidia.com/nvidia-system-management-interface).
+When you share GPU on containerized workloads deployed on your device, you can use the NVIDIA System Management Interface (nvidia-smi). Nvidia-smi is a command-line utility that helps you manage and monitor NVIDIA GPU devices. For more information, see [NVIDIA System Management Interface](https://developer.nvidia.com/nvidia-system-management-interface).
 
-To view GPU usage, first connect to the PowerShell interface of the device. Run the `Get-HcsNvidiaSmi` command and view the Nvidia SMI output. You can also view how the GPU utilization changes by enabling MPS and then deploying multiple workloads on the device. For more information, see [Enable Multi-Process Service](azure-stack-edge-gpu-connect-powershell-interface.md#enable-multi-process-service-mps).
+To view GPU usage, first connect to the PowerShell interface of the device. Run the `Get-HcsNvidiaSmi` command and view the NVIDIA SMI output. You can also view how the GPU utilization changes by enabling MPS and then deploying multiple workloads on the device. For more information, see [Enable Multi-Process Service](azure-stack-edge-gpu-connect-powershell-interface.md#enable-multi-process-service-mps).
 
 
 ## Next steps
```
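
For context, the `Get-HcsNvidiaSmi` workflow mentioned in the last hunk can be sketched roughly as follows. This is an illustrative sketch only, not part of the commit: the device IP address, the user account, and the `Minishell` session configuration name are assumptions based on the usual Azure Stack Edge remote PowerShell workflow described in the linked articles.

```powershell
# Illustrative sketch (assumptions: device IP, user account, and the Minishell
# configuration name follow the usual Azure Stack Edge remote PowerShell pattern).

# Connect to the PowerShell interface of the device; you are prompted for the device password.
Enter-PSSession -ComputerName "192.168.100.10" -ConfigurationName Minishell -Credential ~\EdgeUser

# View the NVIDIA SMI output: overall GPU utilization, including memory utilization,
# for the one or two Tesla T4 GPUs on the device.
Get-HcsNvidiaSmi

# Optionally enable MPS, redeploy the workloads, and run Get-HcsNvidiaSmi again to
# compare utilization. Start-HcsGpuMPS is the cmdlet referenced by the linked MPS
# article; treat its availability on your device's software version as an assumption.
Start-HcsGpuMPS
```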
