# GPU sharing on your Azure Stack Edge Pro GPU device
A graphics processing unit (GPU) is a specialized processor designed to accelerate graphics rendering. GPUs can process many pieces of data simultaneously, making them useful for machine learning, video editing, and gaming applications. In addition to CPUs for general-purpose compute, your Azure Stack Edge Pro GPU device can contain one or two NVIDIA Tesla T4 GPUs for compute-intensive workloads such as hardware-accelerated inferencing. For more information, see [NVIDIA's Tesla T4 GPU](https://www.nvidia.com/en-us/data-center/tesla-t4/).

## About GPU sharing
If you are deploying containerized workloads, a GPU can be shared in more than one way.

- With this method, you can specify one GPU, both GPUs, or no GPU. It is not possible to specify fractional usage.
- Multiple modules can map to one GPU but the same module cannot be mapped to more than one GPU.
- With the NVIDIA SMI output, you can see the overall GPU utilization, including the memory utilization.

For more information, see how to [Deploy an IoT Edge module that uses GPU](azure-stack-edge-gpu-configure-gpu-modules.md) on your device.
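
As an illustrative sketch only: one common way to map a module to a GPU in an IoT Edge deployment manifest is through the NVIDIA container runtime's `NVIDIA_VISIBLE_DEVICES` environment variable. The module name and image below are placeholders, and the exact settings for your device are in the linked article.

```json
{
  "modules": {
    "gpuModule": {
      "version": "1.0",
      "type": "docker",
      "status": "running",
      "restartPolicy": "always",
      "env": {
        "NVIDIA_VISIBLE_DEVICES": {
          "value": "0"
        }
      },
      "settings": {
        "image": "<your-gpu-container-image>",
        "createOptions": "{}"
      }
    }
  }
}
```

Under this assumption, a value of `0` maps the module to the first GPU, `0,1` to both GPUs, and `none` to no GPU. Multiple modules can use the same index to share a single GPU, consistent with the mapping rules listed above.
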
- The second approach requires you to enable the Multi-Process Service (MPS) on your NVIDIA GPUs. MPS is a runtime service that lets multiple CUDA processes run concurrently on a single shared GPU. MPS allows overlapping of kernel and memcopy operations from different processes on the GPU to achieve maximum utilization. For more information, see [Multi-Process Service](https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf).
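
A rough sketch of the second approach, assuming MPS is turned on from the device's PowerShell interface as described in the linked article; the cmdlet name below is an assumption, so confirm it against that article before use.

```powershell
# Connect to the PowerShell interface of the device first (see the linked MPS article).
# Enable the Multi-Process Service on the GPU; the cmdlet name is assumed, not verified.
Start-HcsGpuMPS

# Then deploy (or restart) the GPU workloads so they run concurrently against the shared GPU through MPS.
```
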
Consider the following caveats when using this approach:
## GPU utilization
When you share the GPU across containerized workloads deployed on your device, you can use the NVIDIA System Management Interface (nvidia-smi). The nvidia-smi tool is a command-line utility that helps you manage and monitor NVIDIA GPU devices. For more information, see [NVIDIA System Management Interface](https://developer.nvidia.com/nvidia-system-management-interface).

To view GPU usage, first connect to the PowerShell interface of the device. Run the `Get-HcsNvidiaSmi` command and view the NVIDIA SMI output. You can also view how the GPU utilization changes by enabling MPS and then deploying multiple workloads on the device. For more information, see [Enable Multi-Process Service](azure-stack-edge-gpu-connect-powershell-interface.md#enable-multi-process-service-mps).
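
A minimal sketch of that workflow from the device's PowerShell interface; only `Get-HcsNvidiaSmi` is named in this article, and the before/after comparison simply repeats the same command once MPS is enabled and the workloads are deployed.

```powershell
# View the NVIDIA SMI output, including overall GPU and memory utilization.
Get-HcsNvidiaSmi

# Enable MPS and deploy multiple GPU workloads (see the linked articles), then
# rerun the command to compare how the utilization changes.
Get-HcsNvidiaSmi
```
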