The `Distributed Jacobian Solver SYCL/MPI` sample demonstrates the use of GPU-aware MPI-3 one-sided communications available in the Intel® MPI Library.
For more information, see the Intel® MPI Library Documentation.
## Purpose
The sample demonstrates an actual use case (a Jacobi solver) for MPI-3 one-sided communications, which allow the compute kernel and communication to overlap. The sample illustrates how to use host- and device-initiated one-sided communication with SYCL kernels.
## Prerequisites
| Optimized for | Description
|:---           |:---
| OS            | Linux*
| Hardware      | 4th Generation Intel® Xeon® Scalable processors <br> Intel® Data Center GPU Max Series
| Software      | Intel® MPI Library 2021.11
## Key Implementation Details
This sample implements a well-known distributed 2D Jacobi solver with 1D data distribution. The sample uses Intel® MPI [GPU Support](https://www.intel.com/content/www/us/en/docs/mpi-library/developer-reference-linux/current/gpu-support.html).
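For reference, a single Jacobi sweep replaces every interior point with the average of its four neighbors. The snippet below is an illustrative sketch with hypothetical names (`jacobi_sweep`, `A_new`); the sample's exact stencil, coefficients, and boundary handling may differ.

```cpp
// Typical 2D Jacobi sweep over the interior points of a (Ny + 2) x (Nx + 2)
// grid stored row-major. Illustrative sketch, not the sample's actual code.
#include <vector>

void jacobi_sweep(const std::vector<double> &A, std::vector<double> &A_new,
                  int Nx, int Ny) {
    const int row = Nx + 2;  // padded row length (includes boundary columns)
    for (int i = 1; i <= Ny; ++i) {
        for (int j = 1; j <= Nx; ++j) {
            // Each interior point becomes the average of its four neighbors.
            A_new[i * row + j] =
                0.25 * (A[(i - 1) * row + j] + A[(i + 1) * row + j] +
                        A[i * row + j - 1] + A[i * row + j + 1]);
        }
    }
}
```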
The sample has three variants demonstrating different approaches to the Jacobi solver.
### `Data layout description`
The data layout is a 2D grid of size (Nx+2) x (Ny+2), distributed across MPI processes along the Y-axis.
The first and last rows/columns are constant and used for boundary conditions.
Each process handles an Nx x (Ny/comm_size) subarray. During each iteration, the bottom border row of rank i's subarray and the first row of rank i+1's subarray (from the previous iteration) are exchanged and used as halo rows in the calculation.
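A minimal sketch of this decomposition is shown below, using hypothetical names such as `Ny_local`; the sample's actual allocation code may differ.

```cpp
// Sketch of the 1D decomposition along the Y-axis described above.
// Hypothetical names; not the sample's actual code.
#include <mpi.h>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, comm_size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_size);

    const int Nx = 1024, Ny = 1024;       // global interior grid size
    const int Ny_local = Ny / comm_size;  // interior rows owned by this rank

    // Rows 0 and Ny_local + 1 are halo/boundary rows; columns 0 and Nx + 1
    // hold the constant boundary values.
    std::vector<double> A((Ny_local + 2) * (Nx + 2), 0.0);

    MPI_Finalize();
    return 0;
}
```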
### `01_jacobian_host_mpi_one-sided`
This program demonstrates a baseline implementation of the distributed Jacobi solver. In this sample you will see the basic idea of the algorithm, as well as how to implement the halo exchange using the MPI-3 one-sided primitives required for this solver.
The solver is an iterative algorithm: each iteration recalculates the border values first, then transfers them to the neighbor processes, where they are used in the next iteration. Each process recalculates the internal point values for the next iteration in parallel with this communication. After a number of iterations, the algorithm reports norm values for validation purposes.
```mermaid
sequenceDiagram
    participant APP as Application
    participant HC as Host compute
    participant COMM as Communication
    participant GC as GPU compute

    loop Solver: batch iterations
        loop Solver: single iteration
            APP ->>+ HC: Calculate values on the edges
            HC ->>- APP: edge values
            APP ->>+ COMM: transfer data to neighbours using MPI_Put
            APP ->>+ HC: Recalculate internal points
            HC ->> HC: Main compute loop
            HC ->>- APP: Updated internal points
            APP ->> COMM: RMA window synchronization
            COMM ->>- APP: RMA synchronization completion
        end
        APP ->>+ HC: start compute of local norm
        HC ->>- APP: local norm value
        APP ->>+ COMM: Collect global norm using MPI_Reduce
        COMM ->>- APP: global norm value
    end
```
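The sketch below illustrates two of the MPI steps from the diagram above: the `MPI_Put`-based halo exchange into neighboring ranks' RMA windows and the `MPI_Reduce`-based global norm. It is a simplified illustration with hypothetical names that uses `MPI_Win_fence` for the window synchronization; the sample's actual synchronization scheme and window layout may differ.

```cpp
// Simplified halo exchange and norm reduction with MPI-3 one-sided primitives.
// Hypothetical sketch: assumes `win` exposes each rank's (Ny_local + 2) x (Nx + 2)
// subarray `A`, created elsewhere with MPI_Win_create.
#include <mpi.h>
#include <cmath>
#include <vector>

void exchange_halos(const std::vector<double> &A, int Nx, int Ny_local,
                    int rank, int comm_size, MPI_Win win) {
    const int row = Nx + 2;
    MPI_Win_fence(0, win);
    if (rank > 0)  // my first interior row -> upper neighbor's bottom halo row
        MPI_Put(A.data() + row, row, MPI_DOUBLE, rank - 1,
                (MPI_Aint)(Ny_local + 1) * row, row, MPI_DOUBLE, win);
    if (rank < comm_size - 1)  // my last interior row -> lower neighbor's top halo row
        MPI_Put(A.data() + Ny_local * row, row, MPI_DOUBLE, rank + 1,
                0, row, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);  // complete the RMA epoch before the next iteration
}

double global_norm(const std::vector<double> &A, int Nx, int Ny_local) {
    const int row = Nx + 2;
    double local = 0.0, global = 0.0;
    for (int i = 1; i <= Ny_local; ++i)
        for (int j = 1; j <= Nx; ++j)
            local += A[i * row + j] * A[i * row + j];
    // Collect the global norm, as in the "Collect global norm" step above.
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    return std::sqrt(global);
}
```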
### `02_jacobian_device_mpi_one-sided_gpu_aware`
This program demonstrates how the same algorithm can be modified to add GPU offload capability. The program comes in two versions: OpenMP and SYCL. The program illustrates how device memory can be passed directly to MPI one-sided primitives. In particular, device memory can be passed to the `MPI_Win_create` call to create an RMA window placed on a device. Aside from device RMA-window placement, device memory can also be passed to the `MPI_Put`/`MPI_Get` primitives as a target or origin buffer.
```mermaid
sequenceDiagram
    participant APP as Application
    participant HC as Host compute
    participant GC as GPU compute
    participant COMM as Communication

    loop Solver: batch iterations
        loop Solver: single iteration
            APP ->>+ GC: Calculate values on the edges
            GC ->>- APP: edge values
            APP ->>+ COMM: transfer data to neighbours using MPI_Put
            APP ->>+ GC: Recalculate internal points
            GC ->> GC: Main compute loop
            GC ->>- APP: Updated internal points
            APP ->> COMM: RMA window synchronization
            COMM ->>- APP: RMA synchronization completion
        end
        APP ->>+ GC: start compute of local norm
        GC ->>- APP: local norm value
        APP ->>+ COMM: Collect global norm using MPI_Reduce
        COMM ->>- APP: global norm value
    end
```
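As a hedged sketch of this GPU-aware path (hypothetical names; not the sample's actual code), a SYCL device USM allocation can be handed straight to `MPI_Win_create`, and the same pointer can later serve as an `MPI_Put`/`MPI_Get` origin or target buffer. It assumes GPU buffer support is enabled in the Intel® MPI Library (for example, `I_MPI_OFFLOAD=1`).

```cpp
// Sketch of placing the RMA window in device memory (GPU-aware MPI).
// Hypothetical names; assumes Intel MPI Library GPU buffer support is enabled.
#include <mpi.h>
#include <sycl/sycl.hpp>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, comm_size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_size);

    sycl::queue q{sycl::gpu_selector_v};
    const int Nx = 1024, Ny_local = 1024 / comm_size;
    const size_t count = (size_t)(Ny_local + 2) * (Nx + 2);

    // Device USM allocation; the raw device pointer is passed to MPI directly.
    double *A = sycl::malloc_device<double>(count, q);
    q.fill(A, 0.0, count).wait();

    // The RMA window itself lives in device memory.
    MPI_Win win;
    MPI_Win_create(A, count * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    // ... SYCL kernels update A; MPI_Put/MPI_Get move border rows between
    //     the device-resident windows, as in the host variant ...

    MPI_Win_free(&win);
    sycl::free(A, q);
    MPI_Finalize();
    return 0;
}
```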
> **Note**: Only contiguous MPI datatypes are supported.
Intel® MPI Library 2021.13 is the minimum version required to run this sample.
Intel® MPI Library 2021.14 or later is the recommended version to run this sample.
---
This program demonstrates how to initiate one-sided communications directly from the offloaded code. The Intel® MPI Library allows some communication primitives to be called directly from the offloaded code (SYCL or OpenMP). In contrast to the prior example, this one demonstrates the usage of one-sided communications with notification (an extension of the MPI 4.1 standard).
To enable device-initiated communications, you must set an extra environment variable: `I_MPI_OFFLOAD_ONESIDED_DEVICE_INITIATED=1`.
```mermaid
sequenceDiagram
    participant APP as Application
    participant HC as Host compute
    participant GC as GPU compute
    participant COMM as Communication

    loop Solver: batch iterations
        APP ->>+ GC: Start fused kernel
        loop Solver: single iteration
            GC ->> GC: Calculate values on the edges
            GC ->>+ COMM: transfer data to neighbours using MPI_Put_notify
            GC ->> GC: Recalculate internal points
            COMM -->>- GC: notification from the remote rank
        end
        GC ->>- APP: Fused kernel completion
        APP ->>+ GC: start compute of local norm
        GC ->>- APP: local norm value
        APP ->>+ COMM: Collect global norm using MPI_Reduce
        COMM ->>- APP: global norm value
    end
```
## Build the `Distributed Jacobian Solver SYCL/MPI` Sample
> **Note**: If you have not already done so, set up your CLI environment by sourcing the `setvars` script in the root of your oneAPI installation.
If you receive an error message, troubleshoot the problem using the Diagnostics Utility for Intel® oneAPI Toolkits.
Device-initiated communications require that you set an extra environment variable: `I_MPI_OFFLOAD_ONESIDED_DEVICE_INITIATED=1`.
If everything worked, the Jacobi solver started an iterative computation for a defined number of iterations. By default, the sample reports norm values after every 10 computation iterations and reports the overall solver time at the end.