| | | | | |X| |X| <- First row of the i+1 subarray from the previous iteration used for calculation
| | V | | ------------------------------------------------
------------------------------------------------
Bottom border -> |X| |X|
------------------------------------------------
```

### `01_jacobian_host_mpi_one-sided`
This program demonstrates the baseline implementation of the distributed Jacobian solver. This sample shows the basic idea of the algorithm, as well as how to implement the halo exchange required by the solver using MPI-3 one-sided primitives.

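A minimal sketch of such a halo exchange with MPI-3 one-sided primitives is shown below, assuming a 1D row decomposition. The `ROW_SIZE` constant, the grid layout, and the function name are illustrative assumptions for this sketch, not the sample's actual code.

```cpp
// Hypothetical sketch of the halo exchange with MPI-3 one-sided primitives,
// assuming a 1D row decomposition: each rank owns `rows` computed rows plus
// one halo row above (row 0) and one below (row rows + 1).
#include <mpi.h>
#include <cstddef>
#include <vector>

constexpr int ROW_SIZE = 1024;  // points per row (illustrative value)

void exchange_borders(std::vector<double>& grid, int rows, int rank, int size) {
    // Expose the whole local subarray (halo rows included) as an RMA window.
    MPI_Win win;
    MPI_Win_create(grid.data(), grid.size() * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank > 0) {
        // First computed row goes into the bottom halo row of the upper neighbor.
        MPI_Put(grid.data() + ROW_SIZE, ROW_SIZE, MPI_DOUBLE, rank - 1,
                (MPI_Aint)(rows + 1) * ROW_SIZE, ROW_SIZE, MPI_DOUBLE, win);
    }
    if (rank < size - 1) {
        // Last computed row goes into the top halo row of the lower neighbor.
        MPI_Put(grid.data() + (std::size_t)rows * ROW_SIZE, ROW_SIZE, MPI_DOUBLE,
                rank + 1, 0, ROW_SIZE, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);  // completes the puts on origin and target sides

    MPI_Win_free(&win);
}
```

In the actual solver the RMA window would typically be created once and reused across iterations; it is created inside the function here only to keep the sketch self-contained.
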
The solver is an iterative algorithm. Each iteration recalculates the border values first, then transfers them to the neighbor processes, where they are used in the next iteration. Each process recalculates its internal point values for the next iteration in parallel with this communication. After a number of iterations, the algorithm reports NORM values for validation purposes.

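For reference, the sketch below shows what the recalculation of the internal points can look like as a plain host loop, assuming the same (rows + 2) x `ROW_SIZE` layout as in the sketch above; the edge rows are assumed to be updated separately before the communication starts, as described above. Names and layout are illustrative assumptions.

```cpp
// Illustrative Jacobi update of the internal points of the local subarray.
// Rows 1 and `rows` (the edges) are assumed to be recalculated earlier, before
// the halo exchange, so this loop can run in parallel with the communication.
#include <cstddef>
#include <vector>

constexpr std::size_t ROW_SIZE = 1024;  // points per row (illustrative value)

void update_internal(const std::vector<double>& in, std::vector<double>& out,
                     std::size_t rows) {
    for (std::size_t i = 2; i < rows; ++i) {             // internal rows only
        for (std::size_t j = 1; j + 1 < ROW_SIZE; ++j) { // skip boundary columns
            out[i * ROW_SIZE + j] = 0.25 * (in[(i - 1) * ROW_SIZE + j] +
                                            in[(i + 1) * ROW_SIZE + j] +
                                            in[i * ROW_SIZE + j - 1] +
                                            in[i * ROW_SIZE + j + 1]);
        }
    }
}
```
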
```mermaid
sequenceDiagram
    participant APP as Application
    participant HC as Host compute
    participant COMM as Communication
    participant GC as GPU compute

    loop Solver: batch iterations
        loop Solver: single iteration
            APP ->>+ HC: Calculate values on the edges
            HC ->>- APP: Edge values
            APP ->>+ COMM: Transfer data to neighbours using MPI_Put
            APP ->>+ HC: Recalculate internal points
            HC ->> HC: Main compute loop
            HC ->>- APP: Updated internal points
            APP ->> COMM: RMA window synchronization
            COMM ->>- APP: RMA synchronization completion
        end
        APP ->>+ HC: Start compute of local norm
        HC ->>- APP: Local norm value
        APP ->>+ COMM: Collect global norm using MPI_Reduce
        COMM ->>- APP: Global norm value
    end
```
### `02_jacobian_device_mpi_one-sided_gpu_aware`
This program demonstrates how the same algorithm can be modified to add GPU offload capability. The program comes in two versions: OpenMP and SYCL. It illustrates how device memory can be passed directly to MPI one-sided primitives. In particular, device memory may be passed to the `MPI_Win_create` call to create an RMA window placed on the device. Aside from device RMA-window placement, device memory can also be passed to the `MPI_Put`/`MPI_Get` primitives as a target or origin buffer.

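As a rough illustration, the sketch below allocates both the window buffer and the origin buffer in device memory with SYCL and hands them directly to `MPI_Win_create` and `MPI_Put`. It assumes a GPU-aware Intel MPI configuration; the buffer size and the ring-neighbor choice are illustrative assumptions, not the sample's actual code.

```cpp
// Sketch: placing an RMA window in device memory and using a device buffer
// as the MPI_Put origin. Assumes a GPU-aware MPI setup; sizes are illustrative.
#include <mpi.h>
#include <sycl/sycl.hpp>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    sycl::queue q{sycl::gpu_selector_v};
    constexpr int n = 1024;

    // Device allocations: one exposed as the RMA window, one used as origin buffer.
    double* win_buf  = sycl::malloc_device<double>(n, q);
    double* edge_buf = sycl::malloc_device<double>(n, q);
    q.fill(edge_buf, double(rank), n).wait();

    // The RMA window is backed directly by device memory.
    MPI_Win win;
    MPI_Win_create(win_buf, n * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (size > 1) {
        int neighbor = (rank + 1) % size;
        // Device memory as the origin buffer of a one-sided transfer.
        MPI_Put(edge_buf, n, MPI_DOUBLE, neighbor, 0, n, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);

    MPI_Win_free(&win);
    sycl::free(win_buf, q);
    sycl::free(edge_buf, q);
    MPI_Finalize();
    return 0;
}
```

Whether an RMA window may live in device memory depends on the MPI library and its configuration, so treat this as a sketch of the idea rather than guaranteed-portable code.
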
```mermaid
sequenceDiagram
    participant APP as Application
    participant HC as Host compute
    participant GC as GPU compute
    participant COMM as Communication

    loop Solver: batch iterations
        loop Solver: single iteration
            APP ->>+ GC: Calculate values on the edges
            GC ->>- APP: Edge values
            APP ->>+ COMM: Transfer data to neighbours using MPI_Put
            APP ->>+ GC: Recalculate internal points
            GC ->> GC: Main compute loop
            GC ->>- APP: Updated internal points
            APP ->> COMM: RMA window synchronization
            COMM ->>- APP: RMA synchronization completion
        end
        APP ->>+ GC: Start compute of local norm
        GC ->>- APP: Local norm value
        APP ->>+ COMM: Collect global norm using MPI_Reduce
        COMM ->>- APP: Global norm value
    end
```
> **Note**: Only contiguous MPI datatypes are supported.

This program demonstrates how to initiate one-sided communications directly from the offloaded code. The Intel® MPI Library allows calls to some communication primitives directly from the offloaded code (SYCL or OpenMP). In contrast to the prior example, this one demonstrates the use of one-sided communications with notification (an extension of the MPI-4.1 standard).

To enable device-initiated communications, you must set an extra environment variable: `I_MPI_OFFLOAD_ONESIDED_DEVICE_INITIATED=1`.
```mermaid
sequenceDiagram
    participant APP as Application
    participant HC as Host compute
    participant GC as GPU compute
    participant COMM as Communication

    loop Solver: batch iterations
        APP ->>+ GC: Start fused kernel
        loop Solver: single iteration
            GC ->> GC: Calculate values on the edges
            GC ->>+ COMM: Transfer data to neighbours using MPI_Put_notify
            GC ->> GC: Recalculate internal points
            COMM -->>- GC: Notification from the remote rank
        end
        GC ->>- APP: Fused kernel completion
        APP ->>+ GC: Start compute of local norm
        GC ->>- APP: Local norm value
        APP ->>+ COMM: Collect global norm using MPI_Reduce
        COMM ->>- APP: Global norm value
    end
```
## Build the `Distributed Jacobian Solver SYCL/MPI` Sample
> **Note**: If you have not already done so, set up your CLI