joss/paper.bib (+2 -2)
@@ -29,7 +29,7 @@ @article{Mortensen:2019
   number = {36},
   pages = {1340},
   author = {Mortensen, Mikael and Dalcin, Lisandro and Keyes, David Elliot},
-  title = {mpi4py-fft: Parallel Fast Fourier Transforms with MPI for Python},
+  title = {mpi4py-fft: Parallel Fast {F}ourier Transforms with MPI for {P}ython},
   journal = {Journal of Open Source Software}
 }
@@ -42,7 +42,7 @@ @article{Morgan:2024
   number = {96},
   pages = {6415},
   author = {Joanna Piper Morgan and Ilham Variansyah and Samuel L. Pasmann and Kayla B. Clements and Braxton Cuneo and Alexander Mote and Charles Goodman and Caleb Shaw and Jordan Northrop and Rohan Pankaj and Ethan Lame and Benjamin Whewell and Ryan G. McClarren and Todd S. Palmer and Lizhong Chen and Dmitriy Y. Anistratov and C. T. Kelley and Camille J. Palmer and Kyle E. Niemeyer},
-  title = {Monte Carlo / Dynamic Code (MC/DC): An accelerated Python package for fully transient neutron transport and rapid methods development},
+  title = {{M}onte {C}arlo / Dynamic Code (MC/DC): An accelerated {P}ython package for fully transient neutron transport and rapid methods development},
joss/paper.md (+7 -7)
@@ -39,18 +39,18 @@ in scientific inverse problems can be decomposed into a series of computational
 
 When addressing distributed inverse problems, we identify three distinct families of problems:
 
-**1. Fully distributed models and data**: Both model and data are split across nodes, with each node processing its own portion of the model and data. This leads to minimal
+1. **Fully distributed models and data**: Both model and data are split across nodes, with each node processing its own portion of the model and data. This leads to minimal
 communication, mainly when performing dot products in the solver or in the regularization terms.
 
-**2. Distributed data, model available on all nodes**: Data is distributed across nodes, whilst the model is available on all nodes.
+2. **Distributed data, model available on all nodes**: Data is distributed across nodes, whilst the model is available on all nodes.
 Communication happens during the adjoint pass to sum models and in the solver for data vector operations.
 
-**3. Model and data available on all nodes**: All nodes have identical copies of the data and model. Communication only happens within
+3. **Model and data available on all nodes**: All nodes have identical copies of the data and model. Communication only happens within
 the operator, with no communication in the solver needed.
 
 MPI for Python (mpi4py [@Dalcin:2021]) provides Python bindings for the MPI standard, allowing applications to leverage multiple
 processors. Projects like mpi4py-fft [@Mortensen:2019], mcdc [@Morgan:2024], and mpi4jax [@mpi4jax]
-utilize mpi4py to provide distributed computing capabilities. Similarly, PyLops-MPI, which is built on top of PyLops [@Ravasi:2020] leverages mpi4py to solve large-scale problems in a distributed fashion.
+utilize mpi4py to provide distributed computing capabilities. Similarly, PyLops-MPI, which is built on top of PyLops [@Ravasi:2020], leverages mpi4py to solve large-scale problems in a distributed fashion.
 Its intuitive API provides functionality to scatter and broadcast data and model vectors across nodes and allows various mathematical operations (e.g., summation, subtraction, norms)
 to be performed. Additionally, a suite of MPI-powered linear operators and solvers is offered, and its flexible design eases the integration of custom operators and solvers.
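The scatter/broadcast functionality this hunk describes can be made concrete with a short sketch. The snippet below assumes the `DistributedArray` and `Partition` names from the PyLops-MPI documentation (exact signatures may vary between versions) and would be launched with, e.g., `mpirun -n 4 python demo.py`:

```python
# Minimal sketch of DistributedArray usage, assuming the documented
# pylops_mpi API; run under MPI, e.g. `mpirun -n 4 python demo.py`.
import numpy as np
from pylops_mpi import DistributedArray, Partition

# Families 1-2: the vector is split across ranks (scatter)
x = DistributedArray(global_shape=100, partition=Partition.SCATTER)
x[:] = np.ones(x.local_shape)  # each rank fills only its local chunk

# Family 3: every rank holds an identical copy (broadcast)
w = DistributedArray(global_shape=100, partition=Partition.BROADCAST)
w[:] = np.arange(100, dtype=float)

y = x + x            # element-wise arithmetic, no communication required
n2 = x.norm()        # global 2-norm, computed with an MPI reduction
xfull = x.asarray()  # gather the full global vector on every rank
```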
@@ -59,7 +59,7 @@ to be performed. Additionally, a suite of MPI-powered linear operators and solve
 PyLops-MPI is designed to tackle large-scale linear inverse problems that are difficult to solve using a single process
 (due to either extremely high computational cost or memory requirements).
 
-[Fig. 1 image: main components of PyLops-MPI]
+[Fig. 1 image: main components of PyLops-MPI, updated]
 
 Fig. 1 illustrates the main components of the library, emphasizing the relationship between the DistributedArray class, stacked operators, and MPI-powered solvers.
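A minimal sketch of the pipeline Fig. 1 describes (a `DistributedArray` pushed through a stacked operator into an MPI-powered solver) is shown below; `MPIBlockDiag` and `cgls` follow the names used in the PyLops-MPI documentation, but the exact call signatures here are assumptions:

```python
# Hedged sketch of the Fig. 1 pipeline: one local operator per rank combined
# into a distributed block-diagonal operator, inverted with MPI-aware CGLS.
# MPIBlockDiag and cgls follow the documented names; signatures are assumptions.
import numpy as np
from mpi4py import MPI
from pylops import MatrixMult
from pylops_mpi import DistributedArray, MPIBlockDiag
from pylops_mpi.optimization.basic import cgls

rank = MPI.COMM_WORLD.Get_rank()
nranks = MPI.COMM_WORLD.Get_size()
rng = np.random.default_rng(42 + rank)

A = MatrixMult(rng.standard_normal((20, 10)))   # this rank's local operator
Op = MPIBlockDiag(ops=[A])                      # block-diagonal over all ranks

x = DistributedArray(global_shape=10 * nranks)  # model, scattered across ranks
x[:] = rng.standard_normal(x.local_shape)
y = Op @ x                                      # forward pass: local matvecs only

xinv = cgls(Op, y, niter=10)[0]                 # distributed least-squares solve
```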
@@ -71,8 +71,8 @@ NumPy [@Harris:2020] or CuPy [@cupy] arrays across multiple processes. It also s
 ## HStack, VStack, BlockDiag Operators
 
 PyLops facilitates the combination of multiple linear operators via horizontal, vertical, or diagonal stacking. PyLops-MPI provides
-distributed versions of these operations-e.g., `pylops_mpi.MPIBlockDiag` applies different operators in parallel on separate portions of the model
-and data.`pylops_mpi.MPIVStack`applies multiple operators in parallel to the whole model, with its adjoint applies the adjoint of each individual operator to portions of the data vector and sums the individual output. `pylops_mpi.MPIHStack` is the adjoint of MPIVStack.
+distributed versions of these operations. Examples include `pylops_mpi.MPIBlockDiag`, which applies different operators in parallel on separate portions of the model
+and data, `pylops_mpi.MPIVStack`, which applies multiple operators in parallel to the whole model, with its adjoint applying the adjoint of each individual operator to portions of the data vector and summing the individual outputs, and `pylops_mpi.MPIHStack`, which is the adjoint of MPIVStack.
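The forward/adjoint semantics of `MPIVStack` described in this hunk can be sketched as follows; the per-rank operator scaling and the `Partition.BROADCAST` requirement for the model are assumptions based on the PyLops-MPI documentation:

```python
# Hedged sketch of MPIVStack semantics: the forward applies each rank's
# operator to the full broadcast model; the adjoint back-projects each rank's
# data chunk and sums the per-rank contributions. Names assume pylops_mpi docs.
import numpy as np
from mpi4py import MPI
from pylops import FirstDerivative
from pylops_mpi import DistributedArray, MPIVStack, Partition

rank = MPI.COMM_WORLD.Get_rank()
n = 50

Dop = (rank + 1) * FirstDerivative(n)  # a differently scaled operator per rank
VOp = MPIVStack(ops=[Dop])

x = DistributedArray(global_shape=n, partition=Partition.BROADCAST)
x[:] = np.arange(n, dtype=float)       # identical model on every rank

y = VOp @ x       # stacked data, scattered across ranks
xadj = VOp.H @ y  # adjoint: per-rank back-projections summed across ranks
```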