
Commit 8ad515d

minor changes in paper.md
1 parent d92f48c commit 8ad515d

1 file changed: +7 -7 lines changed

joss/paper.md

Lines changed: 7 additions & 7 deletions
@@ -39,18 +39,18 @@ in scientific inverse problems can be decomposed into a series of computational
 
 When addressing distributed inverse problems, we identify three distinct families of problems:
 
-- **1. Fully distributed models and data**: Both model and data are split across nodes, with each node processing its own portion of the model and data. This leads to minimal
+1. **Fully distributed models and data**: Both model and data are split across nodes, with each node processing its own portion of the model and data. This leads to minimal
 communication, mainly when performing dot products in the solver or in the regularization terms.
 
-- **2. Distributed data, model available on all nodes**: Data is distributed across nodes, whilst the model is available on all nodes.
+2. **Distributed data, model available on all nodes**: Data is distributed across nodes, whilst the model is available on all nodes.
 Communication happens during the adjoint pass to sum models and in the solver for data vector operations.
 
-- **3. Model and data available on all nodes**: All nodes have identical copies of the data and model. Communication only happens within
+3. **Model and data available on all nodes**: All nodes have identical copies of the data and model. Communication only happens within
 the operator, with no communication in solver needed.
 
 MPI for Python (mpi4py [@Dalcin:2021]) provides Python bindings for the MPI standard, allowing applications to leverage multiple
 processors. Projects like mpi4py-fft [@Mortensen:2019], mcdc [@Morgan:2024], and mpi4jax [@mpi4jax]
-utilize mpi4py to provide distributed computing capabilities. Similarly, PyLops-MPI, which is built on top of PyLops [@Ravasi:2020] leverages mpi4py to solve large-scale problems in a distributed fashion.
+utilize mpi4py to provide distributed computing capabilities. Similarly, PyLops-MPI, which is built on top of PyLops [@Ravasi:2020], leverages mpi4py to solve large-scale problems in a distributed fashion.
 Its intuitive API provides functionalities to scatter and broadcast data and model vectors across nodes and allows various mathematical operations (e.g., summation, subtraction, norms)
 to be performed. Additionally, a suite of MPI-powered linear operators and solvers is offered, and its flexible design eases the integration of custom operators and solvers.
 

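A minimal sketch of the scatter/broadcast and reduction functionality described in the hunk above, assuming the public `pylops_mpi.DistributedArray` API (the `Partition` enum and the `local_shape`, `dot`, and `norm` members are names to verify against the installed PyLops-MPI version):

```python
# Minimal sketch, not taken from the paper: attribute and method names below
# (Partition, local_shape, dot, norm) are assumptions to check against the docs.
# Run with, e.g.: mpiexec -n 4 python distributedarray_sketch.py
import numpy as np
from mpi4py import MPI
import pylops_mpi

comm = MPI.COMM_WORLD

# Scattered array: each rank owns a contiguous chunk of the global vector
x = pylops_mpi.DistributedArray(global_shape=1000,
                                partition=pylops_mpi.Partition.SCATTER)
x[:] = np.ones(x.local_shape)

# Broadcast array: every rank holds an identical copy of the vector
w = pylops_mpi.DistributedArray(global_shape=1000,
                                partition=pylops_mpi.Partition.BROADCAST)
w[:] = 2.0 * np.ones(w.local_shape)

# Element-wise operations act on the local chunks; reductions such as dot
# products and norms communicate across ranks to return a global value
y = x + x
if comm.Get_rank() == 0:
    print(y.dot(x), x.norm())
```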
@@ -59,7 +59,7 @@ to be performed. Additionally, a suite of MPI-powered linear operators and solve
 PyLops-MPI is designed to tackle large-scale linear inverse problems that are difficult to solve using a single process
 (due to either extremely high computational cost or memory requirements).
 
-![Software Framework representation of the ``PyLops-MPI`` API.](figs/soft_framework.png)
+![Software framework representation of the ``PyLops-MPI`` API.](figs/soft_framework.png)
 
 Fig. 1 illustrates the main components of the library, emphasizing the relationship between the DistributedArray class, stacked operators, and MPI-powered solvers.

@@ -71,8 +71,8 @@ NumPy [@Harris:2020] or CuPy [@cupy] arrays across multiple processes. It also s
 ## HStack, VStack, BlockDiag Operators
 
 PyLops facilitates the combination of multiple linear operators via horizontal, vertical, or diagonal stacking. PyLops-MPI provides
-distributed versions of these operations-e.g., `pylops_mpi.MPIBlockDiag` applies different operators in parallel on separate portions of the model
-and data. `pylops_mpi.MPIVStack` applies multiple operators in parallel to the whole model, with its adjoint applies the adjoint of each individual operator to portions of the data vector and sums the individual output. `pylops_mpi.MPIHStack` is the adjoint of MPIVStack.
+distributed versions of these operations. Examples include `pylops_mpi.MPIBlockDiag`, which applies different operators in parallel on separate portions of the model
+and data, `pylops_mpi.MPIVStack`, which applies multiple operators in parallel to the whole model, with its adjoint applying the adjoint of each individual operator to portions of the data vector and summing the individual outputs, and `pylops_mpi.MPIHStack`, which is the adjoint of MPIVStack.
 
 ## Halo Exchange

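A minimal sketch of the stacked operators described in the hunk above, in which each rank wraps its own PyLops operator in `pylops_mpi.MPIBlockDiag` and the block-diagonal system is inverted with an MPI-powered CGLS solver; the `MPIBlockDiag(ops=[...])` signature, the `@` application to a `DistributedArray`, and the `pylops_mpi.optimization.basic.cgls` path are assumptions to verify:

```python
# Minimal sketch, not taken from the paper: MPIBlockDiag wraps one local
# operator per rank; the cgls module path and signatures are assumptions.
# Run with, e.g.: mpiexec -n 4 python blockdiag_sketch.py
import numpy as np
from mpi4py import MPI
import pylops
import pylops_mpi
from pylops_mpi.optimization.basic import cgls  # path assumed

rank = MPI.COMM_WORLD.Get_rank()
nranks = MPI.COMM_WORLD.Get_size()
n = 10  # size of the local model/data chunk on each rank

# Each rank builds the operator acting only on its own chunk of the model
A = np.random.default_rng(rank).normal(size=(n, n)) + n * np.eye(n)
Op = pylops.MatrixMult(A)
BDiag = pylops_mpi.MPIBlockDiag(ops=[Op])

# Distributed model: each rank fills its local portion
x = pylops_mpi.DistributedArray(global_shape=n * nranks)
x[:] = np.ones(n)

# Forward pass: every rank applies its operator locally, with no communication
y = BDiag @ x

# Inversion with the MPI-powered CGLS solver, starting from a zero model
x0 = pylops_mpi.DistributedArray(global_shape=n * nranks)
x0[:] = np.zeros(n)
xinv = cgls(BDiag, y, x0=x0, niter=20)[0]
```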