Description
Problem
There may be scenarios where the input array broadcast across ranks exceeds 2 GB. This line
`pylops-mpi/pylops_mpi/DistributedArray.py`, line 137 at commit 57b793e:
`self.local_array[index] = self.base_comm.bcast(value)`
will raise an error of the kind reported in mpi4py/mpi4py#119, since `bcast` pickles the message and MPI counts are limited to 2^31 - 1 bytes.
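One workaround along the lines of the linked issue is to split the message into pieces below the 2 GB count limit (mpi4py also ships `mpi4py.util.pkl5` for large messages). Below is a minimal, hypothetical sketch of the chunking half; the actual `Bcast` calls are left as comments because they require a live MPI communicator, and `iter_chunks` is an illustrative helper, not part of pylops-mpi:

```python
import numpy as np

MAX_BYTES = 2**31 - 1  # MPI count limit when counting in bytes

def iter_chunks(arr, max_bytes=MAX_BYTES):
    """Yield contiguous 1-D views of ``arr``, each at most ``max_bytes``."""
    flat = np.ascontiguousarray(arr).reshape(-1)
    items = max(1, max_bytes // flat.itemsize)  # elements per chunk
    for start in range(0, flat.size, items):
        yield flat[start:start + items]

# A chunked broadcast would then look like (hypothetical, needs a comm):
# for chunk in iter_chunks(value):
#     comm.Bcast(chunk, root=0)  # buffer-based Bcast, no pickling
```

Each chunk is a view into the original buffer, so a buffer-based `Bcast` would fill the receiving ranks' arrays in place without extra copies.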
Solution
One option is to look into what is suggested in the linked issue. However, I wonder whether (and when) we actually need to broadcast at all. For example, when we define an MPILinearOperator that wraps a PyLops operator, we know that the input at each rank is the same and that each rank performs the same operation, so could we avoid broadcasting altogether here:
`pylops-mpi/pylops_mpi/LinearOperator.py`, line 86 at commit 57b793e:
`y[:] = self.Op._matvec(x.local_array)`
This would also eliminate a lot of (possibly unnecessary) communication time.
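The reasoning above can be simulated without MPI: if every rank already holds an identical copy of the input and the operator is deterministic, applying it locally on each rank is guaranteed to produce identical outputs, so the broadcast adds no information. A small sketch (the three "ranks" are just a loop, and `matvec_replicated` is a hypothetical stand-in for the wrapped operator call):

```python
import numpy as np

def matvec_replicated(op_matvec, local_x):
    # Each rank applies the deterministic operator to its own (identical)
    # copy of x; no communication is needed to keep results consistent.
    return op_matvec(local_x)

# Simulate 3 ranks that all hold the same replicated input
A = np.arange(12.0).reshape(3, 4)
x = np.ones(4)
outs = [matvec_replicated(lambda v: A @ v, x.copy()) for _ in range(3)]
# All simulated ranks computed the same y without any bcast
assert all(np.array_equal(outs[0], o) for o in outs[1:])
```

The open design question is how pylops-mpi would know the input is replicated (e.g. via a partition flag on the distributed array) so that the broadcast can be skipped safely only in that case.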