
Commit 40ddb34

typo fix for code rendering
1 parent 51bc70c commit 40ddb34

File tree

1 file changed: +4 −4 lines changed


tutorials_nccl/lsm_nccl.py

Lines changed: 4 additions & 4 deletions
@@ -93,11 +93,11 @@
 # We create a :py:class:`pylops.waveeqprocessing.LSM` at each rank and then push them
 # into a :py:class:`pylops_mpi.basicoperators.MPIVStack` to perform a matrix-vector
 # product with the broadcasted reflectivity at every location on the subsurface.
-# Note that we must use :code`engine="cuda"` and move the wavelet wav to the GPU prior to creating the operator.
-# Moreover, we allocate the traveltime tables (:code`lsm.Demop.trav_srcs`, and :code`lsm.Demop.trav_recs`)
+# Note that we must use :code:`engine="cuda"` and move the wavelet wav to the GPU prior to creating the operator.
+# Moreover, we allocate the traveltime tables (:code:`lsm.Demop.trav_srcs`, and :code:`lsm.Demop.trav_recs`)
 # to the GPU prior to applying the operator to avoid incurring in the penalty of performing
-# host-to-device memory copies every time the operator is applied. Moreover, we must pass :code`nccl_comm`
-# to the DistributedArray constructor used to create :code`refl_dist` in order to use NCCL for communications.
+# host-to-device memory copies every time the operator is applied. Moreover, we must pass :code:`nccl_comm`
+# to the DistributedArray constructor used to create :code:`refl_dist` in order to use NCCL for communications.

 # Wavelet
 nt = 651
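
For context, a minimal sketch of the GPU/NCCL setup the corrected comments describe is shown below. It is an illustration only, not the tutorial's exact code: the geometry and model variables (z, x, t, sources, recs, v0, wav, wavc, refl, nx, nz) and the NCCL communicator nccl_comm are assumed to be defined earlier in the script, and the DistributedArray keyword used for the NCCL communicator (base_comm_nccl here) may differ between PyLops-MPI versions.

import cupy as cp
import pylops
import pylops_mpi

# Move the wavelet to the GPU before creating the operator (engine="cuda").
wav_gpu = cp.asarray(wav)

# One LSM operator per rank; positional arguments are placeholders assumed
# to be defined earlier in the tutorial.
lsm = pylops.waveeqprocessing.LSM(
    z, x, t, sources, recs, v0, wav_gpu, wavc, engine="cuda"
)

# Move the traveltime tables to the GPU once, so no host-to-device copy is
# incurred every time the operator is applied.
lsm.Demop.trav_srcs = cp.asarray(lsm.Demop.trav_srcs)
lsm.Demop.trav_recs = cp.asarray(lsm.Demop.trav_recs)

# Stack the per-rank modelling operators.
VStack = pylops_mpi.MPIVStack(ops=[lsm.Demop])

# Broadcast the reflectivity as a CuPy-backed DistributedArray; the NCCL
# communicator is passed so collectives run over NCCL (keyword name is an
# assumption and may vary across PyLops-MPI versions).
refl_dist = pylops_mpi.DistributedArray(
    global_shape=nx * nz,
    partition=pylops_mpi.Partition.BROADCAST,
    base_comm_nccl=nccl_comm,
    engine="cupy",
)
refl_dist[:] = cp.asarray(refl.flatten())

# Matrix-vector product of the stacked operator with the reflectivity.
d_dist = VStack @ refl_dist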
