doc/src/manual/distributed-computing.md
will always operate on copies of arguments.
## [Shared Arrays](@id man-shared-arrays)
Shared Arrays use system shared memory to map the same array across many processes. A
[`SharedArray`](@ref) is a good choice when you want to have a large amount of data jointly
accessible to two or more processes on the same machine. Shared Array support is available via the
module `SharedArrays`, which must be explicitly loaded on all participating workers.

A complementary data structure is provided by the external package
[`DistributedArrays.jl`](https://github.com/JuliaParallel/DistributedArrays.jl) in the form of a
`DArray`. While there are some similarities to a [`SharedArray`](@ref), the behavior of a
[`DArray`](https://github.com/JuliaParallel/DistributedArrays.jl) is quite different. In a
[`SharedArray`](@ref), each "participating" process has access to the entire array; in contrast, in
a [`DArray`](https://github.com/JuliaParallel/DistributedArrays.jl), each process has local access
to just a chunk of the data, and no two processes share the same chunk.
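As a brief sketch of the loading requirement described above (assuming a few worker processes on a single local machine), a `SharedArray` might be created like this:

```julia
using Distributed
addprocs(3)                        # assumption: three local worker processes
@everywhere using SharedArrays     # must be loaded on all participating workers

# A 3×4 shared array of Ints. The `init` function runs on each participating
# worker and fills the linear indices that worker "owns" with its process id.
S = SharedArray{Int,2}((3, 4), init = S -> S[localindices(S)] .= myid())

# Every participating process sees the entire array and can index it
# like a regular Array.
S[1, 1] = 0
```

Because the memory is shared, the assignment on the last line is immediately visible to all participating processes, with no message passing involved.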
[`SharedArray`](@ref) indexing (assignment and accessing values) works just as with regular arrays,
and is efficient because the underlying memory is available to the local process. Therefore,
in future releases.
## Noteworthy external packages
Beyond Julia's built-in parallelism, there are plenty of external packages worth mentioning.
1331
+
For example, [`MPI.jl`](https://github.com/JuliaParallel/MPI.jl) is a Julia wrapper for the `MPI`
1332
+
protocol, [`Dagger.jl`](https://github.com/JuliaParallel/Dagger.jl) provides functionality similar to
0 commit comments