optimizing/index.md: 3 additions & 3 deletions
@@ -372,9 +372,9 @@ Some widely used parallel programming packages like [LoopVectorization.jl](https
 
 ### Distributed computing
 
-Julia's multiprocessing and distributed relies on the standard library `Distributed`.
-The main difference with multi-threading is that data isn't shared between worker processes.
-Once Julia is started, processes can be added with `addprocs`, and their can be queried with `nworkers`.
+Julia's multiprocessing and distributed computing relies on the standard library `Distributed`.
+The main difference compared to multi-threading is that data isn't shared between worker processes.
+Once Julia is started, processes can be added with `addprocs`, and they can be queried with `nworkers`.
 
 The macro `Distributed.@distributed` is a _syntactic_ equivalent for `Threads.@threads`.
 Hence, we can use `@distributed` to parallelise a for loop as before, but we have to additionally deal with sharing and recombining the `results` array.
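A minimal sketch of the workflow the changed lines describe: adding workers with `addprocs`, querying them with `nworkers`, and parallelising a loop with `@distributed`. The worker count here is illustrative; instead of recombining a `results` array by hand, this sketch uses the reducer form `@distributed (+)`, which collects the per-worker partial results for us.

```julia
using Distributed

# Add two worker processes (count is illustrative) and check how many we have.
addprocs(2)
println(nworkers())  # 2

# Parallel for loop: each worker computes a chunk of the squares, and the
# (+) reducer recombines the partial sums, since data is not shared
# between worker processes.
total = @distributed (+) for i in 1:100
    i^2
end

println(total)  # sum of squares 1..100 == 338350
```

Without a reducer, `@distributed` returns immediately and each iteration's result is discarded, which is why writing into a plain `results` array requires extra machinery (e.g. the `SharedArrays` standard library) to share memory across processes.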