@@ -210,7 +210,7 @@ Numerical Results of Distributed Computations

Floating point arithmetic is not associative and this comes up
when performing distributed computations over `DArray`s. All `DArray`
- operations are performed over the `localpart` chunks and then aggregated.
+ operations are performed over the localparts and then aggregated.
The change in ordering of the operations will change the numeric result as
seen in this simple example:

@@ -233,29 +233,29 @@ julia> sum(A) == sum(DA)
false
```

- The ultimate ordering of operations will be dependent on how the Array is distributed.
+ The ultimate ordering of operations will be dependent on how the `Array` is distributed.
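
The same effect can be reproduced without any distribution at all. Below is a minimal sketch in plain Julia (no `DArray`s involved, and the chunk size is arbitrary) showing that summing the same values in per-chunk groups and then aggregating the partial sums, as a distributed reduction effectively does, need not bitwise-match a single sum over the whole array:

```julia
x = rand(1_000_000)

# One reduction over the whole array.
serial = sum(x)

# Partial sums over chunks, then a sum of the partial sums -- the same values,
# but grouped differently, so the rounding happens in a different order.
chunked = sum(sum(c) for c in Iterators.partition(x, 250_000))

serial == chunked            # usually false
isapprox(serial, chunked)    # true: the results agree to within rounding error
```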



- Garbage Collection and DArrays
+ Garbage Collection and `DArray`s
------------------------------

- When a DArray is constructed (typically on the master process), the returned DArray objects stores information on how the
- array is distributed, which processor holds which indices and so on. When the DArray object
+ When a `DArray` is constructed (typically on the master process), the returned `DArray` object stores information on how the
+ array is distributed, which processor holds which indices and so on. When the `DArray` object
on the master process is garbage collected, all participating workers are notified and
- localparts of the DArray freed on each worker.
+ the localparts of the `DArray` are freed on each worker.

- Since the size of the DArray object itself is small, a problem arises as `gc` on the master faces no memory pressure to
- collect the DArray immediately. This results in a delay of the memory being released on the participating workers.
+ Since the size of the `DArray` object itself is small, a problem arises as `gc` on the master faces no memory pressure to
+ collect the `DArray` immediately. This results in a delay of the memory being released on the participating workers.

Therefore it is highly recommended to explicitly call `close(d::DArray)` as soon as user code
has finished working with the distributed array.

- It is also important to note that the localparts of the DArray is collected from all participating workers
- when the DArray object on the process creating the DArray is collected. It is therefore important to maintain
- a reference to a DArray object on the creating process for as long as it is being computed upon.
+ It is also important to note that the localparts of the `DArray` are collected from all participating workers
+ when the `DArray` object on the process creating the `DArray` is collected. It is therefore important to maintain
+ a reference to a `DArray` object on the creating process for as long as it is being computed upon.

- `d_closeall()` is another useful function to manage distributed memory. It releases all darrays created from
+ `d_closeall()` is another useful function to manage distributed memory. It releases all `DArray`s created from
the calling process, including any temporaries created during computation.

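
As an illustration of the recommended pattern, here is a minimal sketch; `drand` is used only as one convenient way to obtain a `DArray`, and the sizes and worker count are arbitrary:

```julia
addprocs(4)
@everywhere using DistributedArrays

d = drand(4000, 4000)    # localparts live on the 4 workers
total = sum(d)           # ... work with the distributed array ...

# Release the localparts on the workers as soon as the array is no longer
# needed, instead of waiting for gc() on the master to collect the small
# DArray wrapper object.
close(d)

# Alternatively, at the end of a computation, release every DArray created
# from this process, including temporaries produced along the way.
d_closeall()
```
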
@@ -275,7 +275,7 @@ Argument `data` if supplied is distributed over the `pids`. `length(data)` must
If the multiple is 1, returns a `DArray{T,1,T}` where `T` is `eltype(data)`. If the multiple is greater than 1,
returns a `DArray{T,1,Array{T,1}}`, i.e., it is equivalent to calling `distribute(data)`.

- `gather{T}(d::DArray{T,1,T})` returns an Array{T,1} consisting of all distributed elements of `d`
+ `gather{T}(d::DArray{T,1,T})` returns an `Array{T,1}` consisting of all distributed elements of `d`.

Given a `DArray{T,1,T}` object `d`, `d[:L]` returns the localpart on a worker. `d[i]` returns the `localpart`
on the ith worker that `d` is distributed over.
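
A short sketch of how these accessors fit together follows. It assumes `d` was built with one element of `data` per worker, so that `d isa DArray{T,1,T}`; the constructor name `ddata` and its `data` keyword are assumptions based on the surrounding documentation (the constructor itself is described above this hunk, not in it):

```julia
addprocs(4)
@everywhere using DistributedArrays

# length(data) == length(pids), so each worker's localpart is a single String
# and d is a DArray{String,1,String}.
d = ddata(data=["a", "b", "c", "d"])   # constructor name and keyword assumed, see above

d[3]        # localpart held by the 3rd worker d is distributed over: "c"
gather(d)   # Array{String,1} of all distributed elements: ["a", "b", "c", "d"]

# d[:L] is meaningful on a worker, where it returns that worker's own localpart.
fetch(@spawnat workers()[1] d[:L])     # "a"
```
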
@@ -284,7 +284,7 @@ on the ith worker that `d` is distributed over.

SPMD Mode (An MPI-style SPMD mode with MPI-like primitives, requires Julia 0.6)
-------------------------------------------------------------------------------
- SPMD, i.e., a Single Program Multiple Data mode is implemented by submodule `DistributedArrays.SPMD`. In this mode the same function is executed in parallel on all participating nodes. This is a typical style of MPI programs where the same program is executed on all processors. A basic subset of MPI-like primitives are currently supported. As a programming model it should be familiar to folks with an MPI background.
+ SPMD, i.e., a Single Program Multiple Data mode, is implemented by the submodule `DistributedArrays.SPMD`. In this mode the same function is executed in parallel on all participating nodes. This is a typical style of MPI programs where the same program is executed on all processors. A basic subset of MPI-like primitives is currently supported. As a programming model it should be familiar to folks with an MPI background.

The same block of code is executed concurrently on all workers using the `spmd` function.

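
As a minimal sketch of the calling pattern (the worker count and function body are arbitrary, and the `pids` keyword is assumed here to select the participating workers):

```julia
addprocs(4)
@everywhere using DistributedArrays
using DistributedArrays.SPMD

# A function run in SPMD style must be defined on every participating worker.
@everywhere function hello_spmd(tag)
    println("worker ", myid(), " running spmd block: ", tag)
end

# Execute the same function concurrently on all workers.
spmd(hello_spmd, "demo"; pids=workers())
```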