Commit e6915d2
Merge pull request #177 from r-barnes/patch-2
Update README.md
2 parents 4689b5f + 42e2559

1 file changed: +79 -44 lines changed

docs/src/index.md

Lines changed: 79 additions & 44 deletions
@@ -24,11 +24,11 @@ Common kinds of arrays can be constructed with functions beginning with
`d`:

```julia
dzeros(100,100,10)
dones(100,100,10)
drand(100,100,10)
drandn(100,100,10)
dfill(x,100,100,10)
```

In the last case, each element will be initialized to the specified
@@ -37,7 +37,7 @@ For more control, you can specify which processes to use, and how the
data should be distributed:

```julia
dzeros((100,100), workers()[1:4], [1,4])
```

The second argument specifies that the array should be created on the first
@@ -79,7 +79,7 @@ Constructing Distributed Arrays
The primitive `DArray` constructor has the following somewhat elaborate signature:

```julia
DArray(init, dims[, procs, dist])
```

`init` is a function that accepts a tuple of index ranges. This function should
@@ -96,7 +96,7 @@ As an example, here is how to turn the local array constructor `fill`
into a distributed array constructor:

```julia
dfill(v, args...) = DArray(I->fill(v, map(length,I)), args...)
```

In this case the `init` function only needs to call `fill` with the
@@ -123,6 +123,28 @@ julia> @DArray [i+j for i = 1:5, j = 1:5]
6 7 8 9 10
```

### Construction from arrays generated on separate processes

`DArray`s can also be constructed from arrays that have been created on separate processes, as demonstrated below:

```julia
ras = [@spawnat p rand(30,30) for p in workers()[1:4]]
ras = reshape(ras,(2,2))
D = DArray(ras)
```

An alternative syntax is:

```julia
r1 = DistributedArrays.remotecall(() -> rand(10,10), workers()[1])
r2 = DistributedArrays.remotecall(() -> rand(10,10), workers()[2])
r3 = DistributedArrays.remotecall(() -> rand(10,10), workers()[3])
r4 = DistributedArrays.remotecall(() -> rand(10,10), workers()[4])
D = DArray(reshape([r1 r2 r3 r4], (2,2)))
```

The distribution of indices across workers can be checked with

```julia
[@fetchfrom p localindices(D) for p in workers()]
```

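In the same spirit, one can also look at the local chunks themselves. The following is a minimal sketch, assuming the 2×2 layout built above; `localpart` is DistributedArrays' accessor for the chunk stored on the current worker:

```julia
# fetch the size of the chunk each of the four workers holds;
# for D = DArray(ras) above, each should report (30, 30)
[@fetchfrom p size(localpart(D)) for p in workers()[1:4]]
```
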
Distributed Array Operations
----------------------------

@@ -135,27 +157,27 @@ each process needs the immediate neighbor cells of its local chunk. The
following code accomplishes this:

```julia
function life_step(d::DArray)
    DArray(size(d),procs(d)) do I
        top   = mod(first(I[1])-2,size(d,1))+1
        bot   = mod( last(I[1])  ,size(d,1))+1
        left  = mod(first(I[2])-2,size(d,2))+1
        right = mod( last(I[2])  ,size(d,2))+1

        old = Array{Bool}(undef, length(I[1])+2, length(I[2])+2)
        old[1      , 1      ] = d[top , left]   # left side
        old[2:end-1, 1      ] = d[I[1], left]
        old[end    , 1      ] = d[bot , left]
        old[1      , 2:end-1] = d[top , I[2]]
        old[2:end-1, 2:end-1] = d[I[1], I[2]]   # middle
        old[end    , 2:end-1] = d[bot , I[2]]
        old[1      , end    ] = d[top , right]  # right side
        old[2:end-1, end    ] = d[I[1], right]
        old[end    , end    ] = d[bot , right]

        life_rule(old)
    end
end
```

As you can see, we use a series of indexing expressions to fetch
@@ -166,21 +188,23 @@ to the data, yielding the needed `DArray` chunk. Nothing about `life_rule`
is `DArray`-specific, but we list it here for completeness:

```julia
function life_rule(old)
    m, n = size(old)
    new = similar(old, m-2, n-2)
    for j = 2:n-1
        for i = 2:m-1
            nc = +(old[i-1,j-1], old[i-1,j], old[i-1,j+1],
                   old[i  ,j-1],             old[i  ,j+1],
                   old[i+1,j-1], old[i+1,j], old[i+1,j+1])
            new[i-1,j-1] = (nc == 3 || nc == 2 && old[i,j])
        end
    end
    new
end
```
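
To tie the two functions together, here is a small, hypothetical driver sketch. It is not part of the original example: the worker count, board size, and `rand(Bool, ...)` initialization are illustrative assumptions, and both functions must be defined on every worker (e.g. with `@everywhere`) because the chunk constructor calls `life_rule` remotely.

```julia
using Distributed
addprocs(4)
@everywhere using DistributedArrays
# life_step and life_rule as defined above, with their definitions
# prefixed by @everywhere so the workers can call them.

# A random Bool board built with the DArray(init, dims) constructor
# described earlier: init receives a tuple of index ranges and
# returns the corresponding local chunk.
board = DArray(I -> rand(Bool, map(length, I)), (20, 20))

next = life_step(board)   # one Game of Life generation, chunk by chunk
```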

Numerical Results of Distributed Computations
---------------------------------------------

@@ -211,6 +235,8 @@ false

The ultimate ordering of operations will depend on how the Array is distributed.

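When bit-for-bit equality cannot be expected, a tolerance-based comparison is the usual workaround. A minimal sketch, assuming an ordinary array `A` and the `distribute` function described elsewhere in this manual:

```julia
A  = rand(1000)
dA = distribute(A)          # spread A over the available workers

sum(A) == sum(dA)           # may be false: chunk-wise reduction changes
                            # the order of floating-point additions
isapprox(sum(A), sum(dA))   # true up to floating-point tolerance
```
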
Garbage Collection and DArrays
------------------------------

@@ -232,6 +258,8 @@ a reference to a DArray object on the creating process for as long as it is bein
`d_closeall()` is another useful function to manage distributed memory. It releases all DArrays created from
the calling process, including any temporaries created during computation.

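A minimal usage sketch (the array shape is arbitrary and chosen only for illustration):

```julia
d = dzeros(100, 100)   # chunks of d live on the workers
# ... use d in some computation ...
d_closeall()           # release d and any temporary DArrays created
                       # from this process along the way
```
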
Working with distributed non-array data (requires Julia 0.6)
------------------------------------------------------------

@@ -252,13 +280,15 @@ returns a `DArray{T,1,Array{T,1}}`, i.e., it is equivalent to calling `distribut
Given a `DArray{T,1,T}` object `d`, `d[:L]` returns the localpart on a worker. `d[i]` returns the `localpart`
on the ith worker that `d` is distributed over.

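A brief sketch of the two access forms; it assumes such a `DArray{T,1,T}` object `d` has already been constructed and is distributed over `workers()`:

```julia
# evaluated on the first worker: its own localpart
@fetchfrom workers()[1] d[:L]

# from the calling process: the localpart held by the 2nd worker
# in the set of workers d is distributed over
d[2]
```
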
SPMD Mode (An MPI Style SPMD mode with MPI like primitives, requires Julia 0.6)
-------------------------------------------------------------------------------
SPMD, i.e., Single Program Multiple Data, is a mode implemented by the submodule `DistributedArrays.SPMD`. In this mode the same function is executed in parallel on all participating nodes. This is the typical style of MPI programs, where the same program is executed on all processors. A basic subset of MPI-like primitives is currently supported. As a programming model it should be familiar to anyone with an MPI background.

The same block of code is executed concurrently on all workers using the `spmd` function.

```julia
# define foo() on all workers
@everywhere function foo(arg1, arg2)
    ....
@@ -299,12 +329,14 @@ consecutive `bcast` calls.
import it explicitly, or prefix functions that can only be used in spmd mode with `SPMD.`, for example,
`SPMD.sendto`.

Example
-------

This toy example exchanges data with each of its neighbors `n` times.

```julia
using Distributed
using DistributedArrays
addprocs(8)
@@ -348,6 +380,8 @@ println(d_in)
println(d_out)
```

SPMD Context
------------

@@ -368,12 +402,13 @@ on all participating `pids`. Else they will be released when the context object
on the node that created it.


Nested `spmd` calls
-------------------
As `spmd` executes the specified function on all participating nodes, we need to be careful with nesting `spmd` calls.

An example of an unsafe (wrong) way:
```julia
function foo(.....)
    ......
    spmd(bar, ......)
@@ -391,7 +426,7 @@ spmd(foo,....)
In the above example, `foo`, `bar` and `baz` are all functions wishing to leverage distributed computation. However, they themselves may currently be part of a `spmd` call. A safe way to handle such a scenario is to only drive parallel computation from the master process.

The correct way (only have the driver process initiate `spmd` calls):
```julia
function foo()
    ......
    myid()==1 && spmd(bar, ......)
@@ -408,7 +443,7 @@ spmd(foo,....)
```

This is also true of functions which automatically distribute computation on DArrays.
```julia
function foo(d::DArray)
    ......
    myid()==1 && map!(bar, d)
