
Commit 574e403

Naereen authored and Gaika committed
Fix format error on the README.md (#179)
1 parent 6ac7e3c commit 574e403

1 file changed (+58, −54 lines)


README.md

Lines changed: 58 additions & 54 deletions
@@ -3,13 +3,13 @@
 [![codecov](https://codecov.io/gh/JuliaComputing/ArrayFire.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/JuliaComputing/ArrayFire.jl)


-[ArrayFire](http://arrayfire.com) is a library for GPU and accelerated computing. ArrayFire.jl wraps the ArrayFire library for Julia, and provides a Julian interface.
+[ArrayFire](http://ArrayFire.com) is a library for GPU and accelerated computing. ArrayFire.jl wraps the ArrayFire library for [Julia](https://JuliaLang.org), and provides a Julia interface.

 ## Installation

 ### OSX

-If you are on OSX, the easiest way to install arrayfire is by doing
+If you are on OSX, the easiest way to install arrayfire is by using [brew](https://brew.sh/):
 ```
 brew install arrayfire
 ```
@@ -30,14 +30,17 @@ Now, start Julia, and do:
 ```julia
 Pkg.add("ArrayFire")
 ```
+
 You can also get the latest nightly version of `ArrayFire.jl` by doing:
 ```julia
 Pkg.checkout("ArrayFire")
 ```
+
 Check if `ArrayFire.jl` works by running the tests:
 ```julia
 Pkg.test("ArrayFire")
 ```
+
 If you have any issues getting `ArrayFire.jl` to work, please check the Troubleshooting section below. If it still doesn't work, please file an issue.

 ### Windows
@@ -50,72 +53,77 @@ Pkg.test("ArrayFire")
 ```

 Arrayfire requires vcomp120.dll. If you do not have Visual Studio installed, install the [Visual C++ redistributable](https://www.microsoft.com/en-us/download/details.aspx?id=40784).
+
 ## Simple Usage
 Congratulations, you've now installed `ArrayFire.jl`! Now what can you do?

 Let's say you have a simple Julia array on the CPU:
 ```julia
 a = rand(10, 10)
 ```
+
 You can transfer this array to the device by calling the `AFArray` constructor on it.
 ```julia
-using ArrayFire # Don't forget to load the library
+using ArrayFire # Don't forget to load the library
 ad = AFArray(a)
 ```
+
 Now let us perform some simple arithmetic on it:
 ```julia
 bd = (ad + 1) / 5
 ```
+
 Of course, you can do much more than just add and divide numbers. Check the supported functions section for more information.

 Now that you're done with all your device computation, you can bring your array back to the CPU (or host):
 ```julia
 b = Array(bd)
 ```
+
 Here are other examples of simple usage:

 ```julia
 using ArrayFire

-#Random number generation
+# Random number generation
 a = rand(AFArray{Float64}, 100, 100)
 b = randn(AFArray{Float64}, 100, 100)

-#Transfer to device from the CPU
+# Transfer to device from the CPU
 host_to_device = AFArray(rand(100,100))

-#Transfer back to CPU
+# Transfer back to CPU
 device_to_host = Array(host_to_device)

-#Basic arithmetic operations
+# Basic arithmetic operations
 c = sin(a) + 0.5
 d = a * 5

-#Logical operations
+# Logical operations
 c = a .> b
 any_trues = any(c)

-#Reduction operations
+# Reduction operations
 total_max = maximum(a)
 colwise_min = min(a,2)

-#Matrix operations
+# Matrix operations
 determinant = det(a)
 b_positive = abs(b)
 product = a * b
 dot_product = a .* b
 transposer = a'

-#Linear Algebra
+# Linear Algebra
 lu_fact = lu(a)
-cholesky_fact = chol(a*a') #Multiplied to create a positive definite matrix
+cholesky_fact = chol(a*a') # Multiplied to create a positive definite matrix
 qr_fact = qr(a)
 svd_fact = svd(a)

-#FFT
+# FFT
 fast_fourier = fft(a)
-
 ```
+
 ## The Execution Model
 `ArrayFire.jl` introduces an `AFArray` type that is a subtype of `AbstractArray`. Operations on `AFArrays` create other `AFArrays`, so data always remains on the device unless it is specifically transferred back. This wrapper provides a simple Julian interface that aims to mimic Base Julia's versatility and ease of use.

@@ -125,17 +133,13 @@ fast_fourier = fft(a)

 The library also performs some kernel fusions on elementary arithmetic operations (see the arithmetic section of the Supported Functions). `arrayfire` has an intelligent runtime JIT compliation engine which converts array expressions into the smallest number of OpenCL/CUDA kernels. Kernel fusion not only decreases the number of kernel calls, but also avoids extraneous global memory operations. This asynchronous behaviour ends only when a non-JIT operation is called or an explicit synchronization barrier `sync(array)` is called.

-**Garbage collection and memory management**: `arrayfire` is using its own memory management that relies on Julia
-garbage collector releasing refences to unused arrays. Sometimes it could be a bottleneck as Julia garbage collector
-can be slow and not even notice the pressure in GPU memory usage. The best way to avoid it is to use `@afgc` macro
-that would free all unused `AFArray` references when leaving the scope of a function or a block. The alternative is to
-call afgc() periodically.
+**Garbage collection and memory management**: `arrayfire` is using its own memory management that relies on Julia garbage collector releasing refences to unused arrays. Sometimes it could be a bottleneck as Julia garbage collector can be slow and not even notice the pressure in GPU memory usage. The best way to avoid it is to use `@afgc` macro that would free all unused `AFArray` references when leaving the scope of a function or a block. The alternative is to call `afgc()` periodically.

-**A note on benchmarking** : In Julia, one would use the `@time` macro to time execution times of functions. However, in this particular case, `@time` would simply time the function call, and the library would execute asynchronously in the background. This would often lead to misleading timings. Therefore, the right way to time individual operations is to run them multiple times, place an explicit synchronization barrier at the end, and take the average of multiple runs.
+**A note on benchmarking**: In Julia, one would use the `@time` macro to time execution times of functions. However, in this particular case, `@time` would simply time the function call, and the library would execute asynchronously in the background. This would often lead to misleading timings. Therefore, the right way to time individual operations is to run them multiple times, place an explicit synchronization barrier at the end, and take the average of multiple runs.

-Also, note that this doesn't affect how the user writes code. Users can simply write normal Julia code using `ArrayFire.jl` and this asynchronous behaviour is abstracted out. Whenever the data is needed back onto the CPU, an implicit barrier ensures that the computatation is complete, and the values are transferred back.
+Also, note that this does not affect how the user writes code. Users can simply write normal Julia code using `ArrayFire.jl` and this asynchronous behaviour is abstracted out. Whenever the data is needed back onto the CPU, an implicit barrier ensures that the computatation is complete, and the values are transferred back.

-**operations between CPU and device arrays**: Consider the following code. It will return an error:
+**Operations between CPU and device arrays**: Consider the following code. It will return an error:
 ```julia
 a = rand(Float32, 10, 10)
 b = AFArray(a)
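The execution-model notes in the hunk above say that operations on `AFArray`s run asynchronously and stay on the device, and that timing with `@time` alone is therefore misleading: the recommended pattern is to repeat the operation, end with an explicit `sync(array)` barrier, and average. A minimal sketch of that pattern, using only constructors and functions the README itself mentions (`AFArray`, `rand`, `sync`, `Array`); the function name and array sizes are illustrative:

```julia
using ArrayFire

# Rough timing sketch following the README's advice: repeat the operation,
# finish with an explicit sync barrier, and average over the runs.
function time_matmul(a, n)
    b = a * a            # warm-up run so JIT kernel compilation is not timed
    sync(b)
    t = @elapsed begin
        for i in 1:n
            b = a * a    # result stays on the device; no transfer happens here
        end
        sync(b)          # wait for the queued kernels to actually finish
    end
    return t / n
end

a = rand(AFArray{Float32}, 2000, 2000)
println("average matmul time: ", time_matmul(a, 10), " s")

# Bringing a result back to the host acts as an implicit barrier by itself.
result_on_cpu = Array(a * a)
```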
@@ -149,7 +153,7 @@ AFArray(a) - b # This works too!

 **A note on correctness**: Sometimes, `ArrayFire.jl` and Base Julia might return marginally different values from their computation. This is because Julia and `ArrayFire.jl` sometimes use different lower level libraries for BLAS, FFT, etc. For example, Julia uses OpenBLAS for BLAS operations, but `ArrayFire.jl` would use clBLAS for the OpenCL backend and CuBLAS for the CUDA backend, and these libraries might not always the exact same values as OpenBLAS after a certain decimal point. In light of this, users are encouraged to keep testing their codes for correctness.

-**A note on performance**: Some operations can be slow due to Base's generic implementations. This is intentional, to enable a "make it work, then make it fast" workflow. When you're ready you can disable slow fallback methods:
+**A note on performance**: Some operations can be slow due to `Base`'s generic implementations. This is intentional, to enable a "make it work, then make it fast" workflow. When you're ready you can disable slow fallback methods:

 ```julia
 julia> allowslow(AFArray, false)
@@ -161,55 +165,55 @@ ERROR: getindex is disabled
 ## Supported Functions

 ### Creating AFArrays
-* `rand, randn, convert, diagm, eye, range, zeros, ones, trues, falses`
-* `constant, getSeed, setSeed, iota`
+* `rand`, `randn`, `convert`, `diagm`, `eye`, `range`, `zeros`, `ones`, `trues`, `falses`
+* `constant`, `getSeed`, `setSeed`, `iota`

 ### Arithmetic
-* `+, -, *, /, ^, &, $, | `
-* `.+, .-, .*, ./, .>, .>=, .<, .<=, .==, .!=, `
-* `complex, conj, real, imag, max, min, abs, round, floor, hypot`
+* `+`, `-`, `*`, `/`, `^`, `&`, `$`, `|`
+* `.+`, `.-`, `.*`, `./`, `.>`, `.>=`, `.<`, `.<=`, `.==`, `.!=, `
+* `complex`, `conj`, `real`, `imag`, `max`, `min`, `abs`, `round`, `floor`, `hypot`
 * `sigmoid`
 * `signbit` (works only in vectorized form on Julia v0.5 - Ref issue #109)

 ### Linear Algebra
-* `chol, svd, lu, qr, lufact!, qrfact!, svdfact!`
-* `*(matmul), A_mul_Bt, At_mul_B, At_mul_Bt, Ac_mul_B, A_mul_Bc, Ac_mul_Bc`
-* `transpose, transpose!, ctranspose, ctranspose!`
-* `det, inv, rank, norm, dot, diag, \`
-* `isLAPACKAvailable, chol!, solveLU, upper, lower`
+* `chol`, `svd`, `lu`, `qr`, `svdfact!`, `lufact!`, `qrfact!`
+* `*(matmul)`, `A_mul_Bt`, `At_mul_B`, `At_mul_Bt`, `Ac_mul_B`, `A_mul_Bc`, `Ac_mul_Bc`
+* `transpose`, `transpose!`, `ctranspose`, `ctranspose!`
+* `det`, `inv`, `rank`, `norm`, `dot`, `diag`, `\`
+* `isLAPACKAvailable`, `chol!`, `solveLU`, `upper`, `lower`

 ### Signal Processing
-* `fft, ifft, fft!, ifft!`
-* `conv, conv2`
-* `fftC2R, fftR2C, conv3, convolve, fir, iir, approx1, approx2`
+* `fft`, `ifft`, `fft!`, `ifft!`
+* `conv`, `conv2`
+* `fftC2R`, `fftR2C`, `conv3`, `convolve`, `fir, `iir`, `approx1`, `approx2`

 ### Statistics
-* `mean, median, std, var, cov`
-* `meanWeighted, varWeighted, corrcoef`
+* `mean`, `median`, `std`, `var`, `cov`
+* `meanWeighted`, `varWeighted`, `corrcoef`

 ### Vector Algorithms
-* `sum, min, max, minimum, maximum, findmax, findmin`
-* `countnz, any, all, sort, union, find, cumsum, diff`
-* `sortIndex, sortByKey, diff2, minidx, maxidx`
+* `sum`, `min`, `max`, `minimum`, `maximum`, `findmax`, `findmin`
+* `countnz`, `any`, `all`, `sort`, `union`, `find`, `cumsum`, `diff`
+* `sortIndex`, `sortByKey`, `diff2`, `minidx`, `maxidx`

 ### Backend Functions
-* `getActiveBackend, getBackendCount, getAvailableBackends, setBackend, getBackendId, sync, getActiveBackendId`
+* `getActiveBackend`, `getBackendCount`, `getAvailableBackends`, `setBackend`, `getBackendId`, `sync`, `getActiveBackendId`

 ### Device Functions
 * `getDevice`, `setDevice`, `getNumDevices`

 ### Image Processing
-* `scale, hist`
-* `loadImage, saveImage`
+* `scale`, `hist`
+* `loadImage`, `saveImage`
 * `isImageIOAvailable`
-* `colorspace, gray2rgb, rgb2gray, rgb2hsv, rgb2ycbcr, ycbcr2rgb, hsv2rgb`
-* `regions, SAT`
-* `bilateral, maxfilt, meanshift, medfilt, minfilt, sobel, histequal`
-* `resize, rotate, skew, transform, transformCoordinates, translate`
-* `dilate, erode, dilate3d, erode3d, gaussiankernel`
+* `colorspace`, `gray2rgb`, `rgb2gray`, `rgb2hsv`, `rgb2ycbcr`, `ycbcr2rgb`, `hsv2rgb`
+* `regions`, `SAT`
+* `bilateral`, `maxfilt`, `meanshift`, `medfilt`, `minfilt`, `sobel`, `histequal`
+* `resize`, `rotate`, `skew`, `transform`, `transformCoordinates`, `translate`
+* `dilate`, `erode`, `dilate3d`, `erode3d`, `gaussiankernel`

 ### Computer Vision
-* `orb, sift, gloh, diffOfGaussians, fast, harris, susan, hammingMatcher, nearestNeighbour, matchTemplate`
+* `orb`, `sift`, `gloh`, `diffOfGaussians`, `fast`, `harris`, `susan`, `hammingMatcher`, `nearestNeighbour`, `matchTemplate`

 ## Performance
 ArrayFire was benchmarked on commonly used operations.
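Most of the names listed in the hunk above are used on an `AFArray` just like their Base counterparts. A brief, illustrative combination of a few of them (statistics, vector algorithms, signal processing); the variable names are arbitrary and the method signatures are assumed to match Base:

```julia
using ArrayFire

a = rand(AFArray{Float64}, 512, 512)
v = rand(AFArray{Float64}, 1000)

m = mean(a)            # Statistics
s = std(a)
total = sum(v)         # Vector Algorithms
sorted = sort(v)
spectrum = fft(a)      # Signal Processing
```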
@@ -244,10 +248,10 @@ backend. `ArrayFire.jl` starts up with the unified backend.

 If the backend selected by ArrayFire by default (depends on the available drivers) is not the desired one (depending on the available hardware), you can override the default by setting the environment variable `$JULIA_ARRAYFIRE_BACKEND` before starting Julia (more specifically, before loading the `ArrayFire` module). Possible values for `$JULIA_ARRAYFIRE_BACKEND` are `cpu`, `cuda` and `opencl`.

-You may also change the backend at runtime via, e.g., `set_backend(AF_BACKEND_CPU)` (resp. `AF_BACKEND_CUDA` or
-`AF_BACKEND_OPENCL`). The unified backend isn't a computational backend by itself but represents an interface to switch
+You may also change the backend at runtime via, e.g., `set_backend(AF_BACKEND_CPU)` (resp. `AF_BACKEND_CUDA` or `AF_BACKEND_OPENCL`).
+The unified backend isn't a computational backend by itself but represents an interface to switch
 between different backends at runtime. `ArrayFire.jl` starts up with the unified backend, but `get_active_backend()`
-will return either a particular default backend, depending on how you've installed the library. For example, if you've
+will return either a particular default backend, depending on how you have installed the library. For example, if you have
 built `ArrayFire.jl` with the CUDA backend, `get_active_backend()` will return `AF_BACKEND_CUDA` backend.

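The hunk above describes two ways to pick a backend: the `$JULIA_ARRAYFIRE_BACKEND` environment variable (read before the module is loaded) and `set_backend` at runtime. A small sketch of both, using only names quoted in the README (`JULIA_ARRAYFIRE_BACKEND`, `set_backend`, `AF_BACKEND_CPU`, `get_active_backend`), meant as an illustration rather than canonical usage:

```julia
# Option 1: pick the backend before ArrayFire.jl is loaded.
ENV["JULIA_ARRAYFIRE_BACKEND"] = "opencl"   # "cpu", "cuda" or "opencl"
using ArrayFire

# Option 2: switch backends at runtime through the unified backend.
set_backend(AF_BACKEND_CPU)
@show get_active_backend()
```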

@@ -264,7 +268,7 @@ If you're using the CUDA backend, try checking if `libcudart` and `libnvvm` are

 If you want to use the CUDA backend, check if you have installed CUDA for your platform. If you've installed CUDA, simply downloaded a binary and it still doens't work, try adding `libnvvm`, `libcudart` to your path.

-> `ArrayFire.jl` doesn't work with Atom.
+> `ArrayFire.jl` does not work with Atom.

 Create a file in your home directory called `.juliarc.jl` and write `ENV["LD_LIBRARY_PATH"] = "/usr/local/lib/"` (or the path to `libaf`) in it. Atom should now be able to load it.
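For completeness, the `.juliarc.jl` suggested above would contain just the single line quoted in the README (adjust the path to wherever `libaf` is installed on your machine):

```julia
# ~/.juliarc.jl — run at Julia startup, so Atom's Julia process picks it up
ENV["LD_LIBRARY_PATH"] = "/usr/local/lib/"   # or the directory containing libaf
```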
