Commit cd8e5a1: resolve conflict

Merge commit, 2 parents: 1cd3ad8 + 0dbd7f7

3 files changed: +9 / -15 lines


README.md

Lines changed: 4 additions & 12 deletions

@@ -15,6 +15,7 @@ Julia offers countless advantages for a GPU array package.
 E.g., we can use Julia's JIT to generate optimized kernels for map/broadcast operations.
 
 This works even for things like complex arithmetic, since we can compile what's already in Julia Base.
+This isn't restricted to Julia Base: GPUArrays works with all kinds of user-defined types and functions!
 
 GPUArrays relies heavily on Julia's dot broadcasting.
 The great thing about dot broadcasting in Julia is that it
@@ -40,6 +41,7 @@ Checkout the examples, to see how this can be used to emit specialized code whil
 
 In theory, we could go as far as inspecting user-defined callbacks (we can get the complete AST), counting operations, estimating register usage, and using those numbers to optimize our kernels!
 
+
 ### Automatic Differentiation
 
 Because of neural networks, automatic differentiation is super hyped right now!
@@ -49,17 +51,7 @@ Making this work with GPUArrays will be a bit more involved, but the
 first [prototype](https://github.com/JuliaGPU/GPUArrays.jl/blob/master/examples/logreg.jl) already looks promising!
 There is also [ReverseDiffSource](https://github.com/JuliaDiff/ReverseDiffSource.jl), which should already work for simple functions.
 
-#### Main type:
-
-```Julia
-type GPUArray{T, N, B, C} <: DenseArray{T, N}
-    buffer::B # GPU buffer, allocated by context
-    size::NTuple{N, Int} # size of the array
-    context::C # GPU context
-end
-```
-
-#### Scope
+# Scope
 
 Current backends: OpenCL, CUDA, Julia Threaded
 
@@ -123,7 +115,7 @@ So please treat these numbers with care!
 
 [source](https://github.com/JuliaGPU/GPUArrays.jl/blob/master/examples/blackscholes.jl)
 
-![blackscholes](https://cdn.rawgit.com/JuliaGPU/GPUArrays.jl/efb9d2e0/examples/blackscholes.svg)
+![blackscholes](https://cdn.rawgit.com/JuliaGPU/GPUArrays.jl/91678a36/examples/blackscholes.svg)
 
 Interestingly, on the GTX950, the CUDAnative backend outperforms the OpenCL backend by a factor of 10.
 This is most likely due to the fact that LLVM is great at unrolling and vectorizing loops,
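The fused-broadcast behavior the README relies on can be sketched with plain Arrays (an illustrative snippet, not taken from the repository; GPUArrays hooks into the same dot-broadcast machinery so that one fused expression becomes one GPU kernel launch):

```julia
# Julia fuses chained dot calls into a single broadcast pass; GPUArrays
# overloads this mechanism to compile the fused expression into one kernel.
a = rand(Float32, 1024)      # with GPUArrays, these would be GPU-backed arrays
b = rand(Float32, 1024)
c = a .* 2f0 .+ sin.(b)      # one fused traversal, no intermediate arrays

# The same fusion applies to user-defined functions:
f(x, y) = x * x + y
d = f.(a, b)
```

Since the fused expression is an ordinary Julia function, the JIT can specialize it per element type, which is what makes the complex-arithmetic and user-defined-type claims above work.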

appveyor.yml

Lines changed: 3 additions & 2 deletions

@@ -5,7 +5,7 @@ environment:
   matrix:
   allow_failures:
   - JULIA_URL: "https://julialangnightlies-s3.julialang.org/bin/winnt/x64/julia-latest-win64.exe"
-
+
 branches:
   only:
     - master
@@ -18,9 +18,10 @@ notifications:
   on_build_status_changed: false
 
 install:
+  - ps: "[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12"
   # Download most recent Julia Windows binary
   - ps: (new-object net.webclient).DownloadFile(
-          $("http://s3.amazonaws.com/"+$env:JULIAVERSION),
+          $env:JULIA_URL,
          "C:\projects\julia-binary.exe")
   # Run installer silently, output to C:\projects\julia
   - C:\projects\julia-binary.exe /S /D=C:\projects\julia

examples/juliaset.jl

Lines changed: 2 additions & 1 deletion

@@ -1,6 +1,7 @@
 # julia set
 # (the familiar mandelbrot set is obtained by setting c==z initially)
-# works only on 0.6 because of a stupid bug
+
+# generated functions allow you to emit specialized code for the argument types.
 @generated function julia{N}(z, maxiter::Val{N} = Val{16}())
     unrolled = Expr(:block)
     for i=1:N
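The `@generated` pattern in this example unrolls the iteration loop at compile time, once per `N`. A modernized sketch of the same idea (hypothetical function name and post-0.6 `where` syntax; the real kernel lives in examples/juliaset.jl):

```julia
# Hypothetical, modernized version of the unrolling idea from juliaset.jl:
# the loop body is pasted N times into the generated method body, so the
# compiled code contains no runtime loop at all.
@generated function escape_time(z0, c, ::Val{N}) where {N}
    unrolled = Expr(:block)
    for i in 1:N
        push!(unrolled.args, quote
            abs2(z) > 4f0 && return $(i - 1)  # escaped after i-1 iterations
            z = z * z + c
        end)
    end
    return quote
        z = z0
        $unrolled
        return N   # never escaped within N iterations
    end
end
```

For instance, `escape_time(0f0im, 0f0im, Val(16))` stays bounded and returns 16, while a point far outside the set escapes on the first check and returns 0.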

0 commit comments