
Commit 48bf9b7

Merge pull request #73 from JuliaGPU/sd/documentation
documentation
2 parents b533f3c + 923284c


2 files changed: +8 −8 lines changed


docs/src/index.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -67,14 +67,14 @@ you want to test.
 
 You can run the test suite like this:
 
-```@example
+```Julia
 using GPUArrays, GPUArrays.TestSuite
 TestSuite.run_tests(MyGPUArrayType)
 ```
 If you don't want to run the whole suite, you can also run parts of it:
 
 
-```@example
+```Julia
 Typ = JLArray
 GPUArrays.allowslow(false) # fail tests when slow indexing path into Array type is used.
 
````
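The docs change in this hunk swaps Documenter `@example` fences for plain `Julia` fences. For context, here is a minimal sketch of what running the test suite looks like, combining the two snippets from the changed docs; it assumes the GPUArrays API as of this commit and uses the bundled `JLArray` reference type in place of the `MyGPUArrayType` placeholder:

```julia
using GPUArrays, GPUArrays.TestSuite

# Fail tests whenever the slow fallback indexing path into Array is used.
GPUArrays.allowslow(false)

# JLArray is GPUArrays' CPU-based reference implementation; a real backend
# would pass its own GPUArray subtype here instead.
TestSuite.run_tests(JLArray)
```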

src/abstract_gpu_interface.jl

Lines changed: 6 additions & 6 deletions
````diff
@@ -38,7 +38,7 @@ end
 
 Macro form of `linear_index`, which calls return when out of bounds.
 So it can be used like this:
-```
+```jldoctest
 function kernel(state, A)
     idx = @linear_index A state
     # from here on it's save to index into A with idx
@@ -61,7 +61,7 @@ end
 """
     cartesianidx(A, statesym = :state)
 
-Like `@linearidx`, but returns an N-dimensional `NTuple{ndim(A), Cuint}` as index
+Like [`@linearidx(A, statesym = :state)`](@ref), but returns an N-dimensional `NTuple{ndim(A), Cuint}` as index
 """
 macro cartesianidx(A, statesym = :state)
     quote
@@ -109,9 +109,9 @@ end
 
 
 """
-    gpu_call(f, A::GPUArray, args::Tuple, configuration = length(A))
+    gpu_call(kernel::Function, A::GPUArray, args::Tuple, configuration = length(A))
 
-Calls function `f` on the GPU.
+Calls function `kernel` on the GPU.
 `A` must be an GPUArray and will help to dispatch to the correct GPU backend
 and supplies queues and contexts.
 Calls the kernel function with `kernel(state, args...)`, where state is dependant on the backend
@@ -123,7 +123,7 @@ Optionally, a launch configuration can be supplied in the following way:
 2) Pass a tuple of integer tuples to define blocks and threads per blocks!
 
 """
-function gpu_call(f, A::GPUArray, args::Tuple, configuration = length(A))
+function gpu_call(kernel, A::GPUArray, args::Tuple, configuration = length(A))
     ITuple = NTuple{N, Integer} where N
     # If is a single integer, we assume it to be the global size / total number of threads one wants to launch
     thread_blocks = if isa(configuration, Integer)
@@ -147,7 +147,7 @@ function gpu_call(f, A::GPUArray, args::Tuple, configuration = length(A))
     `linear_index` will be inbetween 1:prod((blocks..., threads...))
     """)
     end
-    _gpu_call(f, A, args, thread_blocks)
+    _gpu_call(kernel, A, args, thread_blocks)
 end
 
 # Internal GPU call function, that needs to be overloaded by the backends.
````
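Per the renamed docstring, `gpu_call(kernel, A, args, configuration = length(A))` dispatches on `A` to pick the backend and then invokes `kernel(state, args...)` once per thread. A minimal usage sketch under that API; the doubling kernel and the choice of `JLArray` as the dispatch array are illustrative assumptions, not part of this commit:

```julia
using GPUArrays

function double_kernel(state, a)
    # linear_index gives this thread's 1-based index into `a`.
    idx = linear_index(state)
    a[idx] = a[idx] * 2f0
    return
end

A = JLArray(ones(Float32, 16))
# One thread per element: configuration defaults to length(A).
gpu_call(double_kernel, A, (A,))
```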
