
Commit 79f3a13

Document GPU support (#145)

1 parent d9ea6eb

2 files changed, +44 -9 lines changed

docs/src/literate/example.jl

Lines changed: 33 additions & 0 deletions
```diff
@@ -133,3 +133,36 @@ heatmap(expl)
 
 # For more information on heatmapping batches,
 # refer to the [heatmapping documentation](@ref docs-heatmapping-batches).
+
+# ## [GPU support](@id gpu-docs)
+# All analyzers support GPU backends,
+# building on top of [Flux.jl's GPU support](https://fluxml.ai/Flux.jl/stable/gpu/).
+# Using a GPU only requires moving the input array and model weights to the GPU.
+#
+# For example, using [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl):
+
+# ```julia
+# using CUDA, cuDNN
+# using Flux
+# using ExplainableAI
+#
+# # move input array and model weights to GPU
+# input = input |> gpu # or gpu(input)
+# model = model |> gpu # or gpu(model)
+#
+# # analyzers don't require calling `gpu`
+# analyzer = LRP(model)
+#
+# # explanations are computed on the GPU
+# expl = analyze(input, analyzer)
+# ```
+
+# Some operations, like saving, require moving explanations back to the CPU.
+# This can be done using Flux's `cpu` function:
+
+# ```julia
+# val = expl.val |> cpu # or cpu(expl.val)
+#
+# using BSON
+# BSON.@save "explanation.bson" val
+# ```
```
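Read end to end, the added documentation describes one pipeline: move data and weights to the GPU, analyze, then move the result back to the CPU for saving. A minimal runnable sketch of that pipeline follows; the toy `Chain` model and random input are hypothetical stand-ins invented for illustration, while `LRP`, `analyze`, `expl.val`, Flux's `gpu`/`cpu`, and `BSON.@save` are the same calls the new docs use:

```julia
# A minimal sketch of the documented GPU workflow, assuming a CUDA-capable GPU.
# The toy model and random input below are placeholders, not from this commit.
using CUDA, cuDNN
using Flux
using ExplainableAI
using BSON

model = Chain(Dense(100 => 50, relu), Dense(50 => 10))  # hypothetical model
input = rand(Float32, 100, 1)                           # hypothetical single-sample batch

# move input array and model weights to the GPU
input = gpu(input)
model = gpu(model)

# the analyzer wraps the GPU model; it needs no `gpu` call of its own
analyzer = LRP(model)

# the explanation is computed on the GPU
expl = analyze(input, analyzer)

# move the result back to the CPU before saving it
val = cpu(expl.val)
BSON.@save "explanation.bson" val
```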

docs/src/literate/lrp/basics.jl

Lines changed: 11 additions & 9 deletions
```diff
@@ -118,18 +118,20 @@ expl = analyze(input, analyzer; layerwise_relevances=true)
 expl.extras.layerwise_relevances
 
 # ## [Performance tips](@id docs-lrp-performance)
+# ### Using LRP with a GPU
+# Like all other analyzers, LRP can be used on GPUs.
+# Follow the instructions on [*GPU support*](@ref gpu-docs).
+#
 # ### Using LRP without a GPU
-# Since ExplainableAI.jl's LRP implementation makes use of
-# [Tullio.jl](https://github.com/mcabbott/Tullio.jl),
-# analysis can be accelerated by loading either
-# - a package from the [JuliaGPU](https://juliagpu.org) ecosystem,
-#   e.g. [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl), if a GPU is available
-# - [LoopVectorization.jl](https://github.com/JuliaSIMD/LoopVectorization.jl)
-#   if only a CPU is available.
+# Using Julia's package extension mechanism,
+# ExplainableAI.jl's LRP implementation can optionally make use of
+# [Tullio.jl](https://github.com/mcabbott/Tullio.jl) and
+# [LoopVectorization.jl](https://github.com/JuliaSIMD/LoopVectorization.jl)
+# for faster LRP rules on dense layers.
 #
-# This only requires loading the LoopVectorization.jl package before ExplainableAI.jl:
+# This only requires loading the packages before loading ExplainableAI.jl:
 # ```julia
-# using LoopVectorization
+# using LoopVectorization, Tullio
 # using ExplainableAI
 # ```
 #
```
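As a quick usage sketch of the CPU fast path described above: load the optional packages before ExplainableAI.jl so the extensions activate, then analyze as usual. The model, input sizes, and `@time` harness here are illustrative assumptions, not part of the commit:

```julia
# Load the optional packages first so the package extensions providing
# faster LRP rules on dense layers are activated.
using LoopVectorization, Tullio
using ExplainableAI
using Flux

model = Chain(Dense(784 => 100, relu), Dense(100 => 10))  # hypothetical model
input = rand(Float32, 784, 32)                            # hypothetical batch of 32

analyzer = LRP(model)
@time analyze(input, analyzer)  # rough check; use BenchmarkTools.jl for real benchmarks
```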
