 #md # !!! note "Supported models"
 #md #
 #md #     ExplainableAI.jl can be used on any differentiable classifier.
-#md #
-#md #     Only LRP requires models from Flux.jl.
-
-# ## Preparing the model
-# For models with softmax activations on the output,
-# it is necessary to call [`strip_softmax`](@ref) before analyzing.
-model = strip_softmax(model);

 # ## Preparing the input data
 # We use MLDatasets to load a single image from the MNIST dataset:
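For orientation, here is a rough, self-contained sketch of this setup; the `Chain` below is an untrained stand-in for the tutorial's trained MNIST classifier, and the test-set index is illustrative:

```julia
using ExplainableAI
using Flux
using MLDatasets

# Untrained stand-in for the tutorial's MNIST classifier;
# any differentiable Flux model would do here.
model = Chain(Flux.flatten, Dense(784, 100, relu), Dense(100, 10))

# Load a single MNIST test image and reshape it to Flux's
# expected WHCN layout (width, height, channels, batch).
x, y = MNIST(Float32, :test)[10]
input = reshape(x, 28, 28, 1, :);
```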
@@ -44,7 +37,7 @@ input = reshape(x, 28, 28, 1, :);
 # ## Explanations
 # We can now select an analyzer of our choice and call [`analyze`](@ref)
 # to get an [`Explanation`](@ref):
-analyzer = LRP(model)
+analyzer = InputTimesGradient(model)
 expl = analyze(input, analyzer);

 # The return value `expl` is of type [`Explanation`](@ref) and bundles the following data:
@@ -57,13 +50,12 @@ expl = analyze(input, analyzer);
 # * `expl.extras`: optional named tuple that can be used by analyzers
 #   to return additional information.
 #
-# We used an LRP analyzer, so `expl.analyzer` is `:LRP`.
+# We used `InputTimesGradient`, so `expl.analyzer` is `:InputTimesGradient`.
 expl.analyzer

 # By default, the explanation is computed for the maximally activated output neuron.
 # Since our digit is a 9 and Julia's indexing is 1-based,
 # the output neuron at index `10` of our trained model is maximally activated.
-expl.output_selection

 # Finally, we obtain the result of the analyzer in the form of an array.
 expl.val
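To make the field list concrete, a short sketch of inspecting an `Explanation` in the REPL; only fields named in the surrounding text are used, and the commented output is illustrative:

```julia
expl = analyze(input, analyzer);

expl.analyzer    # :InputTimesGradient
size(expl.val)   # (28, 28, 1, 1): one attribution per input entry
expl.extras      # typically `nothing` unless an analyzer returns extra data
```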
@@ -81,29 +73,6 @@ heatmap(input, analyzer)
 # refer to the [heatmapping section](@ref docs-heatmapping).

 # ## [List of analyzers](@id docs-analyzers-list)
-# Currently, the following analyzers are implemented:
-# - [`Gradient`](@ref)
-# - [`InputTimesGradient`](@ref)
-# - [`SmoothGrad`](@ref)
-# - [`IntegratedGradients`](@ref)
-# - [`LRP`](@ref)
-#   - Rules
-#     - [`ZeroRule`](@ref)
-#     - [`EpsilonRule`](@ref)
-#     - [`GammaRule`](@ref)
-#     - [`GeneralizedGammaRule`](@ref)
-#     - [`WSquareRule`](@ref)
-#     - [`FlatRule`](@ref)
-#     - [`ZBoxRule`](@ref)
-#     - [`ZPlusRule`](@ref)
-#     - [`AlphaBetaRule`](@ref)
-#     - [`PassRule`](@ref)
-#   - [`Composite`](@ref)
-#     - [`EpsilonGammaBox`](@ref)
-#     - [`EpsilonPlus`](@ref)
-#     - [`EpsilonPlusFlat`](@ref)
-#     - [`EpsilonAlpha2Beta1`](@ref)
-#     - [`EpsilonAlpha2Beta1Flat`](@ref)

 # ## Neuron selection
 # By passing an additional index to our call to [`analyze`](@ref),
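A hedged sketch of such a call, assuming `analyze` accepts the output index as a positional argument; the index `5` is purely illustrative:

```julia
# explain output neuron 5 instead of the maximally activated one
expl = analyze(input, analyzer, 5)
heatmap(expl)
```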
@@ -135,36 +104,3 @@ heatmap(expl)
 # For more information on heatmapping batches,
 # refer to the [heatmapping documentation](@ref docs-heatmapping-batches).
-
-# ## [GPU support](@id gpu-docs)
-# All analyzers support GPU backends,
-# building on top of [Flux.jl's GPU support](https://fluxml.ai/Flux.jl/stable/gpu/).
-# Using a GPU only requires moving the input array and model weights to the GPU.
-#
-# For example, using [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl):
-
-# ```julia
-# using CUDA, cuDNN
-# using Flux
-# using ExplainableAI
-#
-# # move input array and model weights to GPU
-# input = input |> gpu # or gpu(input)
-# model = model |> gpu # or gpu(model)
-#
-# # analyzers don't require calling `gpu`
-# analyzer = LRP(model)
-#
-# # explanations are computed on the GPU
-# expl = analyze(input, analyzer)
-# ```
-
-# Some operations, like saving, require moving explanations back to the CPU.
-# This can be done using Flux's `cpu` function:
-
-# ```julia
-# val = expl.val |> cpu # or cpu(expl.val)
-#
-# using BSON
-# BSON.@save "explanation.bson" val
-# ```