# and can be implemented via automatic differentiation (AD).
#
# This equation is implemented in ExplainabilityMethods as the default method
# for all layer types that don't have a specialized implementation.
# We will refer to it as the "AD fallback".
#
# [^1]: G. Montavon et al., [Layer-Wise Relevance Propagation: An Overview](https://link.springer.com/chapter/10.1007/978-3-030-28954-6_10)
# [^2]: W. Samek et al., [Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications](https://ieeexplore.ieee.org/document/9369420)

# ### AD fallback
# The default LRP fallback for unknown layers uses AD via [Zygote](https://github.com/FluxML/Zygote.jl).
# For `lrp!`, we end up with something that looks very similar to the previous four-step computation:
# ```julia
# function lrp!(rule, layer, Rₖ, aₖ, Rₖ₊₁)
#     layerᵨ = modify_layer(rule, layer)
#     c = gradient(aₖ) do a
#         z = layerᵨ(a)
#         s = Zygote.@ignore Rₖ₊₁ ./ modify_denominator(rule, z)
#         z ⋅ s
#     end |> only
#     Rₖ .= aₖ .* c
# end
# ```
#
# You can see how `modify_layer` and `modify_denominator` dispatch on the rule and layer type.
# This is how we implemented our own `MyGammaRule`.
# Unknown layers that are registered in the `LRP_CONFIG` use this exact function.
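#
# To get a feel for what this fallback computes, here is a small self-contained sketch
# (not part of ExplainabilityMethods) that applies the same Zygote pattern to a single
# `Dense` layer. The toy layer, the random inputs and the explicit stabilizer `ε` are made
# up for illustration and stand in for `modify_layer` and `modify_denominator`:
# ```julia
# using Flux, Zygote, LinearAlgebra
#
# layer = Dense(3, 2)            # toy layer, standing in for the modified layer layerᵨ
# aₖ    = rand(Float32, 3)       # input activations
# Rₖ₊₁  = rand(Float32, 2)       # relevance attributed to the layer's output
# ε     = 1.0f-9                 # stabilizer, standing in for modify_denominator
#
# c = gradient(aₖ) do a
#     z = layer(a)
#     s = Zygote.@ignore Rₖ₊₁ ./ (z .+ ε)   # treated as a constant by Zygote
#     z ⋅ s
# end |> only
# Rₖ = aₖ .* c                   # relevance attributed to the layer's input
# ```
# Because `s` is computed inside `Zygote.@ignore`, it is treated as a constant, so the
# gradient `c` plays the role of the backward pass in the four-step computation and the
# last line is the final elementwise product with `aₖ`.
#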
# ### Specialized implementations
# We can also implement specialized versions of `lrp!` based on the type of `layer`,
# e.g. reshaping layers.
#
# Reshaping layers don't affect attributions. We can therefore avoid the computational
# overhead of AD by writing a specialized implementation that simply reshapes back:
# ```julia
# function lrp!(::AbstractLRPRule, ::ReshapingLayer, Rₖ, aₖ, Rₖ₊₁)
#     Rₖ .= reshape(Rₖ₊₁, size(aₖ))
# end
# ```
#
# Since the rule type didn't matter in this case, we didn't specify it.
#
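# The same dispatch mechanism extends to other layers whose relevance simply passes through.
# As a hypothetical example (not taken from the package), a `Flux.Dropout` layer acts as the
# identity at inference time, so a specialized method could copy the relevance straight through:
# ```julia
# function lrp!(::AbstractLRPRule, ::Flux.Dropout, Rₖ, aₖ, Rₖ₊₁)
#     Rₖ .= Rₖ₊₁    # Dropout is the identity at inference time, so relevance passes through
# end
# ```
#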
# We can even implement the generic rule as a specialized implementation for `Dense` layers:
# ```julia
# function lrp!(rule::AbstractLRPRule, layer::Dense, Rₖ, aₖ, Rₖ₊₁)