LRP-``0`` rule. Commonly used on upper layers.

+ # Definition
+ Propagates relevance ``R^{k+1}`` at layer output to ``R^k`` at layer input according to
+ ```math
+ R_j^k = \\sum_i \\frac{w_{ij}a_j^k}{\\sum_l w_{il}a_l^k+b_i} R_i^{k+1}
+ ```
+
# References
- $REF_BACH_LRP
"""
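As a sanity check of the propagation formula above, here is a NumPy sketch (toy weights, activations, and relevances are made up; this is not this package's Julia implementation). The bias is set to zero so that relevance is conserved exactly:

```python
import numpy as np

# Toy dense layer: 2 output neurons, 3 input neurons.
W = np.array([[1.0, -2.0, 3.0],
              [0.5,  1.0, -1.0]])  # W[i, j] = w_ij
b = np.zeros(2)                    # zero bias: total relevance is conserved
a = np.array([1.0, 2.0, 0.5])      # input activations a^k
R_out = np.array([0.6, 0.4])       # output relevance R^{k+1}

z = W @ a + b                      # z_i = sum_l w_il a_l + b_i
R_in = (W * a).T @ (R_out / z)     # R_j^k = sum_i (w_ij a_j^k / z_i) R_i^{k+1}
print(R_in)
```

With zero bias, `R_in.sum()` equals `R_out.sum()`, illustrating the conservation property of LRP.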
@@ -129,7 +135,13 @@ get_layer_resetter(::ZeroRule, layer) = Returns(nothing)
LRP-``ϵ`` rule. Commonly used on middle layers.

- # Arguments:
+ # Definition
+ Propagates relevance ``R^{k+1}`` at layer output to ``R^k`` at layer input according to
+ ```math
+ R_j^k = \\sum_i\\frac{w_{ij}a_j^k}{\\epsilon+\\sum_l w_{il}a_l^k+b_i} R_i^{k+1}
+ ```
+
+ # Optional arguments
- `ϵ`: Optional stabilization parameter, defaults to `1f-6`.

# References
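A NumPy sketch of the stabilized formula above (toy numbers chosen so the denominators are positive; not this package's Julia implementation):

```python
import numpy as np

W = np.array([[1.0, 2.0, 3.0],
              [0.5, 1.0, 1.0]])
b = np.array([0.1, -0.2])
a = np.array([1.0, 2.0, 0.5])
R_out = np.array([0.6, 0.4])

def lrp_eps(eps):
    z = eps + W @ a + b            # denominator with stabilizer, as in the formula
    return (W * a).T @ (R_out / z)

R_small = lrp_eps(1e-6)
R_large = lrp_eps(1.0)
# With positive denominators, a larger ε damps the propagated relevance.
print(R_small, R_large)
```

The stabilizer keeps the denominator away from zero; as ``ϵ → 0`` the rule reduces to LRP-``0``.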
@@ -150,7 +162,14 @@ get_layer_resetter(::EpsilonRule, layer) = Returns(nothing)
LRP-``γ`` rule. Commonly used on lower layers.

- # Arguments:
+ # Definition
+ Propagates relevance ``R^{k+1}`` at layer output to ``R^k`` at layer input according to
+ ```math
+ R_j^k = \\sum_i\\frac{(w_{ij}+\\gamma w_{ij}^+)a_j^k}
+ {\\sum_l(w_{il}+\\gamma w_{il}^+)a_l^k+b_i} R_i^{k+1}
+ ```
+
+ # Optional arguments
- `γ`: Optional multiplier for added positive weights, defaults to `0.25`.

# References
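The weight modification ``w_{ij}+\gamma w_{ij}^+`` can be sketched in NumPy as follows (toy numbers; not this package's Julia implementation):

```python
import numpy as np

W = np.array([[1.0, -2.0, 3.0],
              [0.5,  1.0, -1.0]])
b = np.array([0.1, -0.2])
a = np.array([1.0, 2.0, 0.5])
R_out = np.array([0.6, 0.4])

def lrp_gamma(gamma):
    Wg = W + gamma * np.clip(W, 0, None)  # w_ij + γ w_ij^+
    z = Wg @ a + b
    return (Wg * a).T @ (R_out / z)

print(lrp_gamma(0.25))
```

Setting ``γ = 0`` recovers the LRP-``0`` propagation, which makes the rule easy to check.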
"""
WSquareRule()

- LRP-``W^2`` rule. Commonly used on the first layer when values are unbounded.
+ LRP-``w²`` rule. Commonly used on the first layer when values are unbounded.
+
+ # Definition
+ Propagates relevance ``R^{k+1}`` at layer output to ``R^k`` at layer input according to
+ ```math
+ R_j^k = \\sum_i\\frac{w_{ij}^2}{\\sum_l w_{il}^2+b_i^2} R_i^{k+1}
+ ```

# References
- $REF_MONTAVON_DTD
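Note that the ``w²`` rule ignores the input activations entirely; only the squared weights matter. A NumPy sketch with toy numbers (not this package's Julia implementation), using zero bias so relevance is conserved:

```python
import numpy as np

W = np.array([[1.0, -2.0, 3.0],
              [0.5,  1.0, -1.0]])
b = np.zeros(2)                    # zero bias: total relevance is conserved
R_out = np.array([0.6, 0.4])

W2 = W ** 2                        # squared weights; activations do not appear
z = W2.sum(axis=1) + b ** 2        # sum_l w_il^2 + b_i^2
R_in = W2.T @ (R_out / z)
print(R_in)                        # all entries positive
```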
@@ -184,6 +209,13 @@ modify_input(::WSquareRule, input) = ones_like(input)
LRP-Flat rule. Similar to the [`WSquareRule`](@ref), but with all weights set to one
and all bias terms set to zero.

+ # Definition
+ Propagates relevance ``R^{k+1}`` at layer output to ``R^k`` at layer input according to
+ ```math
+ R_j^k = \\sum_i\\frac{1}{\\sum_l 1} R_i^{k+1} = \\frac{1}{n}\\sum_i R_i^{k+1}
+ ```
+ where ``n`` is the number of input neurons connected to the output neuron at index ``i``.
+

# References
- $REF_LAPUSCHKIN_CLEVER_HANS
"""
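For a dense layer every input neuron connects to every output neuron, so ``n`` is simply the input size and the rule spreads the total relevance uniformly. A minimal NumPy sketch (toy numbers; not this package's Julia implementation):

```python
import numpy as np

R_out = np.array([0.6, 0.4])        # output relevance R^{k+1}
n = 3                               # inputs, each connected to every output (dense layer)
R_in = np.full(n, R_out.sum() / n)  # R_j^k = (1/n) * sum_i R_i^{k+1}
print(R_in)                         # uniform relevance, total conserved
```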
@@ -196,7 +228,14 @@ modify_input(::FlatRule, input) = ones_like(input)
PassRule()

Pass-through rule. Passes relevance through to the lower layer.
- Supports reshaping layers.
+
+ Supports layers with constant input and output shapes, e.g. reshaping layers.
+
+ # Definition
+ Propagates relevance ``R^{k+1}`` at layer output to ``R^k`` at layer input according to
+ ```math
+ R_j^k = R_j^{k+1}
+ ```
"""
struct PassRule <: AbstractLRPRule end
function lrp!(Rₖ, ::PassRule, layer, aₖ, Rₖ₊₁)
@@ -212,12 +251,19 @@ check_compat(::PassRule, layer) = nothing
"""
ZBoxRule(low, high)

- LRP-``z^{\\mathcal{B}}``-rule. Commonly used on the first layer for pixel input.
+ LRP-``zᴮ``-rule. Commonly used on the first layer for pixel input.

The parameters `low` and `high` should be set to the lower and upper bounds
of the input features, e.g. `0.0` and `1.0` for raw image data.
It is also possible to provide two arrays that match the input size.

+ # Definition
+ Propagates relevance ``R^{k+1}`` at layer output to ``R^k`` at layer input according to
+ ```math
+ R_j^k=\\sum_i \\frac{w_{ij}a_j^k - w_{ij}^{+}l_j - w_{ij}^{-}h_j}
+ {\\sum_l w_{il}a_l^k+b_i - \\left(w_{il}^{+}l_l+b_i^{+}\\right) - \\left(w_{il}^{-}h_l+b_i^{-}\\right)} R_i^{k+1}
+ ```
+

# References
- $REF_MONTAVON_OVERVIEW
"""
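The ``zᴮ`` formula above can be sketched in NumPy (toy numbers with pixel-like bounds ``l_j = 0``, ``h_j = 1``; not this package's Julia implementation). Since ``b_i - b_i^+ - b_i^- = 0``, the denominator equals the row sum of the numerator terms, so relevance is conserved exactly:

```python
import numpy as np

W = np.array([[1.0, -2.0, 3.0],
              [0.5,  1.0, -1.0]])
b = np.array([0.1, -0.2])
a = np.array([0.8, 0.2, 0.5])        # pixel-like input in [0, 1]
low, high = np.zeros(3), np.ones(3)  # bounds l_j = 0, h_j = 1
R_out = np.array([0.6, 0.4])

Wp, Wm = np.clip(W, 0, None), np.clip(W, None, 0)   # w^+ and w^-
bp, bm = np.clip(b, 0, None), np.clip(b, None, 0)   # b^+ and b^-
N = W * a - Wp * low - Wm * high     # numerator terms N[i, j]
z = W @ a + b - (Wp @ low + bp) - (Wm @ high + bm)  # denominator z_i
R_in = N.T @ (R_out / z)
print(R_in)
```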
@@ -264,16 +310,24 @@ function zbox_input(in::AbstractArray{T}, A::AbstractArray) where {T}
end

"""
- AlphaBetaRule(alpha, beta)
- AlphaBetaRule([alpha=2.0], [beta=1.0])
+ AlphaBetaRule([α=2.0], [β=1.0])

- LRP-``\\alpha\\beta`` rule. Weights positive and negative contributions according to the
- parameters `alpha` and `beta` respectively. The difference `alpha - beta` must be equal one.
+ LRP-``αβ`` rule. Weights positive and negative contributions according to the
+ parameters `α` and `β` respectively. The difference `α-β` must be equal to one.

Commonly used on lower layers.

- # Arguments:
- - `alpha`: Multiplier for the positive output term, defaults to `2.0`.
- - `beta`: Multiplier for the negative output term, defaults to `1.0`.
+ # Definition
+ Propagates relevance ``R^{k+1}`` at layer output to ``R^k`` at layer input according to
+ ```math
+ R_j^k = \\sum_i\\left(
+ \\alpha\\frac{\\left(w_{ij}a_j^k\\right)^+}{\\sum_l\\left(w_{il}a_l^k+b_i\\right)^+}
+ -\\beta\\frac{\\left(w_{ij}a_j^k\\right)^-}{\\sum_l\\left(w_{il}a_l^k+b_i\\right)^-}
+ \\right) R_i^{k+1}
+ ```
+
+ # Optional arguments
+ - `α`: Multiplier for the positive output term, defaults to `2.0`.
+ - `β`: Multiplier for the negative output term, defaults to `1.0`.

# References
- $REF_BACH_LRP
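A NumPy sketch of the ``αβ`` split (toy numbers; not this package's Julia implementation). The bias is omitted here so that the positive and negative parts each normalize to one, making the constraint ``α-β = 1`` equivalent to exact relevance conservation:

```python
import numpy as np

W = np.array([[1.0, -2.0, 3.0],
              [0.5,  1.0, -1.0]])
a = np.array([1.0, 2.0, 0.5])
R_out = np.array([0.6, 0.4])
alpha, beta = 2.0, 1.0              # α - β must equal one

Z = W * a                           # contributions z_ij = w_ij a_j^k
Zp, Zm = np.clip(Z, 0, None), np.clip(Z, None, 0)   # positive / negative parts
R_in = (alpha * Zp.T @ (R_out / Zp.sum(axis=1))
        - beta * Zm.T @ (R_out / Zm.sum(axis=1)))
print(R_in)
```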
@@ -331,14 +385,20 @@ end
"""
ZPlusRule()

- LRP-``z^{+}`` rule. Commonly used on lower layers.
+ LRP-``z⁺`` rule. Commonly used on lower layers.

Equivalent to `AlphaBetaRule(1.0f0, 0.0f0)`, but slightly faster.
See also [`AlphaBetaRule`](@ref).

+ # Definition
+ Propagates relevance ``R^{k+1}`` at layer output to ``R^k`` at layer input according to
+ ```math
+ R_j^k = \\sum_i\\frac{\\left(w_{ij}a_j^k\\right)^+}{\\sum_l\\left(w_{il}a_l^k+b_i\\right)^+} R_i^{k+1}
+ ```
+

# References
- - [1] $REF_BACH_LRP
- - [2] $REF_MONTAVON_DTD
+ - $REF_BACH_LRP
+ - $REF_MONTAVON_DTD
"""
struct ZPlusRule <: AbstractLRPRule end
function lrp!(Rₖ, rule::ZPlusRule, layer::L, aₖ, Rₖ₊₁) where {L}
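Since the ``z⁺`` rule keeps only positive contributions, the propagated relevance is nonnegative. A NumPy sketch (toy numbers, zero bias; not this package's Julia implementation):

```python
import numpy as np

W = np.array([[1.0, -2.0, 3.0],
              [0.5,  1.0, -1.0]])
a = np.array([1.0, 2.0, 0.5])
R_out = np.array([0.6, 0.4])

Zp = np.clip(W * a, 0, None)             # keep only positive contributions (w_ij a_j)^+
R_in = Zp.T @ (R_out / Zp.sum(axis=1))   # bias assumed zero
print(R_in)                              # nonnegative, total conserved
```

With zero bias this matches the `AlphaBetaRule` sketch evaluated at ``α=1``, ``β=0``.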