@@ -123,7 +123,7 @@ get_layer_resetter(::ZeroRule, layer) = Returns(nothing)
 
 LRP-``ϵ`` rule. Commonly used on middle layers.
 
-Arguments:
+# Arguments:
 - `ϵ`: Optional stabilization parameter, defaults to `1f-6`.
 
 # References
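Outside the diff itself, the ε-rule's stabilization is easy to see in isolation. Below is a minimal sketch of one backward step through a dense layer, following the standard formulation from the cited references; `lrp_epsilon` is a hypothetical name for illustration, not this package's API:

```julia
# Sketch of one LRP-ϵ backward step through a dense layer.
# W, b: layer parameters; a: input activations; R: relevance from the layer above.
function lrp_epsilon(W, b, a, R; eps=1f-6)
    z = W * a .+ b                                 # forward pre-activations
    zs = z .+ eps .* ifelse.(z .>= 0, 1f0, -1f0)   # ϵ-stabilized denominator
    s = R ./ zs
    return a .* (W' * s)                           # relevance redistributed to inputs
end

W = randn(Float32, 3, 4); b = zeros(Float32, 3)
a = rand(Float32, 4); R = rand(Float32, 3)
R_in = lrp_epsilon(W, b, a, R)
```

The ϵ term keeps the division away from zero where pre-activations are weak; aside from bias absorption, the incoming relevance is approximately conserved.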
@@ -145,7 +145,7 @@ get_layer_resetter(::EpsilonRule, layer) = Returns(nothing)
 
 LRP-``γ`` rule. Commonly used on lower layers.
 
-Arguments:
+# Arguments:
 - `γ`: Optional multiplier for added positive weights, defaults to `0.25`.
 
 # References
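The γ-rule differs from the ε-rule only in that positive weights are amplified by a factor of `1 + γ` before the same stabilized division. A hedged standalone sketch under the same assumptions as above (`lrp_gamma` is again a hypothetical name):

```julia
# Sketch of one LRP-γ backward step: positive weight and bias entries are
# boosted by (1 + γ) before the usual stabilized relevance division.
function lrp_gamma(W, b, a, R; gamma=0.25f0, eps=1f-6)
    Wg = W .+ gamma .* max.(W, 0)                  # amplify positive weights
    bg = b .+ gamma .* max.(b, 0)
    z = Wg * a .+ bg
    s = R ./ (z .+ eps .* ifelse.(z .>= 0, 1f0, -1f0))
    return a .* (Wg' * s)
end
```

Larger `γ` tilts the explanation toward positive contributions, which is why the rule is typically applied to lower layers.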
@@ -211,7 +211,7 @@ The parameters `low` and `high` should be set to the lower and upper bounds of t
 e.g. `0.0` and `1.0` for raw image data.
 It is also possible to provide two arrays that match the input size.
 
-## References
+# References
 [1]: G. Montavon et al., Explaining nonlinear classification decisions with deep Taylor decomposition
 """
 struct ZBoxRule{T} <: AbstractLRPRule
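For context, the z^B (box) rule treats the input layer specially by using the bounds `low` and `high` of the input domain. A rough standalone sketch of one step for a dense first layer, assuming the deep-Taylor formulation from reference [1]; bias handling varies between implementations, and `lrp_zbox` is a hypothetical name:

```julia
# Sketch of one LRP-z^B step for an input layer constrained to [low, high].
# Wp/Wn are the positive/negative parts of W; l and h are the box bounds.
function lrp_zbox(W, b, a, R; low=0f0, high=1f0, eps=1f-6)
    Wp, Wn = max.(W, 0), min.(W, 0)
    l = fill(low, size(a)); h = fill(high, size(a))
    z = W * a .- Wp * l .- Wn * h .+ b
    s = R ./ (z .+ eps .* ifelse.(z .>= 0, 1f0, -1f0))
    return a .* (W' * s) .- l .* (Wp' * s) .- h .* (Wn' * s)
end
```

Subtracting the `l` and `h` terms anchors the decomposition to the nearest admissible root point inside the box, rather than to zero input.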
@@ -255,17 +255,18 @@ end
     AlphaBetaRule(alpha, beta)
     AlphaBetaRule([alpha=2.0], [beta=1.0])
 
-LRP-``\alpha\beta`` rule. Weights positive and negative contributions according to the
+LRP-``\\alpha\\beta`` rule. Weights positive and negative contributions according to the
 parameters `alpha` and `beta` respectively. The difference `alpha - beta` must equal one.
 Commonly used on lower layers.
 
-Arguments:
+# Arguments:
 - `alpha`: Multiplier for the positive output term, defaults to `2.0`.
 - `beta`: Multiplier for the negative output term, defaults to `1.0`.
 
 # References
 [1]: S. Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by
 Layer-Wise Relevance Propagation
+
 [2]: G. Montavon et al., Layer-Wise Relevance Propagation: An Overview
 """
 struct AlphaBetaRule{T} <: AbstractLRPRule
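To make the α/β weighting concrete: the rule normalizes positive and negative pre-activation contributions separately, then recombines them. A hedged sketch assuming non-negative input activations (as after a ReLU); `lrp_alphabeta` is a hypothetical name, not the package's implementation:

```julia
# Sketch of one LRP-αβ step for a dense layer with non-negative inputs a.
# Positive and negative contributions are normalized separately, then
# recombined with weights alpha and beta.
function lrp_alphabeta(W, b, a, R; alpha=2f0, beta=1f0, eps=1f-9)
    Wp, Wn = max.(W, 0), min.(W, 0)
    zp = Wp * a .+ max.(b, 0)    # positive part of the pre-activation
    zn = Wn * a .+ min.(b, 0)    # negative part (entries are <= 0)
    sp = R ./ (zp .+ eps)        # zp >= 0, so add eps to stabilize
    sn = R ./ (zn .- eps)        # zn <= 0, so subtract eps
    return a .* (alpha .* (Wp' * sp) .- beta .* (Wn' * sn))
end
```

Each normalized term redistributes the full incoming relevance, so with `alpha - beta == 1` the layer conserves relevance up to the bias terms; this is why the docstring requires the difference to equal one.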