Commit 438ee97

Update references in LRP-rule docstrings
1 parent: ac52d64

1 file changed: +17 −15 lines changed

src/lrp/rules.jl

@@ -1,6 +1,12 @@
 # https://adrhill.github.io/ExplainableAI.jl/stable/generated/advanced_lrp/#How-it-works-internally
 abstract type AbstractLRPRule end

+# Bibliography
+const REF_BACH_LRP = "S. Bach et al., *On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation*"
+const REF_LAPUSCHKIN_CLEVER_HANS = "S. Lapuschkin et al., *Unmasking Clever Hans predictors and assessing what machines really learn*"
+const REF_MONTAVON_DTD = "G. Montavon et al., *Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition*"
+const REF_MONTAVON_OVERVIEW = "G. Montavon et al., *Layer-Wise Relevance Propagation: An Overview*"
+
 # Generic LRP rule. Since it uses autodiff, it is used as a fallback for layer types
 # without custom implementations.
 function lrp!(Rₖ, rule::R, layer::L, aₖ, Rₖ₊₁) where {R<:AbstractLRPRule,L}
@@ -106,11 +112,10 @@ end
 """
     ZeroRule()

-LRP-0 rule. Commonly used on upper layers.
+LRP-``0`` rule. Commonly used on upper layers.

 # References
-[1]: S. Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by
-Layer-Wise Relevance Propagation
+- $REF_BACH_LRP
 """
 struct ZeroRule <: AbstractLRPRule end
 check_compat(::ZeroRule, layer) = nothing
@@ -127,8 +132,7 @@ LRP-``ϵ`` rule. Commonly used on middle layers.
 - `ϵ`: Optional stabilization parameter, defaults to `1f-6`.

 # References
-[1]: S. Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by
-Layer-Wise Relevance Propagation
+- $REF_BACH_LRP
 """
 struct EpsilonRule{T} <: AbstractLRPRule
     ϵ::T
@@ -149,7 +153,7 @@ LRP-``γ`` rule. Commonly used on lower layers.
 - `γ`: Optional multiplier for added positive weights, defaults to `0.25`.

 # References
-[1]: G. Montavon et al., Layer-Wise Relevance Propagation: An Overview
+- $REF_MONTAVON_OVERVIEW
 """
 struct GammaRule{T} <: AbstractLRPRule
     γ::T
@@ -167,7 +171,7 @@ end
 LRP-``W^2`` rule. Commonly used on the first layer when values are unbounded.

 # References
-[1]: G. Montavon et al., Explaining nonlinear classification decisions with deep Taylor decomposition
+- $REF_MONTAVON_DTD
 """
 struct WSquareRule <: AbstractLRPRule end
 modify_param!(::WSquareRule, p) = p .^= 2
@@ -179,7 +183,7 @@ modify_input(::WSquareRule, input) = ones_like(input)
 LRP-Flat rule. Similar to the [`WSquareRule`](@ref), but with all parameters set to one.

 # References
-[1]: S. Lapuschkin et al., Unmasking Clever Hans predictors and assessing what machines really learn
+- $REF_LAPUSCHKIN_CLEVER_HANS
 """
 struct FlatRule <: AbstractLRPRule end
 modify_param!(::FlatRule, p) = fill!(p, 1)
@@ -207,12 +211,12 @@ check_compat(::PassRule, layer) = nothing

 LRP-``z^{\\mathcal{B}}``-rule. Commonly used on the first layer for pixel input.

-The parameters `low` and `high` should be set to the lower and upper bounds of the input features,
-e.g. `0.0` and `1.0` for raw image data.
+The parameters `low` and `high` should be set to the lower and upper bounds
+of the input features, e.g. `0.0` and `1.0` for raw image data.
 It is also possible to provide two arrays of that match the input size.

 # References
-[1]: G. Montavon et al., Explaining nonlinear classification decisions with deep Taylor decomposition
+- $REF_MONTAVON_OVERVIEW
 """
 struct ZBoxRule{T} <: AbstractLRPRule
     low::T
@@ -264,10 +268,8 @@ Commonly used on lower layers.
 - `beta`: Multiplier for the negative output term, defaults to `1.0`.

 # References
-[1]: S. Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by
-Layer-Wise Relevance Propagation
-
-[2]: G. Montavon et al., Layer-Wise Relevance Propagation: An Overview
+- $REF_BACH_LRP
+- $REF_MONTAVON_OVERVIEW
 """
 struct AlphaBetaRule{T} <: AbstractLRPRule
     α::T

0 commit comments
