
Commit c23ed66

Docs: describe result of canonization
1 parent 79f3a13

File tree

1 file changed: +5 -1 lines changed


docs/src/literate/lrp/basics.jl

Lines changed: 5 additions & 1 deletion
@@ -51,7 +51,11 @@ model = strip_softmax(model)
 # Applying the [`GammaRule`](@ref) to two linear layers in a row will yield different results
 # than first fusing the two layers into one linear layer and then applying the rule.
 # This fusing is called "canonization" and can be done using the [`canonize`](@ref) function:
-model = canonize(model)
+model_canonized = canonize(model)
+
+# After canonization, the first `BatchNorm` layer has been fused into the preceding `Conv` layer.
+# The second `BatchNorm` layer wasn't fused
+# since its preceding `Conv` layer has a ReLU activation function.
 
 # ### [Flattening the model](@id docs-lrp-flatten-model)
 # ExplainableAI.jl's LRP implementation supports nested Flux Chains and Parallel layers.
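For context, the behaviour described by the comments added in this commit can be sketched with a toy Flux model; the layer sizes below are hypothetical and not taken from the docs file, only the use of `canonize` follows the diff:

using Flux, ExplainableAI

# Hypothetical toy model: the first Conv has no activation, the second uses ReLU.
model = Chain(
    Conv((3, 3), 3 => 8),         # identity activation: the following BatchNorm can be fused
    BatchNorm(8),
    Conv((3, 3), 8 => 16, relu),  # ReLU activation: the following BatchNorm cannot be fused
    BatchNorm(16),
)

model_canonized = canonize(model)
# Per the comments added here: the first BatchNorm is folded into the preceding Conv,
# while the second BatchNorm stays in the Chain because its Conv has a ReLU activation.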
