Commit d61d115

Update README
1 parent 5ce2f95 commit d61d115

File tree

1 file changed: +20 −10 lines changed


README.md

Lines changed: 20 additions & 10 deletions
@@ -22,7 +22,7 @@ julia> ]add ExplainableAI
 ```
 
 ## Example
-Let's explain why an image of a castle gets classified as such by a vision model:
+Let's explain why an image of a castle is classified as such by a vision model:
 
 ![][castle]
 
@@ -44,26 +44,36 @@ heatmap(expl)
 heatmap(input, analyzer)
 ```
 
-We can also get an explanation for the activation of the output neuron
-corresponding to the "street sign" class by specifying the corresponding output neuron position `920`:
+By default, explanations are computed for the class with the highest activation.
+We can also compute explanations for a specific class, e.g. the one at output index 5:
 
 ```julia
-analyze(input, analyzer, 920) # for explanation
-heatmap(input, analyzer, 920) # for heatmap
+analyze(input, analyzer, 5) # for explanation
+heatmap(input, analyzer, 5) # for heatmap
 ```
 
-Heatmaps for all implemented analyzers are shown in the following table.
-Red color indicate regions of positive relevance towards the selected class,
-whereas regions in blue are of negative relevance.
-
 | **Analyzer**          | **Heatmap for class "castle"** | **Heatmap for class "street sign"** |
 |:----------------------|:------------------------------:|:-----------------------------------:|
 | `InputTimesGradient`  | ![][castle-ixg]                | ![][streetsign-ixg]                 |
 | `Gradient`            | ![][castle-grad]               | ![][streetsign-grad]                |
 | `SmoothGrad`          | ![][castle-smoothgrad]         | ![][streetsign-smoothgrad]          |
 | `IntegratedGradients` | ![][castle-intgrad]            | ![][streetsign-intgrad]             |
 
-The code used to generate these heatmaps can be found [here][asset-code].
+> [!TIP]
+> The heatmaps shown above were created using a VGG-16 vision model
+> from [Metalhead.jl](https://github.com/FluxML/Metalhead.jl)
+> that was pre-trained on the [ImageNet](http://www.image-net.org/) dataset.
+>
+> Since ExplainableAI.jl can be used outside of Deep Learning models and [Flux.jl](https://github.com/FluxML/Flux.jl),
+> we have omitted specific models and inputs from the code snippet above.
+> The full code used to generate the heatmaps can be found [here][asset-code].
+
+Depending on the method, the applied heatmapping defaults differ:
+sensitivity-based methods (e.g. `Gradient`) default to a grayscale color scheme,
+whereas attribution-based methods (e.g. `InputTimesGradient`) default to a red-white-blue color scheme.
+Red color indicates regions of positive relevance towards the selected class,
+whereas regions in blue are of negative relevance.
+More information on heatmapping presets can be found in the [Julia-XAI documentation](https://julia-xai.github.io/XAIDocs/XAIDocs/dev/generated/heatmapping/).
 
 > [!WARNING]
 > ExplainableAI.jl used to contain Layer-wise Relevance Propagation (LRP).
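For readers skimming this diff, the two simplest analyzers named in the table above can be sketched conceptually. The following is a minimal, language-agnostic illustration in Python/NumPy of what a sensitivity method (`Gradient`) and an attribution method (`InputTimesGradient`) compute; the toy linear model, the weights, and the function names are illustrative assumptions, not ExplainableAI.jl code or its API.

```python
import numpy as np

# Toy stand-in for a classifier: logits = W @ x.
# (Illustrative assumption; a real model would be a deep network.)
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # 3 classes, 4 input features
x = rng.normal(size=4)        # one input sample

def gradient_attribution(W, x, class_index):
    """Sensitivity map: gradient of the selected logit w.r.t. the input.
    For a linear model this is simply the corresponding weight row."""
    return W[class_index]

def input_times_gradient(W, x, class_index):
    """Attribution map: elementwise product of input and gradient."""
    return x * gradient_attribution(W, x, class_index)

# As in the README text, the default is to explain the class with
# the highest activation:
top_class = int(np.argmax(W @ x))
expl = input_times_gradient(W, x, top_class)

# For a bias-free linear model, InputTimesGradient attributions
# sum to the selected logit:
assert np.isclose(expl.sum(), (W @ x)[top_class])
```

A heatmapping step (as in `heatmap(input, analyzer)`) would then map these per-feature scores to colors, e.g. red for positive and blue for negative relevance for attribution methods, as described in the diff.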
