Explainable AI in Julia using [Flux.jl](https://fluxml.ai).
This package implements interpretability methods and visualizations for neural networks, similar to [Captum](https://github.com/pytorch/captum) for PyTorch and [iNNvestigate](https://github.com/albermax/innvestigate) for Keras models.
## Installation
To install this package and its dependencies, open the Julia REPL and run
```julia-repl
julia> ]add ExplainableAI
```
⚠️ This package is still in early development; expect breaking changes. ⚠️
## Example
Let's use LRP to explain why an image of a cat gets classified as a cat:
```julia
using ExplainableAI
using Flux
using Metalhead
# Load model
vgg = VGG19()
model = strip_softmax(vgg.layers)
# Run XAI method
analyzer = LRP(model)
expl = analyze(img, analyzer)
# Show heatmap
heatmap(expl)
```
![][heatmap]
## Methods
Currently, the following analyzers are implemented:
```
...
└── LRPGamma
```
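Each analyzer in the list above follows the same two-step pattern as the example: construct it from the model, then call `analyze`. As an illustrative sketch (assuming `model` and `img` are defined as in the example above, and that `LRPGamma` from the method tree is constructed the same way as `LRP`):

```julia
# Sketch: swapping in another analyzer from the list above.
# Assumes LRPGamma shares the constructor/analyze interface
# shown in the example; consult the docs for the exact API.
analyzer = LRPGamma(model)
expl = analyze(img, analyzer)
heatmap(expl)
```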
One of the design goals of ExplainableAI.jl is extensibility.
Individual LRP rules like `ZeroRule`, `EpsilonRule`, `GammaRule` and `ZBoxRule` [can be composed][docs-composites] and are easily extended by [custom rules][docs-custom-rules].
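Such a composition could look like the following sketch. This is an illustrative assumption rather than the package's confirmed API: it assumes `LRP` accepts a vector of rules, one per layer, and that `model` is the softmax-stripped VGG19 chain from the example above; see the linked documentation on composites for the actual interface.

```julia
# Hypothetical sketch of per-layer rule composition (assumed API):
# assign one rule to each of the model's layers.
rules = [
    ZBoxRule();                               # input layer: ZBox rule
    repeat([GammaRule()], 9);                 # lower layers
    repeat([EpsilonRule()], 5);               # middle layers
    repeat([ZeroRule()], length(model) - 15)  # remaining layers
]
analyzer = LRP(model, rules)
expl = analyze(img, analyzer)
```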
Support for further methods is planned, e.g.:
- Shapley values via [ShapML.jl](https://github.com/nredell/ShapML.jl)
Contributions are welcome!
## Acknowledgements
> Adrian Hill acknowledges support by the Federal Ministry of Education and Research (BMBF) for the Berlin Institute for the Foundations of Learning and Data (BIFOLD) (01IS18037A).