
Commit eac651c: Rename package to ExplainableAI (#44)

* Rename ExplainabilityMethods to ExplainableAI
* Generate new UUID using `UUIDs.uuid4()`
* Update readme

1 parent: 715c5a8

21 files changed: +80 −66 lines
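Because both the package name and its UUID change in this commit, downstream projects have to swap the dependency rather than simply bump a version. A minimal sketch of that migration in the Pkg REPL, assuming a hypothetical downstream project called `MyProject` (the exact steps for your project may differ):

```julia-repl
(MyProject) pkg> rm ExplainabilityMethods   # drop the old dependency (old UUID cd722a4f-…)
(MyProject) pkg> add ExplainableAI          # add the renamed package (new UUID 4f1bc3e1-…)

julia> using ExplainableAI                  # update `using`/`import` statements accordingly
```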

.github/workflows/ci.yml (2 additions, 2 deletions)

@@ -60,8 +60,8 @@ jobs:
       - run: |
           julia --project=docs -e '
             using Documenter: doctest
-            using ExplainabilityMethods
-            doctest(ExplainabilityMethods)'
+            using ExplainableAI
+            doctest(ExplainableAI)'
       - run: julia --project=docs docs/make.jl
         env:
           DATADEPS_ALWAYS_ACCEPT: true # for MLDatasets download
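The doctest step above can also be run locally before pushing. A sketch of the same call the CI step wraps in `julia -e`, assuming the `docs` environment has Documenter and the renamed package available:

```julia
# Run the package's doctests locally, mirroring the updated CI step.
using Documenter: doctest
using ExplainableAI

doctest(ExplainableAI)
```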

Project.toml (3 additions, 3 deletions)

@@ -1,7 +1,7 @@
-name = "ExplainabilityMethods"
-uuid = "cd722a4f-8d55-446b-8550-a4aabc9151ab"
+name = "ExplainableAI"
+uuid = "4f1bc3e1-d60d-4ed0-9367-9bdff9846d3b"
 authors = ["Adrian Hill"]
-version = "0.2.0"
+version = "0.3.0"

 [deps]
 ColorSchemes = "35d6a980-a343-548e-a6ea-1d62b119f2f4"
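The new `uuid` field is a random version-4 UUID; as the commit message notes, it was generated with `UUIDs.uuid4()` from Julia's standard library. A sketch of that one-liner:

```julia
# Generate a fresh v4 UUID for the renamed package (UUIDs is a Julia stdlib).
using UUIDs

new_uuid = uuid4()
println(new_uuid)  # each call returns a different random UUID;
                   # this commit used 4f1bc3e1-d60d-4ed0-9367-9bdff9846d3b
```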

README.md (30 additions, 16 deletions)

@@ -1,24 +1,28 @@
-![ExplainabilityMethods.jl][banner-img]
+![ExplainableAI.jl][banner-img]
 ___

+*Formerly known as ExplainabilityMethods.jl*
+
 | **Documentation** | **Build Status** | **DOI** |
 |:--------------------------------------------------------------------- |:----------------------------------------------------- |:----------------------- |
 | [![][docs-stab-img]][docs-stab-url] [![][docs-dev-img]][docs-dev-url] | [![][ci-img]][ci-url] [![][codecov-img]][codecov-url] | [![][doi-img]][doi-url] |

 Explainable AI in Julia using [Flux.jl](https://fluxml.ai).

+This package implements interpretability methods and visualizations for neural networks, similar to [Captum](https://github.com/pytorch/captum) for PyTorch and [iNNvestigate](https://github.com/albermax/innvestigate) for Keras models.
+
 ## Installation
 To install this package and its dependencies, open the Julia REPL and run
 ```julia-repl
-julia> ]add ExplainabilityMethods
+julia> ]add ExplainableAI
 ```

 ⚠️ This package is still in early development, expect breaking changes. ⚠️

 ## Example
 Let's use LRP to explain why an image of a cat gets classified as a cat:
 ```julia
-using ExplainabilityMethods
+using ExplainableAI
 using Flux
 using Metalhead

@@ -27,15 +31,14 @@ vgg = VGG19()
 model = strip_softmax(vgg.layers)

 # Run XAI method
-analyzer = LRPEpsilon(model)
+analyzer = LRP(model)
 expl = analyze(img, analyzer)

 # Show heatmap
 heatmap(expl)
 ```
 ![][heatmap]

-
 ## Methods
 Currently, the following analyzers are implemented:

@@ -48,29 +51,40 @@ Currently, the following analyzers are implemented:
 └── LRPGamma
 ```

-One of the design goals of ExplainabilityMethods.jl is extensibility.
+One of the design goals of ExplainableAI.jl is extensibility.
 Individual LRP rules like `ZeroRule`, `EpsilonRule`, `GammaRule` and `ZBoxRule` [can be composed][docs-composites] and are easily extended by [custom rules][docs-custom-rules].

+## Roadmap
+In the future, we would like to include:
+- [SmoothGrad](https://arxiv.org/abs/1706.03825)
+- [Integrated Gradients](https://arxiv.org/abs/1703.01365)
+- [PatternNet](https://arxiv.org/abs/1705.05598)
+- [DeepLift](https://arxiv.org/abs/1704.02685)
+- [LIME](https://arxiv.org/abs/1602.04938)
+- Shapley values via [ShapML.jl](https://github.com/nredell/ShapML.jl)
+
+Contributions are welcome!
+
 ## Acknowledgements
 > Adrian Hill acknowledges support by the Federal Ministry of Education and Research (BMBF) for the Berlin Institute for the Foundations of Learning and Data (BIFOLD) (01IS18037A).

-[banner-img]: https://raw.githubusercontent.com/adrhill/ExplainabilityMethods.jl/gh-pages/assets/banner.png
-[heatmap]: https://raw.githubusercontent.com/adrhill/ExplainabilityMethods.jl/gh-pages/assets/heatmap.png
+[banner-img]: https://raw.githubusercontent.com/adrhill/ExplainableAI.jl/gh-pages/assets/banner.png
+[heatmap]: https://raw.githubusercontent.com/adrhill/ExplainableAI.jl/gh-pages/assets/heatmap.png

 [docs-stab-img]: https://img.shields.io/badge/docs-stable-blue.svg
-[docs-stab-url]: https://adrhill.github.io/ExplainabilityMethods.jl/stable
+[docs-stab-url]: https://adrhill.github.io/ExplainableAI.jl/stable

 [docs-dev-img]: https://img.shields.io/badge/docs-main-blue.svg
-[docs-dev-url]: https://adrhill.github.io/ExplainabilityMethods.jl/dev
+[docs-dev-url]: https://adrhill.github.io/ExplainableAI.jl/dev

-[ci-img]: https://github.com/adrhill/ExplainabilityMethods.jl/workflows/CI/badge.svg
-[ci-url]: https://github.com/adrhill/ExplainabilityMethods.jl/actions
+[ci-img]: https://github.com/adrhill/ExplainableAI.jl/workflows/CI/badge.svg
+[ci-url]: https://github.com/adrhill/ExplainableAI.jl/actions

-[codecov-img]: https://codecov.io/gh/adrhill/ExplainabilityMethods.jl/branch/master/graph/badge.svg
-[codecov-url]: https://codecov.io/gh/adrhill/ExplainabilityMethods.jl
+[codecov-img]: https://codecov.io/gh/adrhill/ExplainableAI.jl/branch/master/graph/badge.svg
+[codecov-url]: https://codecov.io/gh/adrhill/ExplainableAI.jl

-[docs-composites]: https://adrhill.github.io/ExplainabilityMethods.jl/dev/generated/advanced_lrp/#Custom-LRP-composites
-[docs-custom-rules]: https://adrhill.github.io/ExplainabilityMethods.jl/dev/generated/advanced_lrp/#Custom-LRP-rules
+[docs-composites]: https://adrhill.github.io/ExplainableAI.jl/dev/generated/advanced_lrp/#Custom-LRP-composites
+[docs-custom-rules]: https://adrhill.github.io/ExplainableAI.jl/dev/generated/advanced_lrp/#Custom-LRP-rules

 [doi-img]: https://zenodo.org/badge/337430397.svg
 [doi-url]: https://zenodo.org/badge/latestdoi/337430397
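Note the constructor change in the README example: `LRPEpsilon(model)` becomes `LRP(model)`. Judging from the other files in this commit, rule-specific behaviour is now selected by passing rules to `LRP`; a hedged sketch of what an epsilon-style analyzer might look like under the new API (`EpsilonRule` appears in the README's rule overview, but the single-rule constructor signature is an assumption here):

```julia
# Sketch only: assumes `LRP(model, rule)` accepts a single rule, as it does with
# `LRP(model, GammaRule())` in docs/literate/advanced_lrp.jl from this same commit.
using ExplainableAI
using Flux
using Metalhead

vgg = VGG19()
model = strip_softmax(vgg.layers)

analyzer = LRP(model)                     # default rules, as in the updated README
analyzer_eps = LRP(model, EpsilonRule())  # epsilon-style LRP via an explicit rule
```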

benchmark/Project.toml (1 addition, 1 deletion)

@@ -1,5 +1,5 @@
 [deps]
 BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
-ExplainabilityMethods = "cd722a4f-8d55-446b-8550-a4aabc9151ab"
+ExplainableAI = "4f1bc3e1-d60d-4ed0-9367-9bdff9846d3b"
 Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
 PkgBenchmark = "32113eaa-f34f-5b0d-bd6c-c81e245fc73d"

benchmark/benchmarks.jl (2 additions, 2 deletions)

@@ -1,7 +1,7 @@
 using BenchmarkTools
 using Flux
-using ExplainabilityMethods
-import ExplainabilityMethods: modify_layer, lrp!
+using ExplainableAI
+import ExplainableAI: modify_layer, lrp!

 on_CI = haskey(ENV, "GITHUB_ACTIONS")

docs/Project.toml (1 addition, 1 deletion)

@@ -2,7 +2,7 @@
 BSON = "fbb218c0-5317-5bc6-957e-2ee96dd4b1f0"
 ColorSchemes = "35d6a980-a343-548e-a6ea-1d62b119f2f4"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
-ExplainabilityMethods = "cd722a4f-8d55-446b-8550-a4aabc9151ab"
+ExplainableAI = "4f1bc3e1-d60d-4ed0-9367-9bdff9846d3b"
 Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
 ImageCore = "a09fc81d-aa75-5fe9-8630-4744c3626534"
 Literate = "98b081ad-f1c9-55d3-8b20-4c87d4299306"

docs/literate/advanced_lrp.jl (13 additions, 13 deletions)

@@ -1,12 +1,12 @@
 # # Advanced LRP usage
-# One of the design goals of ExplainabilityMethods.jl is to combine ease of use and
+# One of the design goals of ExplainableAI.jl is to combine ease of use and
 # extensibility for the purpose of research.
 #
 #
 # This example will show you how to implement custom LRP rules and register custom layers
 # and activation functions.
 # For this purpose, we will quickly load our model from the previous section:
-using ExplainabilityMethods
+using ExplainableAI
 using Flux
 using MLDatasets
 using ImageCore
@@ -36,7 +36,7 @@ rules = [
 analyzer = LRP(model, rules)
 heatmap(input, analyzer)

-# Since some Flux Chains contain other Flux Chains, ExplainabilityMethods provides
+# Since some Flux Chains contain other Flux Chains, ExplainableAI provides
 # a utility function called [`flatten_model`](@ref).
 #
 #md # !!! warning "Flattening models"
@@ -51,7 +51,7 @@ struct MyGammaRule <: AbstractLRPRule end
 # It is then possible to dispatch on the utility functions [`modify_params`](@ref) and [`modify_denominator`](@ref)
 # with the rule type `MyCustomLRPRule` to define custom rules without writing any boilerplate code.
 # To extend internal functions, import them explicitly:
-import ExplainabilityMethods: modify_params
+import ExplainableAI: modify_params

 function modify_params(::MyGammaRule, W, b)
     ρW = W + 0.25 * relu.(W)
@@ -68,7 +68,7 @@ heatmap(input, analyzer)
 analyzer = LRP(model, GammaRule())
 heatmap(input, analyzer)

-# If the layer doesn't use weights and biases `W` and `b`, ExplainabilityMethods provides a
+# If the layer doesn't use weights and biases `W` and `b`, ExplainableAI provides a
 # lower-level variant of [`modify_params`](@ref) called [`modify_layer`](@ref).
 # This function is expected to take a layer and return a new, modified layer.

@@ -111,22 +111,22 @@ model = Chain(model..., MyDoublingLayer())
 # Layers failed model check
 # ≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡
 #
-# Found unknown layers MyDoublingLayer() that are not supported by ExplainabilityMethods' LRP implementation yet.
+# Found unknown layers MyDoublingLayer() that are not supported by ExplainableAI's LRP implementation yet.
 #
-# If you think the missing layer should be supported by default, please submit an issue (https://github.com/adrhill/ExplainabilityMethods.jl/issues).
+# If you think the missing layer should be supported by default, please submit an issue (https://github.com/adrhill/ExplainableAI.jl/issues).
 #
 # These model checks can be skipped at your own risk by setting the LRP-analyzer keyword argument skip_checks=true.
 #
 # [...]
 # ```

-# LRP should only be used on deep rectifier networks and ExplainabilityMethods doesn't
+# LRP should only be used on deep rectifier networks and ExplainableAI doesn't
 # recognize `MyDoublingLayer` as a compatible layer.
 # By default, it will therefore return an error and a model check summary
 # instead of returning an incorrect explanation.
 #
 # However, if we know `MyDoublingLayer` is compatible with deep rectifier networks,
-# we can register it to tell ExplainabilityMethods that it is ok to use.
+# we can register it to tell ExplainableAI that it is ok to use.
 # This will be shown in the following section.

 #md # !!! warning "Skipping model checks"
@@ -136,7 +136,7 @@ model = Chain(model..., MyDoublingLayer())

 # ### Registering custom layers
 # The error in the model check will stop after registering our custom layer type
-# `MyDoublingLayer` as "supported" by ExplainabilityMethods.
+# `MyDoublingLayer` as "supported" by ExplainableAI.
 #
 # This is done using the function [`LRP_CONFIG.supports_layer`](@ref),
 # which should be set to return `true` for the type `MyDoublingLayer`:
@@ -175,7 +175,7 @@ model = Chain(flatten, Dense(784, 100, myrelu), Dense(100, 10))
 #
 # Found layers with unknown or unsupported activation functions myrelu. LRP assumes that the model is a "deep rectifier network" that only contains ReLU-like activation functions.
 #
-# If you think the missing activation function should be supported by default, please submit an issue (https://github.com/adrhill/ExplainabilityMethods.jl/issues).
+# If you think the missing activation function should be supported by default, please submit an issue (https://github.com/adrhill/ExplainableAI.jl/issues).
 #
 # These model checks can be skipped at your own risk by setting the LRP-analyzer keyword argument skip_checks=true.
 #
@@ -189,7 +189,7 @@ LRP_CONFIG.supports_activation(::typeof(myrelu)) = true
 analyzer = LRPZero(model)

 # ## How it works internally
-# Internally, ExplainabilityMethods dispatches to low level functions
+# Internally, ExplainableAI dispatches to low level functions
 # ```julia
 # function lrp!(rule, layer, Rₖ, aₖ, Rₖ₊₁)
 #     Rₖ .= ...
@@ -230,7 +230,7 @@ analyzer = LRPZero(model)
 #
 # and can be implemented via automatic differentiation (AD).
 #
-# This equation is implemented in ExplainabilityMethods as the default method
+# This equation is implemented in ExplainableAI as the default method
 # for all layer types that don't have a specialized implementation.
 # We will refer to it as the "AD fallback".
 #
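The hunks above touch both extension points of the renamed package: custom rules via `modify_params`, and layer/activation registration via `LRP_CONFIG`. Pulled together from the snippets visible in this diff, a consolidated sketch; `struct MyGammaRule`, the `modify_params` body, and the `supports_activation` line are taken from the hunks, while the `MyDoublingLayer` definition, `myrelu` definition, `supports_layer` call, and the `return` line are assumptions filling in context the diff does not show:

```julia
using Flux
using ExplainableAI
import ExplainableAI: modify_params

# Custom LRP rule: subtype AbstractLRPRule and dispatch modify_params on it.
struct MyGammaRule <: AbstractLRPRule end

function modify_params(::MyGammaRule, W, b)
    ρW = W + 0.25 * relu.(W)
    return ρW, b   # assumed completion; the hunk cuts off after the ρW line
end

# Registering a custom layer and activation so the LRP model checks pass.
struct MyDoublingLayer end              # hypothetical layer from the literate file
(m::MyDoublingLayer)(x) = 2 * x

LRP_CONFIG.supports_layer(::MyDoublingLayer) = true        # analogous call the text describes

myrelu(x) = max(0, x)                   # hypothetical definition; not shown in this diff
LRP_CONFIG.supports_activation(::typeof(myrelu)) = true    # verbatim from the hunk header above
```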

docs/literate/example.jl (3 additions, 3 deletions)

@@ -1,5 +1,5 @@
 # # Getting started
-# ExplainabilityMethods.jl can be used on any classifier.
+# ExplainableAI.jl can be used on any classifier.
 # In this first example, we will look at attributions on a LeNet5 model that was pretrained on MNIST.
 #
 # ### Loading the model
@@ -11,7 +11,7 @@
 #md # @load "model.bson" model
 #md # ```

-using ExplainabilityMethods
+using ExplainableAI
 using Flux
 using BSON

@@ -40,7 +40,7 @@ input = reshape(x, 28, 28, 1, :);

 #md # !!! warning "Input format"
 #md #
-#md # For any attribution of a model, ExplainabilityMethods.jl assumes the batch dimension to come last in the input.
+#md # For any attribution of a model, ExplainableAI.jl assumes the batch dimension to come last in the input.
 #md #
 #md # For the purpose of heatmapping, the input is assumed to be in WHCN order
 #md # (width, height, channels, batch), which is Flux.jl's convention.
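The warning block in the last hunk is the key usage constraint carried over through the rename: inputs must have the batch dimension last, in Flux's WHCN layout. A minimal sketch with a stand-in array (the actual example loads an MNIST digit via MLDatasets and BSON, which the hunk only shows in part; the `reshape` call is the context line visible above):

```julia
# WHCN layout: width, height, channels, batch — batch last.
x = rand(Float32, 28, 28)           # stand-in for one 28×28 MNIST digit
input = reshape(x, 28, 28, 1, :)    # add channel and batch dimensions
size(input)                         # (28, 28, 1, 1)
```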

docs/make.jl (4 additions, 4 deletions)

@@ -1,4 +1,4 @@
-using ExplainabilityMethods
+using ExplainableAI
 using Documenter
 using Literate

@@ -15,9 +15,9 @@ for example in readdir(EXAMPLE_DIR)
 end

 makedocs(;
-    modules=[ExplainabilityMethods],
+    modules=[ExplainableAI],
     authors="Adrian Hill",
-    sitename="ExplainabilityMethods.jl",
+    sitename="ExplainableAI.jl",
     format=Documenter.HTML(; prettyurls=get(ENV, "CI", "false") == "true", assets=String[]),
     pages=[
         "Home" => "index.md",
@@ -27,4 +27,4 @@ makedocs(;
     ],
 )

-deploydocs(; repo="github.com/adrhill/ExplainabilityMethods.jl")
+deploydocs(; repo="github.com/adrhill/ExplainableAI.jl")

docs/src/api.md (1 addition, 1 deletion)

@@ -1,5 +1,5 @@
 # Basics
-All methods in ExplainabilityMethods.jl work by calling `analyze` on an input and an analyzer:
+All methods in ExplainableAI.jl work by calling `analyze` on an input and an analyzer:
 ```@docs
 analyze
 heatmap
