
Commit 5ffcf6e

Update README.md
1 parent 7d5b252 commit 5ffcf6e

File tree

1 file changed (+26, -22)

README.md

Lines changed: 26 additions & 22 deletions
@@ -4,22 +4,22 @@
 [![Documentation Status](https://readthedocs.org/projects/aix360/badge/?version=latest)](https://aix360.readthedocs.io/en/latest/?badge=latest)
 [![PyPI version](https://badge.fury.io/py/aix360.svg)](https://badge.fury.io/py/aix360)

-The AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics. In addition to tabular, text and images, AIX360 is now expanded to support time series modality as well.
+The AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics. The toolkit supports tabular, text, image, and time series data.

 The [AI Explainability 360 interactive experience](http://aix360.mybluemix.net/data) provides a gentle introduction to the concepts and capabilities by walking through an example use case for different consumer personas. The [tutorials and example notebooks](./examples) offer a deeper, data-scientist-oriented introduction. The complete API is also available.

-There is no single approach to explainability that works best. There are many ways to explain: data vs. model, directly interpretable vs. post hoc explanation, local vs. global, etc. It may therefore be confusing to figure out which algorithms are most appropriate for a given use case. To help, we have created some [guidance material](http://aix360.mybluemix.net/resources#guidance) and a [chart](./aix360/algorithms/README.md) that can be consulted.
+There is no single approach to explainability that works best. There are many ways to explain: data vs. model, directly interpretable vs. post hoc explanation, local vs. global, etc. It may therefore be confusing to figure out which algorithms are most appropriate for a given use case. To help, we have created some [guidance material](http://aix360.mybluemix.net/resources#guidance) and a [taxonomy tree](./aix360/algorithms/README.md) that can be consulted.

 We have developed the package with extensibility in mind. This library is still in development. We encourage you to contribute your explainability algorithms, metrics, and use cases. To get started as a contributor, please join the [AI Explainability 360 Community on Slack](https://aix360.slack.com) by requesting an invitation [here](https://join.slack.com/t/aix360/shared_invite/enQtNzEyOTAwOTk1NzY2LTM1ZTMwM2M4OWQzNjhmNGRiZjg3MmJiYTAzNDU1MTRiYTIyMjFhZTI4ZDUwM2M1MGYyODkwNzQ2OWQzMThlN2Q). Please review the instructions to contribute code and Python notebooks [here](CONTRIBUTING.md).

 ## Supported explainability algorithms

-### Data explanation
+### Data explanations

 - ProtoDash ([Gurumoorthy et al., 2019](https://arxiv.org/abs/1707.01212))
 - Disentangled Inferred Prior VAE ([Kumar et al., 2018](https://openreview.net/forum?id=H1kG7GZAW))

-### Local post-hoc explanation
+### Local post-hoc explanations

 - ProtoDash ([Gurumoorthy et al., 2019](https://arxiv.org/abs/1707.01212))
 - Contrastive Explanations Method ([Dhurandhar et al., 2018](https://papers.nips.cc/paper/7340-explanations-based-on-the-missing-towards-contrastive-explanations-with-pertinent-negatives))
@@ -29,26 +29,26 @@ We have developed the package with extensibility in mind. This library is still
 - LIME ([Ribeiro et al. 2016](https://arxiv.org/abs/1602.04938), [Github](https://github.com/marcotcr/lime))
 - SHAP ([Lundberg, et al. 2017](http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions), [Github](https://github.com/slundberg/shap))

-### Time-Series local post-hoc explanation
+### Time-Series local post-hoc explanations

 - Time Series Saliency Maps using Integrated Gradients (inspired by [Sundararajan et al.](https://arxiv.org/pdf/1703.01365.pdf))
 - Time Series LIME (time series adaptation of the classic paper by [Ribeiro et al. 2016](https://arxiv.org/abs/1602.04938))
 - Time Series Individual Conditional Expectation (time series adaptation of Individual Conditional Expectation Plots, [Goldstein et al.](https://arxiv.org/abs/1309.6392))

-### Local direct explanation
+### Local direct explanations

 - Teaching AI to Explain its Decisions ([Hind et al., 2019](https://doi.org/10.1145/3306618.3314273))
 - Order Constraints in Optimal Transport ([Lim et al., 2022](https://arxiv.org/abs/2110.07275), [Github](https://github.com/IBM/otoc))

-### Global direct explanation
+### Global direct explanations

 - Interpretable Model Differencing (IMD) ([Haldar et al., 2023](https://arxiv.org/abs/2306.06473))
 - CoFrNets (Continued Fraction Nets) ([Puri et al., 2021](https://papers.nips.cc/paper/2021/file/b538f279cb2ca36268b23f557a831508-Paper.pdf))
 - Boolean Decision Rules via Column Generation (Light Edition) ([Dash et al., 2018](https://papers.nips.cc/paper/7716-boolean-decision-rules-via-column-generation))
 - Generalized Linear Rule Models ([Wei et al., 2019](http://proceedings.mlr.press/v97/wei19a.html))
 - Fast Effective Rule Induction (Ripper) ([William W Cohen, 1995](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.107.2612&rep=rep1&type=pdf))

-### Global post-hoc explanation
+### Global post-hoc explanations

 - ProfWeight ([Dhurandhar et al., 2018](https://papers.nips.cc/paper/8231-improving-simple-models-with-confidence-profiles))

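Many of these explainers follow a similar usage pattern: construct the explainer object, then call its `explain` method on your data. Below is a minimal sketch using ProtoDash; it assumes the `ProtodashExplainer` interface used in the [example notebooks](./examples), so treat the arguments and return values as illustrative and check the notebooks and API docs for exact signatures.

```python
# Minimal sketch of the common explainer pattern, using ProtoDash.
# Assumes the ProtodashExplainer interface from the example notebooks;
# argument order and return values may differ across algorithms.
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

X = np.random.rand(200, 10)  # toy data: 200 samples, 10 features

explainer = ProtodashExplainer()
# Select m = 5 prototypes from X that best summarize X itself.
# W: importance weights, S: row indices of the selected prototypes.
W, S, _ = explainer.explain(X, X, m=5)
print("prototype rows:", S)
print("weights:", W)
```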
@@ -59,7 +59,7 @@ We have developed the package with extensibility in mind. This library is still

 ## Setup

-Supported Configurations:
+### Supported Configurations:

 | Explainer | OS | Python version |
 | ---------------| ------------------------------| -------------- |
@@ -96,7 +96,7 @@ Miniconda](https://conda.io/docs/user-guide/install/download.html#anaconda-or-mi
 if you are curious) and can be installed from
 [here](https://conda.io/miniconda.html) if you do not already have it.

-Then, to create a new Python 3.10(or any of the supported python versions) environment, run:
+Then, create a new Python environment based on the explainability algorithms you wish to use, referring to the [table](#supported-configurations) above. For example, for Python 3.10, use the following command:

 ```bash
 conda create --name aix360 python=3.10
 ```
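The `(aix360)$` prompt in the commands below assumes this environment is active; if it is not, activate it first with `conda activate aix360`.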
@@ -130,35 +130,39 @@ their respective folders as described in
 Then, navigate to the root directory of the project, which contains the `setup.py` file, and run:

 ```bash
-(aix360)$ pip install -e .
+(aix360)$ pip install -e .[<algo1>,<algo2>,...]
 ```
+The above command installs the packages required by specific algorithms. Here `<algo>` refers to one or more explainability algorithms. For instance, to install the packages needed by the BRCG, DIPVAE, and TSICE algorithms, one could use
+```bash
+(aix360)$ pip install -e .[rbm,dipvae,tsice]
+```
+By itself, `pip install .` installs only the [default dependencies](https://github.com/Trusted-AI/AIX360/blob/462c4d575bfc71c5cbfd32ceacdb3df96a8dc2d1/setup.py#L9).

-If you face any issues, please try upgrading pip and setuptools and uninstall any previous versions of aix360 before attempting the above step again.
+Note that you may not be able to install two algorithms that require different versions of Python in the same environment (for instance `contrastive` along with `rbm`).

-With the new setup.py, `pip install .` installs [default dependencies](https://github.com/Trusted-AI/AIX360/blob/462c4d575bfc71c5cbfd32ceacdb3df96a8dc2d1/setup.py#L9) only. To install dependencies of required algorithms, use `pip install .[algo1, algo2]`. An example is `pip install .[dipvae,cofrnet,tsice]`.
+If you face any issues, please try upgrading pip and setuptools and uninstalling any previous versions of aix360 before attempting the above step again.

 ```bash
 (aix360)$ pip install --upgrade pip setuptools
 (aix360)$ pip uninstall aix360
 ```

-## Running in Docker
-
-* Under `AIX360` directory build the container image from Dockerfile using `docker build -t aix360_docker .`
-* Start the container image using command `docker run -it -p 8888:8888 aix360_docker:latest bash` assuming port 8888 is free on your machine.
-* Inside the container start jupuyter lab using command `jupyter lab --allow-root --ip 0.0.0.0 --port 8888 --no-browser`
-* Access the sample tutorials on your machine using URL `localhost:8888`
-
 ## PIP Installation of AI Explainability 360

 If you would like to quickly start using the AI Explainability 360 toolkit without cloning this repository, then you can install the [aix360 PyPI package](https://pypi.org/project/aix360/) as follows.

 ```bash
-(your environment)$ pip install aix360
+(your environment)$ pip install aix360[<algo1>,<algo2>,...]
 ```
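For example, `pip install aix360[rbm,dipvae,tsice]` pulls in the extras needed by the BRCG, DIPVAE, and TSICE algorithms, assuming the published package exposes the same extras names as the editable install above.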

-If you follow this approach, you may need to download the notebooks in the [examples](./examples) folder separately.
+If you follow this approach, you will need to download the notebooks available in the [examples](./examples) folder separately.
+
+## Running in Docker

+* Under the `AIX360` directory, build the container image from the Dockerfile using `docker build -t aix360_docker .`
+* Start the container using the command `docker run -it -p 8888:8888 aix360_docker:latest bash`, assuming port 8888 is free on your machine.
+* Inside the container, start Jupyter Lab using the command `jupyter lab --allow-root --ip 0.0.0.0 --port 8888 --no-browser`
+* Access the sample tutorials on your machine at the URL `localhost:8888`

 ## Using AI Explainability 360
