- Introduction
- Getting Started
- Algorithm Overview
- White-box and black-box models
- Saving and loading
- Frequently Asked Questions
- Methods
- Examples
- Alibi Overview Examples
- Accumulated Local Effects
- Anchors
- Contrastive Explanation Method
- Counterfactual Instances on MNIST
- Counterfactuals Guided by Prototypes
- Counterfactuals with Reinforcement Learning
- Integrated Gradients
- Kernel SHAP
- Partial Dependence
- Partial Dependence Variance
- Permutation Importance
- Similarity explanations
- Tree SHAP
- Methods
- Examples
- Measuring the linearity of machine learning models
- Trust Scores
- Methods
- Examples
- alibi.api
- alibi.confidence
- alibi.datasets
- alibi.exceptions
- alibi.explainers
- alibi.explainers.ale
- alibi.explainers.anchors
- alibi.explainers.anchors.anchor_base
- alibi.explainers.anchors.anchor_explanation
- alibi.explainers.anchors.anchor_image
- alibi.explainers.anchors.anchor_tabular
- alibi.explainers.anchors.anchor_tabular_distributed
- alibi.explainers.anchors.anchor_text
- alibi.explainers.anchors.language_model_text_sampler
- alibi.explainers.anchors.text_samplers
- alibi.explainers.backends
- alibi.explainers.cem
- alibi.explainers.cfproto
- alibi.explainers.cfrl_base
- alibi.explainers.cfrl_tabular
- alibi.explainers.counterfactual
- alibi.explainers.integrated_gradients
- alibi.explainers.partial_dependence
- alibi.explainers.pd_variance
- alibi.explainers.permutation_importance
- alibi.explainers.shap_wrappers
- alibi.explainers.similarity
- alibi.models
- alibi.prototypes
- alibi.saving
- alibi.utils
- alibi.utils.approximation_methods
- alibi.utils.data
- alibi.utils.discretizer
- alibi.utils.distance
- alibi.utils.distributed
- alibi.utils.distributions
- alibi.utils.download
- alibi.utils.frameworks
- alibi.utils.gradients
- alibi.utils.kernel
- alibi.utils.lang_model
- alibi.utils.mapping
- alibi.utils.missing_optional_dependency
- alibi.utils.tf
- alibi.utils.visualization
- alibi.utils.wrappers
- alibi.version