A Python library for decision tree visualization and model interpretation.
InterpretDL: Interpretation of Deep Learning Models, a model interpretability algorithm library built on PaddlePaddle (『飞桨』).
A multi-functional library for full-stack deep learning that simplifies model building, API development, and model deployment.
Overview of different model interpretability libraries.
A set of tools for leveraging pre-trained embeddings, active learning, and model explainability for efficient document classification.
FastAI Model Interpretation with LIME
What Has Been Enhanced in my Knowledge-Enhanced Language Model?
A minimal, reproducible explainable-AI demo using SHAP values on tabular data. Trains RandomForest or LogisticRegression models, computes global and local feature importances, and visualizes results through summary and dependence plots, all in under 100 lines of Python.
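A minimal sketch of such a SHAP workflow, assuming the public shap and scikit-learn APIs; the diabetes dataset, RandomForestRegressor, and the bmi feature below are illustrative stand-ins, not the repository's actual code:

```python
# Illustrative SHAP demo on tabular data (a sketch, not the repo's code).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# TreeExplainer computes exact SHAP attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)

# Global importance: beeswarm summary of every feature's attribution distribution.
shap.summary_plot(shap_values, X_test)

# Local/feature view: how one feature's value drives its own attribution.
shap.dependence_plot("bmi", shap_values, X_test)
```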
Official implementation of "HyPepTox-Fuse: An interpretable hybrid framework for accurate peptide toxicity prediction fusing protein language model-based embeddings with conventional descriptors"
Overview of machine learning interpretation techniques and their implementations
Integrating multimodal data through heterogeneous ensembles
This repository contains all of the assignments I completed for the Standard Bank Data Science Virtual Experience Program. 📉👨💻📊📈
Model Interpretability via Hierarchical Feature Perturbation
Implementation of a scoring model (OpenClassrooms | Data Scientist | Project 7)
This repository contains all of the tasks I completed as part of the BCG Open-Access Data Science & Advanced Analytics Virtual Experience Program. 📊📈📉👨💻
Using LIME and SHAP to interpret black-box machine learning models.
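For the LIME half of that workflow, a sketch assuming the public lime package API; the iris dataset and GradientBoostingClassifier stand in for whatever black-box model is actually explained:

```python
# Sketch of a LIME local explanation for a black-box classifier.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# LIME perturbs one instance and fits a sparse linear surrogate around it.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
exp = explainer.explain_instance(X_test[0], black_box.predict_proba, num_features=4)
print(exp.as_list())  # (feature rule, weight) pairs from the local surrogate
```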
Visualize a Decision Tree using dtreeviz
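A short sketch of that visualization, assuming the dtreeviz 2.x API (older releases exposed a dtreeviz() function instead); the dataset and tree depth are arbitrary:

```python
# Visualize a fitted decision tree with dtreeviz (sketch assuming the 2.x API).
import dtreeviz
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# Wrap the fitted estimator together with the training data and display names.
viz_model = dtreeviz.model(
    clf,
    X_train=iris.data,
    y_train=iris.target,
    feature_names=iris.feature_names,
    target_name="species",
    class_names=list(iris.target_names),
)

v = viz_model.view()     # renders split nodes with per-class feature distributions
v.save("iris_tree.svg")  # write the rendered tree to disk
```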
Deep behavioral and machine learning analysis explaining why mobile users systematically report lower satisfaction with AI systems. Includes SHAP explainability, cognitive load modeling, device-context effects, interaction metadata analysis, and end-to-end reproducible research code and visuals.
A machine learning model that predicts the approval status of health insurance claims from patient and claim characteristics, built with XGBoost, interpreted with SHAP, and deployed via Streamlit.