Token importance for text classification - explainability with spaCy #9826
-
Hi all, I'm wondering if there is a common approach to shed light on model explainability, for example for a text classification task. Perhaps something similar to SHAP or Integrated Gradients? There have been a few attempts to get token importance (e.g. here), but I couldn't find any consensus on this topic. (I saw alibi, but that isn't a spaCy-based method directly.) How can we interpret and explain the predictions of spaCy models, for text classification for example? Kind regards
-
We don't have a standard way to do this, so you'll need to look at third-party solutions like alibi for the time being. You might also want to take a look at work from Marco Tulio Ribeiro's research group, such as Anchor, which does seem intended for use with spaCy (if only very loosely integrated). (Note that based on the year the repo was released, I assume it's designed to work with spaCy v2, but since the integration is pretty loose it should be easy to adapt.)
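
If you just need a rough first signal without any extra dependencies, you can also roll a simple occlusion (leave-one-out) baseline on top of any textcat pipeline: remove each token in turn, re-run the pipeline, and see how much the score for a label moves. Here's a minimal sketch, assuming a trained pipeline saved as `my_textcat_model` with a label called `POSITIVE` (both are placeholders for your own pipeline):

```python
import spacy

# "my_textcat_model" is a placeholder for your own trained textcat pipeline.
nlp = spacy.load("my_textcat_model")

def token_importance(text, label):
    """Leave-one-out token importance for one textcat label."""
    doc = nlp(text)
    base = doc.cats[label]
    importances = []
    for i, token in enumerate(doc):
        # Rebuild the text without token i, keeping the original whitespace.
        reduced = "".join(t.text_with_ws for j, t in enumerate(doc) if j != i)
        ablated = nlp(reduced)
        # Positive delta = removing this token lowers the label score,
        # i.e. the token supports the label.
        importances.append((token.text, base - ablated.cats[label]))
    return importances

# "POSITIVE" is a placeholder label name.
for tok, delta in token_importance("I really enjoyed this film", "POSITIVE"):
    print(f"{tok:>10}  {delta:+.3f}")
```

This is nowhere near as principled as SHAP or Integrated Gradients, but it's often enough to eyeball whether the model is keying on sensible tokens.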