Some of the calculations for the dashboard, such as calculating SHAP (interaction) values
and permutation importances, can be slow for large datasets and complicated models.
There are a few tricks to make this less painful:
1. Switching off the interactions tab (`shap_interaction=False`) and disabling
   permutation importances (`no_permutations=True`). SHAP interaction values in
   particular can be very slow to calculate and are often not needed for the
   analysis. For permutation importances you can set the `n_jobs` parameter to
   parallelize the calculation.
2. Calculating approximate SHAP values. You can pass `approximate=True` as a SHAP
   parameter by passing `shap_kwargs=dict(approximate=True)` to the explainer
   initialization.
3. Using GPU Tree SHAP by passing `shap='gputree'` when your model supports it.
   This requires an NVIDIA GPU and a CUDA-enabled SHAP build (see the SHAP docs).
4. Storing the explainer. The calculated properties are only calculated once per
   explainer instance, but each time you instantiate a new explainer they have to
   be recalculated. You can store them with `explainer.dump("explainer.joblib")`
   and load them with e.g. `ClassifierExplainer.from_file("explainer.joblib")`.
   All calculated properties
   are stored along with the explainer.
5. Using a smaller (test) dataset, or using smaller decision trees.
   TreeSHAP computational complexity is `O(TLD^2)`, where `T` is the
   number of trees, `L` is the maximum number of leaves in any tree,
   and `D` is the maximal depth of any tree.
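The tricks above can be combined in a single setup. The following is a minimal sketch, assuming `explainerdashboard` and `scikit-learit`'s dependencies are installed; the toy dataset, model, and feature names are placeholders for your own:

```python
# A hedged sketch of a "fast" explainer setup; the data and model are toys.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from explainerdashboard import ClassifierExplainer, ExplainerDashboard

# Placeholder data and model standing in for your own:
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(5)])
model = RandomForestClassifier(n_estimators=10, max_depth=3, random_state=0).fit(X, y)

# Approximate SHAP values and parallel permutation importances:
explainer = ClassifierExplainer(
    model, X, y,
    shap_kwargs=dict(approximate=True),  # approximate SHAP values (faster)
    n_jobs=2,                            # parallelize permutation importances
)

# Skip the slowest computations in the dashboard itself:
dashboard = ExplainerDashboard(explainer, shap_interaction=False, no_permutations=True)

# Cache all calculated properties to disk, then reload them later:
explainer.dump("explainer.joblib")
explainer = ClassifierExplainer.from_file("explainer.joblib")
```

With the dumped explainer on disk, subsequent dashboard restarts can skip recalculating SHAP values entirely by loading from file instead of re-instantiating from the model.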
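The complexity bound above can be made concrete with a quick back-of-the-envelope calculation. This is purely illustrative (the tree sizes are made up), but it shows why shrinking the trees pays off quadratically in depth:

```python
# Relative TreeSHAP cost per row: T * L * D^2, where T = number of trees,
# L = maximum number of leaves, D = maximal depth.
def treeshap_cost(n_trees, max_leaves, max_depth):
    return n_trees * max_leaves * max_depth ** 2

# Hypothetical deep forest vs. a smaller, shallower one:
big = treeshap_cost(n_trees=500, max_leaves=2**8, max_depth=8)
small = treeshap_cost(n_trees=100, max_leaves=2**4, max_depth=4)
print(big // small)  # prints 320: the deep forest is 320x more expensive per row
```

Halving the depth alone cuts the `D^2` factor by 4x, which is why switching to shallower trees (or a smaller test set, which scales the per-row cost linearly) is often the easiest speed-up.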