[ENH] Calibration plot (add performance curves) and a new Calibrated Learner widget #3881
janezd merged 21 commits into biolab:master
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##           master    #3881      +/-   ##
==========================================
+ Coverage   84.31%   84.56%   +0.24%
==========================================
  Files         385      390       +5
  Lines       72758    73803    +1045
==========================================
+ Hits        61343    62408    +1065
+ Misses      11415    11395      -20
```
…erformance curves
…derived widget to change the default name without interfering with user-changed settings.
```python
from Orange.classification import \
    LogisticRegressionLearner, SVMLearner, NuSVMLearner

data = Table(data_name or "ionosphere")
```
I changed this because ionosphere is no longer present. It affects previews of widgets in Evaluation.
```python
def minimumSizeHint(self):
    return self.size_hint
```
I added this because list view remained too large. This is the only way with which I managed to reduce its size - setMinimumSizeHint just didn't work...
Orange/evaluation/testing.py
```diff
 if self.models is not None:
-    res.models = self.models[:, i]
+    res.models = self.models[:, i:i+1]
```
I believe this was a bug because it changed self.models from 2d to 1d array. Apparently nobody needed this so far.
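The distinction is easy to see in a small NumPy sketch (illustrative only, not Orange code): indexing a column with a plain integer drops a dimension, while a one-element slice keeps the array two-dimensional.

```python
import numpy as np

# Hypothetical stand-in for self.models: rows are folds, columns are learners.
models = np.arange(12).reshape(3, 4)

col_1d = models[:, 1]    # integer index: the column dimension is dropped
col_2d = models[:, 1:2]  # one-element slice: the array stays 2-d

print(col_1d.shape)  # (3,)
print(col_2d.shape)  # (3, 1)
```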
```diff
 elif self.resampling == OWTestLearners.TestOnTrain:
-    sampler = Orange.evaluation.TestOnTrainingData()
+    sampler = Orange.evaluation.TestOnTrainingData(
+        store_models=True)
```
Without these changes, Orange canvas can't produce evaluation results with valid models that Calibration plot can output. This will also be useful in, say, ROC Analysis, which will now also be able to output a model with some specific threshold.
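To illustrate the kind of model such a widget could output, here is a minimal sketch of a classifier wrapped with a fixed decision threshold (plain Python/NumPy; the names are illustrative, not Orange's actual API):

```python
import numpy as np

class StubModel:
    """Stand-in model that returns fixed positive-class probabilities."""
    def __init__(self, probs):
        self.probs = np.asarray(probs, dtype=float)

    def predict_proba(self, X):
        # Ignores X; a real model would compute probabilities from the data.
        return np.column_stack([1 - self.probs, self.probs])

class ThresholdClassifier:
    """Predicts the positive class whenever the wrapped model's
    positive-class probability reaches the given threshold."""
    def __init__(self, model, threshold):
        self.model = model
        self.threshold = threshold

    def predict(self, X):
        probs = self.model.predict_proba(X)[:, 1]
        return (probs >= self.threshold).astype(int)

model = StubModel([0.2, 0.55, 0.8])
clf = ThresholdClassifier(model, threshold=0.6)
print(clf.predict(None))  # [0 0 1]
```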
```diff
     self._invalidate([key])
     del self.learners[key]
-else:
+elif learner is not None:
```
We haven't encountered this bug before because (I suppose) no learner widget could fail to produce a learner. Now one can: Calibrated Learner outputs None if it doesn't have a learner on its input.
```diff
-    def __init__(self, targetAttr, selectedAttr):
-        super().__init__()
-        self.targetAttr, self.selectedAttr = targetAttr, selectedAttr
+    """Context handler for evaluation results"""
```
This is effectively a new context handler with the same name as an old one that nobody used, probably because it didn't work and was weird. ROC Analysis and Lift Curves currently don't have a context; they can now use this one.
The potentially problematic commit (but I think it's OK) is f742ff9. Currently, widgets for learners set a default name, which the user can change; the default name was always fixed. In Calibrated Learner, however, the name should depend on the widget's input and settings to produce a meaningful name. The widget could reset the name (as part of its context?), but that would override the user's changes. This commit sets the default name as the line edit's placeholder, so the user sees the default and can enter a different name, the widget can change the default (without having to pay attention to the user's changes), and if the user then clears their input, they get back the widget's default.
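The placeholder pattern described above can be sketched in plain Python, without Qt (names are illustrative; in the widget the placeholder would be `QLineEdit`'s placeholder text):

```python
# Minimal sketch: the widget keeps a computed default name, the user may
# type an explicit name, and the effective name falls back to the default
# whenever the user's text is empty.
class NameField:
    def __init__(self, default):
        self.placeholder = default  # shown greyed-out, like a line edit's placeholder
        self.user_text = ""         # what the user actually typed

    def set_default(self, default):
        # The widget may recompute the default at any time
        # without touching what the user typed.
        self.placeholder = default

    @property
    def name(self):
        return self.user_text or self.placeholder

field = NameField("Calibrated SVM")
print(field.name)             # Calibrated SVM (the default)
field.user_text = "My model"
print(field.name)             # My model (user override)
field.user_text = ""          # user clears the edit line
field.set_default("Calibrated Logistic Regression")
print(field.name)             # Calibrated Logistic Regression
```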
Huh. I limited the model name to 20 characters (elide if longer; I can't add tooltips because Qt's rich text doesn't support them). Not the most elegant or foolproof solution, but I don't have a better one.
> Press Report in Calibrated Learner widget.

I obviously haven't tried it. :) Now it works.
To merge this PR I would suggest:
Merged due to lack of activity.

This PR revamps an old widget, adds a new one, adds some core functionality, and adds a few minor fixes to base classes.
- `Orange.evaluation.performance_curves.Curves` is a class that computes performance curves (CA, F1, sensitivity, specificity...) as functions of probabilities,
- `Orange.model.calibration` contains learners and classifiers that set and use thresholds and calibrate probabilities.

Working on this uncovered some hidden bugs and glitches in other parts of the code, hence the large number of commits.
Fixes #1267.
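The core idea behind computing performance curves as functions of the probability threshold can be sketched as follows (a minimal NumPy illustration of sensitivity and specificity per threshold, not the actual `Curves` implementation):

```python
import numpy as np

def performance_curves(probs, ytrue):
    """For each candidate threshold, compute sensitivity and specificity.

    probs: predicted positive-class probabilities; ytrue: 0/1 labels.
    """
    thresholds = np.sort(np.unique(probs))
    sens, spec = [], []
    for t in thresholds:
        pred = probs >= t
        tp = np.sum(pred & (ytrue == 1))   # true positives
        tn = np.sum(~pred & (ytrue == 0))  # true negatives
        fp = np.sum(pred & (ytrue == 0))   # false positives
        fn = np.sum(~pred & (ytrue == 1))  # false negatives
        sens.append(tp / (tp + fn))
        spec.append(tn / (tn + fp))
    return thresholds, np.array(sens), np.array(spec)

probs = np.array([0.1, 0.4, 0.6, 0.9])
ytrue = np.array([0, 0, 1, 1])
t, sens, spec = performance_curves(probs, ytrue)
print(sens)  # [1.  1.  1.  0.5]
print(spec)  # [0.  0.5 1.  1. ]
```

A real implementation would compute all thresholds in one vectorized pass (e.g. via cumulative sums over sorted probabilities), but the per-threshold loop makes the definition explicit.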