[ENH] Learner widgets: Inform about potential problems when overriding preprocessors (#5710)
Conversation
Codecov Report
@@           Coverage Diff            @@
##           master    #5710    +/-   ##
========================================
+ Coverage   86.11%   86.16%   +0.04%
========================================
  Files         315      315
  Lines       66093    66214     +121
========================================
+ Hits        56918    57050     +132
+ Misses       9175     9164      -11
I think the message is not correct for Orange's current behavior. I like what the message implies, so perhaps we should change what Orange does. Look, from the edited file:
If I understand this code, the default preprocessors will still be used for Logistic regression […] in Orange is not a […] Why are […]
Perhaps we can make a different message for […]. I have no idea what that kind of behavior change would mean for backward compatibility, but it could get nasty. :D
Force-pushed from 5137da5 to a0fc168
Force-pushed from a0fc168 to 6176b2c
```python
def handleNewSignals(self):
    super().handleNewSignals()
    self.apply()
```
This should have called unconditional_apply. Now the base class defines handleNewSignals and handles them properly (I suppose).
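To illustrate the point, here is a minimal sketch with mock classes (not Orange's real `OWBaseLearner`) of why `handleNewSignals` should go through `unconditional_apply`: with deferred commit, a plain `apply()` only sets the dirty flag when auto-apply is off, so new inputs would silently produce no output.

```python
class MockLearnerWidget:
    """Mock of a deferred-commit widget; names are assumptions."""

    def __init__(self, auto_apply):
        self.auto_apply = auto_apply
        self.dirty = False
        self.applied = 0

    def apply(self):
        # Deferred commit: run immediately only when auto_apply is on.
        if self.auto_apply:
            self.unconditional_apply()
        else:
            self.dirty = True

    def unconditional_apply(self):
        # Always recompute and send, regardless of auto_apply.
        self.applied += 1
        self.dirty = False

    def handleNewSignals(self):
        # New inputs must bypass the auto_apply gate.
        self.unconditional_apply()


w = MockLearnerWidget(auto_apply=False)
w.handleNewSignals()
print(w.applied)  # 1: output updated even with auto_apply off
```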
```python
if instance:
    instance.name = self.effective_learner_name()
    if self.auto_apply:
        output.send(instance)
```
This was incorrect because it didn't set the dirty flag, but it was necessary due to the broken signal handling.
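A hypothetical mock (names and structure assumed, not Orange's actual code) of the failure mode: renaming sends the output itself when auto-apply is on, a later apply sends it again, and with auto-apply off the dirty flag is never set, so the rename can be lost.

```python
class MockOutput:
    def __init__(self):
        self.sent = []

    def send(self, value):
        self.sent.append(value)


class MockWidget:
    def __init__(self, auto_apply):
        self.auto_apply = auto_apply
        self.dirty = False
        self.output = MockOutput()
        self.name = "learner"

    def change_name(self, name):
        self.name = name
        if self.auto_apply:
            self.output.send(self.name)  # first send
        # bug: the dirty flag is not set when auto_apply is off

    def apply(self):
        self.output.send(self.name)      # second send of the same model


w = MockWidget(auto_apply=True)
w.change_name("LR")
w.apply()
print(len(w.output.sent))  # 2: the model went out twice
```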
Force-pushed from 6176b2c to b4f0b96
Force-pushed from b4f0b96 to 8c42dbe
```python
self.model or self.update_model()

def apply_as(self, level, unconditional=False):
    self.__apply_level.append(level)
```
This is horrendous, but I see no way around it (while maintaining compatibility).
This class and derived classes used apply to update and send both models; it would be better to update just the invalidated one. Furthermore, _change_name sent outputs by itself if auto_apply was True, but of course could not set the dirty flag if it was False. In some cases, this caused outputting the model twice...
To clean up the mess, yet keep compatibility, I had a solution in which apply got an additional argument with a default value. Unfortunately, gui.auto_apply doesn't pass extra arguments to apply -- and we shouldn't modify it to, just to fix this specific case.
To make it even trickier, the function as defined here is called only when auto apply is on.
Hence, apply_as passes an argument by appending the "urgency" to the list. When the apply function is actually called, it finds the maximal "urgency" in the list and acts on it.
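A hedged sketch of that mechanism (level names and ordering are assumptions, not Orange's exact code): since apply() takes no arguments, apply_as queues the "urgency" in a list, and apply() acts on the maximum queued level.

```python
class MockWidget:
    # Hypothetical urgency ordering, lowest to highest.
    LEVELS = ("model", "learner", "both")

    def __init__(self):
        self.__apply_level = []
        self.last_update = None

    def apply_as(self, level, unconditional=False):
        self.__apply_level.append(level)
        self.apply()  # in Orange this would go through the auto-apply machinery

    def apply(self):
        # Find the maximal queued urgency, then clear the queue.
        level = max(self.__apply_level, key=self.LEVELS.index,
                    default="both")
        self.__apply_level.clear()
        self.last_update = level


w = MockWidget()
w.apply_as("model")
print(w.last_update)  # "model"
```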
```python
self.assertFalse(wyes.Information.ignored_preprocessors.is_shown())
self.assertFalse(wfit.Information.ignored_preprocessors.is_shown())

def test_multiple_sends(self):
```
This test would fail on master because the learner would be sent twice.
Issue
Resolves #5703. Fixes #5577.
Description of changes
Missing coverage is mostly in the stack learner and was already missed before. This PR won't add tests there.
Includes