Add tests on the example image #345
base: main
Conversation
Codecov Report
✅ All modified and coverable lines are covered by tests.

@@            Coverage Diff            @@
##              main     #345     +/-  ##
=========================================
- Coverage    97.69%   97.69%   -0.01%
=========================================
  Files           21       21
  Lines         1127     1126       -1
=========================================
- Hits          1101     1100       -1
  Misses          26       26

☔ View full report in Codecov by Sentry.
bthirion left a comment
Thank you for doing this.
I'm happy with the code, but I'm wondering whether this is the right thing to do now: before cementing all the generated docs and figures, we should first make sure that the examples convey the right messages and that we're happy with their scope.
regressor = RidgeCV(alphas=np.logspace(-3, 3, 10))
regressor_list = [clone(regressor) for _ in range(n_folds)]
kf = KFold(n_splits=n_folds, shuffle=True, random_state=0)
for i, (train_index, test_index) in enumerate(kf.split(X)):
Why are you removing that?
I removed it because it duplicated the next line; I don't know why the duplicate was there.
I want some tests for the examples because a modification to the API can have a strong effect and completely change an example without my noticing it.
But you will notice it easily by eyeballing the examples. There are only 10 atm.
It's not so obvious: before I started the tests, I hadn't noticed the small differences in the examples due to the randomisation.
To me, the costs outweigh the benefits at the moment.
Originally posted by @Remi-Gau in #439 (comment)
I think testing the plots of the examples is not desirable due to the randomness of the methods implemented in hidimstat.
I agree that the variability is not good, but if we put some effort into it, it's possible to have full control over it in the examples. The more I think about it, the more I realise that this would mainly serve to help reviewers catch changes in the figures.
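To illustrate what "full control" over the randomness could look like, here is a minimal sketch (made-up data, estimator, and file name, not hidimstat's actual example code) where every random component of a figure-generating example gets an explicit seed or `random_state`:

```python
# Minimal sketch (made-up data and estimator, not hidimstat's actual example
# code): every source of randomness gets an explicit seed, so rebuilding the
# figure with the same dependency versions produces the same image.
import matplotlib

matplotlib.use("Agg")  # non-interactive backend, as used for doc builds

import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)  # single explicit seed for the synthetic data
X = rng.standard_normal((100, 5))
y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(100)

# The cross-validation split is the other random component: fix random_state.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
scores = []
for train_index, test_index in kf.split(X):
    model = RidgeCV(alphas=np.logspace(-3, 3, 10))
    model.fit(X[train_index], y[train_index])
    scores.append(model.score(X[test_index], y[test_index]))

plt.plot(scores, marker="o")
plt.xlabel("fold")
plt.ylabel("R^2 on the test fold")
plt.savefig("example_figure.png", dpi=100)
```

With every seed fixed like this, regenerating the docs should reproduce the same figure for a given set of dependency versions, which is what an image test would rely on.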
I don't remember: are you building your docs with pinned versions of your dependencies? If not, it's probably better to do that, to avoid figures changing for reasons outside of your control.
We don't pin the versions for building the examples.
This is based on:
I added some tests in CircleCI that check whether the images generated by the examples have been modified.
This still needs to be improved so that the check can be skipped under some conditions.
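For reference, a hypothetical sketch of this kind of check (the directory names, file pattern, and tolerance are made up for illustration, not the ones used in this PR) could compare each regenerated example figure against a baseline image committed to the repository, using matplotlib's image comparison helper:

```python
# Hypothetical sketch of the CI check (paths and tolerance are assumptions):
# compare each regenerated example figure against a committed baseline image.
from pathlib import Path

import pytest
from matplotlib.testing.compare import compare_images

BASELINE_DIR = Path("doc/_image_baselines")       # assumed baseline location
GENERATED_DIR = Path("doc/auto_examples/images")  # assumed build output


@pytest.mark.parametrize(
    "name", sorted(p.name for p in BASELINE_DIR.glob("*.png"))
)
def test_example_image_unchanged(name):
    """Fail when a regenerated example figure drifts from its baseline."""
    actual = GENERATED_DIR / name
    if not actual.exists():
        pytest.skip(f"{name} was not generated in this build")
    # compare_images returns None when the images match within `tol`,
    # otherwise a message describing the difference.
    diff = compare_images(str(BASELINE_DIR / name), str(actual), tol=1e-3)
    assert diff is None, diff
```

A real version would also need the skip logic mentioned above, for example only running the comparison on builds where the examples were actually regenerated.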