
Conversation

@lionelkusch (Collaborator) commented Aug 22, 2025

This is based on #344.

I added some tests in CircleCI that check whether the images generated by the examples have been modified.
This still needs to be improved so that it can be skipped under some conditions.
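As a rough sketch of what such a check can look like (a hypothetical illustration, not this PR's actual CircleCI script), one can hash each generated image and compare it against a stored baseline, reporting any file whose content changed:

```python
# Hypothetical image-regression check: compare example-gallery images
# against stored baselines by content hash (not the PR's actual script).
import hashlib
import tempfile
from pathlib import Path

def digest(path: Path) -> str:
    """SHA-256 of the file's raw bytes, so any pixel change is detected."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_images(generated: Path, baseline: Path) -> list[str]:
    """Names of baseline PNGs that are missing or differ in `generated`."""
    changed = []
    for ref in sorted(baseline.glob("*.png")):
        cand = generated / ref.name
        if not cand.exists() or digest(cand) != digest(ref):
            changed.append(ref.name)
    return changed

# Tiny demo with fake "images" in a temporary directory.
with tempfile.TemporaryDirectory() as tmp:
    base, gen = Path(tmp, "baseline"), Path(tmp, "generated")
    base.mkdir(); gen.mkdir()
    (base / "plot_a.png").write_bytes(b"aaaa")
    (gen / "plot_a.png").write_bytes(b"aaaa")   # unchanged
    (base / "plot_b.png").write_bytes(b"bbbb")
    (gen / "plot_b.png").write_bytes(b"BBBB")   # modified
    assert changed_images(gen, base) == ["plot_b.png"]
```

A CI job could fail (or merely warn) when the returned list is non-empty.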

codecov bot commented Aug 22, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 97.69%. Comparing base (a829f0f) to head (82cd494).
⚠️ Report is 9 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #345      +/-   ##
==========================================
- Coverage   97.69%   97.69%   -0.01%     
==========================================
  Files          21       21              
  Lines        1127     1126       -1     
==========================================
- Hits         1101     1100       -1     
  Misses         26       26              


@bthirion (Collaborator) left a comment


Thank you for doing this.
I'm happy with the code, but I'm wondering whether this is the right thing to do now: before cementing all the generated docs and figures, we should rather take more care that the examples provide the right messages and that we're happy with the scope.

regressor = RidgeCV(alphas=np.logspace(-3, 3, 10))
regressor_list = [clone(regressor) for _ in range(n_folds)]
kf = KFold(n_splits=n_folds, shuffle=True, random_state=0)
for i, (train_index, test_index) in enumerate(kf.split(X)):
Collaborator:

Why are you removing that?

Collaborator Author:

I removed it because it duplicated the next line; I don't know why the duplicate was there.

@lionelkusch (Collaborator Author)

> I'm happy with the code, but I'm wondering whether this is the right thing to do now: before cementing all the generated docs and figures, we should rather take more care that the examples provide the right messages and that we're happy with the scope.

I want some tests because a modification to the API can have a strong effect and completely change an example without my noticing it.
I still need to improve how it is included in the CI, to decide whether a mismatch raises just a warning or blocks the CI.

@bthirion (Collaborator)

> I'm happy with the code, but I'm wondering whether this is the right thing to do now: before cementing all the generated docs and figures, we should rather take more care that the examples provide the right messages and that we're happy with the scope.

> I want some tests because a modification to the API can have a strong effect and completely change an example without my noticing it. I still need to improve how it is included in the CI, to decide whether a mismatch raises just a warning or blocks the CI.

But you will notice it easily by eyeballing the examples. There are only 10 at the moment.

@lionelkusch (Collaborator Author)

It's not so obvious: before I started writing these tests, I hadn't noticed the small differences in the examples caused by randomisation.

@bthirion (Collaborator)

To me, the costs outweigh the benefits at the moment.

@lionelkusch lionelkusch linked an issue Sep 4, 2025 that may be closed by this pull request
@lionelkusch lionelkusch added the API 2 (Refactoring following the second version of API), examples (Question link to the examples), and test (Question link to tests) labels, then removed the API 2 label, Sep 9, 2025
@lionelkusch (Collaborator Author) commented Sep 25, 2025

> It seems to be more sensitive to changes in the dependencies than other tests and thus more maintenance work. But maybe that's the price to pay to have plotting functions.

> yeah I would definitely only run those tests with one set of dependency versions (like the oldest supported matplotlib version) otherwise it's gonna be hell to maintain

Originally posted by @Remi-Gau in #439 (comment)

@jpaillard (Collaborator)

I think testing the plots of the examples is not desirable due to the randomness of the methods implemented in hidimstat.
The comment of @Remi-Gau applies to testing plotting functions of the library, which can be done reliably by providing deterministic values before calling .plot() (hard-coding the importance values). However, if the values plotted are computed using a stochastic method (like knockoffs), the plots may look slightly different.
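A minimal sketch of that approach (the function name and values below are hypothetical, not hidimstat's actual API): feed hard-coded importance values to the plotting helper and assert on the drawn artists rather than on rendered pixels:

```python
# Deterministic test of a plotting function: hard-code the importance
# values instead of computing them with a stochastic method.
import matplotlib
matplotlib.use("Agg")  # headless backend, as on CI
import matplotlib.pyplot as plt

def plot_importances(values, ax=None):
    """Hypothetical stand-in for a library plotting function."""
    ax = ax or plt.gca()
    ax.bar(range(len(values)), values)
    return ax

values = [0.1, 0.4, 0.25]            # deterministic, hard-coded input
ax = plot_importances(values)
heights = [p.get_height() for p in ax.patches]
assert heights == values             # the plot reflects exactly the input
```

This tests the plotting code reliably even when the real importance estimates are stochastic.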

@lionelkusch (Collaborator Author)

I agree that the variability is not good, but if we put some effort into it, it's possible to have full control over it.

The more I think about it, the more I realise that this can only be used to help reviewers catch changes in the figures.
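"Full control" here typically means threading an explicit random generator through the computation, so the same seed always produces the same figure inputs (a sketch with a stand-in function, not hidimstat code):

```python
# Reproducible stochastic output via an explicit seeded generator.
import numpy as np

def noisy_importances(n_features, rng):
    """Stand-in for a stochastic method (e.g. knockoff-based importance)."""
    return np.sort(rng.standard_normal(n_features))[::-1]

run1 = noisy_importances(5, np.random.default_rng(0))
run2 = noisy_importances(5, np.random.default_rng(0))
assert np.array_equal(run1, run2)  # same seed -> identical figure inputs
```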

@Remi-Gau (Collaborator)

> I agree that the variability is not good, but if we put some effort into it, it's possible to have full control over it.

> The more I think about it, the more I realise that this can only be used to help reviewers catch changes in the figures.

I don't remember: are you building your docs with pinned versions of your dependencies? If not, it's probably better to do that, to avoid figures changing for reasons outside of your control.

@lionelkusch (Collaborator Author)

We don't pin the versions used for building the examples.
This should be done at some point, especially if we want to do some testing on the figures.
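Pinning can be as simple as a dedicated requirements file for the doc build (the file name and versions below are illustrative, not the project's actual pins):

```
# doc-requirements.txt (hypothetical)
matplotlib==3.8.4
numpy==1.26.4
scikit-learn==1.4.2
```

Building the docs with `pip install -r doc-requirements.txt` then keeps figure rendering stable until the pins are deliberately bumped.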


Labels: examples (Question link to the examples), test (Question link to tests)
Projects: none yet
Development: successfully merging this pull request may close the issue "Testing example".
4 participants