Make plot helper compatible with Python 3.13 #549
Conversation
Any updates here?

This action also fails on the master branch of my fork. Not sure why it doesn't here. I'm pretty sure it is unrelated to the changes in this PR, but I'll try changing the way the test is launched later and see if that helps.
`benchmark(wrapper)`

`@pytest.mark.base`
@tlunet, do you agree with these markers? I don't think we need to run these tests in all environments. They are throwing some numpy deprecation warnings at the moment, which we will need to deal with someday.
They are necessary. You just need to replace, in the first assertion, the `==` between arrays with some `np.allclose` (as the warning is suggesting?)

EDIT: nvm, misread ...
That's a polynomial deprecation warning?
I didn't pay much attention to the warning, tbh. For now, I just want to run these tests only in the base environment and not in every environment.
Oh, then that's fine indeed ... sorry, I didn't realize that was an addition 😅
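For context, the marker-based selection discussed above is standard pytest usage; the test name below is made up for illustration, only the `base` marker name comes from the diff:

```python
# Illustrative sketch: a custom pytest marker restricts when a test is collected.
# Custom markers should be registered (e.g. under [tool.pytest.ini_options] markers
# in pyproject.toml, or in pytest.ini) to avoid "unknown marker" warnings.
import pytest


@pytest.mark.base
def test_only_in_base_environment():
    # selected with `pytest -m base`, skipped with `pytest -m "not base"`
    assert 1 + 1 == 2
```

With this in place, the CI job for the base environment can run `pytest -m base` while the other environments run `pytest -m "not base"` and never collect these tests.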
.github/workflows/ci_pipeline.yml
Outdated
`. venv-pySDC/bin/activate`
`pip install -e /repositories/pySDC`
`pip install qmat`
`pip install pytest-isolate-mpi`
Should be added as a test dependency, directly in the pyproject.toml file, I believe (see https://github.com/Parallel-in-Time/qmat/blob/4919f197bac230a27f8ccea4188645cb53fefd0a/pyproject.toml#L34) ... also, qmat should already be installed when installing pySDC.
No, I disagree. We still want environments without MPI after all.
Hence the idea of optional dependencies defined in the pyproject.toml file. You could create a `gusto` one, or a `gusto-test`, etc., and simply install using `pip install -e ./repositories/pySDC[gusto]`, which would install the base dependencies plus the optional ones (cf. here). It makes the pipeline way cleaner and centralizes the dependencies in the pyproject.toml, which allows better control for local or CI tests ...
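The extras layout suggested above might look roughly like this in pyproject.toml; the `gusto` group name and the package list are illustrative, taken only from the packages mentioned in this thread:

```toml
[project.optional-dependencies]
# Hypothetical extras group for the Gusto/MPI CI environment;
# the actual package list would depend on what the tests need.
gusto = [
    "qmat",
    "pytest-isolate-mpi",
]
```

Installing with `pip install -e ".[gusto]"` then pulls in the base dependencies plus this group, so the CI workflow no longer needs individual `pip install` lines.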
We can think about adding MPI dependencies there, but I don't really see the point, tbh. I prefer this way, where the setup of the container and what goes into the container are all in one place. Outside of the testing pipeline, people are not going to use the container and will not need the testing dependencies.

Also, it turns out the test is flaky somehow, and adding pytest-isolate-mpi did not fix it at all. This is pretty annoying :D
Then, my guess is that you can remove this and the `pip install qmat` 😛 (stop the unnecessary code spamming ...)
Is the firedrake test the problematic one, or the Allen_Cahn ones?

The Allen_Cahn ones are just a one-time whatever problem, I suppose. This one Firedrake test keeps failing.
Does firedrake import mpi4py or some MPI binding already on its side? I got a problem with FEniCS once, where importing mpi4py before dolfin was producing an error in tests ...

There are other tests using firedrake and MPI that run just fine. The failing one uses Firedrake ensemble communicators for space-time parallelism.
Updates here?

I have not managed to fix the firedrake tests yet. This plotting stuff is not related, but I will try to fix the firedrake stuff, maybe this week, hopefully by the end of next week. No need to merge this before.
Python `distutils` has been deprecated since version 3.10 and removed in version 3.12. In 3.12 I could still use it, but after switching to 3.13, the plot helper was broken for me.

Apparently, `distutils` has been replaced by `shutil`. `shutil.which` returns what you get when you use `which` in the shell you run Python in, or `None` if there is no executable of the name you are looking for. Therefore, this should give the same functionality as `distutils.spawn.find_executable` as it is used in the plot helper.
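A minimal sketch of the replacement described above; the helper name `find_executable` is illustrative (pySDC's plot helper presumably just calls `shutil.which` where it previously called `distutils.spawn.find_executable`):

```python
import shutil


def find_executable(name):
    """Return the full path to `name` if it is on PATH, else None.

    Drop-in stand-in for distutils.spawn.find_executable, which was
    removed along with distutils; shutil.which has the same contract
    for the common case of looking up a program by name.
    """
    return shutil.which(name)


# e.g. enable LaTeX rendering in matplotlib only if a latex binary exists
use_latex = find_executable("latex") is not None
```

Like `find_executable` before it, `shutil.which` accepts a bare program name and searches the directories in `PATH`, so existing call sites typically need nothing more than swapping the function.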