
Conversation

@dario-coscia (Collaborator)

Fixes #519

@github-actions (Contributor)

github-actions bot commented Mar 24, 2025


Code Coverage Summary

Filename                                                Stmts    Miss  Cover    Missing
----------------------------------------------------  -------  ------  -------  -----------------------------------------------------------------------------------------------------------------
__init__.py                                                 7       0  100.00%
collector.py                                               40       1  97.50%   46
graph.py                                                  111      11  90.09%   98-99, 111, 123, 125, 141, 143, 165, 168, 181, 268
label_tensor.py                                           248      29  88.31%   81, 121, 148, 165, 177, 182, 188-193, 273, 280, 332, 348, 444-447, 490, 548, 632, 652-654, 667-676, 691, 713, 716
operator.py                                                99      22  77.78%   66, 68, 87, 104, 116, 148, 157, 160, 163, 166, 247, 288-307
operators.py                                                6       6  0.00%    3-12
plotter.py                                                  1       1  0.00%    3
trainer.py                                                 75       6  92.00%   195-204, 293, 314, 318, 322
utils.py                                                   56       8  85.71%   109, 146, 149, 152, 188-191
adaptive_function/__init__.py                               3       0  100.00%
adaptive_function/adaptive_function.py                     55       0  100.00%
adaptive_function/adaptive_function_interface.py           51       6  88.24%   98, 141, 148-151
adaptive_functions/__init__.py                              6       6  0.00%    3-12
callback/__init__.py                                        5       0  100.00%
callback/adaptive_refinement_callback.py                    8       1  87.50%   37
callback/linear_weight_update_callback.py                  28       1  96.43%   63
callback/optimizer_callback.py                             22       1  95.45%   34
callback/processing_callback.py                            49       5  89.80%   42-43, 73, 168, 171
callbacks/__init__.py                                       6       6  0.00%    3-12
condition/__init__.py                                       7       0  100.00%
condition/condition.py                                     35       8  77.14%   23, 127-128, 131-132, 135-136, 151
condition/condition_interface.py                           32       3  90.62%   31, 76, 100
condition/data_condition.py                                26       1  96.15%   56
condition/domain_equation_condition.py                     19       0  100.00%
condition/input_equation_condition.py                      44       1  97.73%   129
condition/input_target_condition.py                        44       1  97.73%   125
data/__init__.py                                            3       0  100.00%
data/data_module.py                                       204      23  88.73%   42-53, 133, 173, 194, 233, 314-318, 324-328, 400, 422, 467, 547, 638, 640
data/dataset.py                                            80       6  92.50%   42, 123-126, 291
domain/__init__.py                                         10       0  100.00%
domain/cartesian.py                                       112      10  91.07%   37, 47, 75-76, 92, 97, 103, 246, 256, 264
domain/difference_domain.py                                25       2  92.00%   54, 87
domain/domain_interface.py                                 20       5  75.00%   37-41
domain/ellipsoid.py                                       104      24  76.92%   52, 56, 127, 250-257, 269-282, 286-287, 290, 295
domain/exclusion_domain.py                                 28       1  96.43%   86
domain/intersection_domain.py                              28       1  96.43%   85
domain/operation_interface.py                              26       1  96.15%   88
domain/simplex.py                                          72      14  80.56%   62, 207-225, 246-247, 251, 256
domain/union_domain.py                                     25       2  92.00%   43, 114
equation/__init__.py                                        4       0  100.00%
equation/equation.py                                       11       0  100.00%
equation/equation_factory.py                               24      10  58.33%   37, 62-75, 97-110, 132-145
equation/equation_interface.py                              4       0  100.00%
equation/system_equation.py                                22       0  100.00%
geometry/__init__.py                                        7       7  0.00%    3-15
loss/__init__.py                                            7       0  100.00%
loss/loss_interface.py                                     17       2  88.24%   45, 51
loss/lp_loss.py                                            15       0  100.00%
loss/ntk_weighting.py                                      26       0  100.00%
loss/power_loss.py                                         15       0  100.00%
loss/scalar_weighting.py                                   16       0  100.00%
loss/weighting_interface.py                                 6       0  100.00%
model/__init__.py                                          10       0  100.00%
model/average_neural_operator.py                           31       2  93.55%   73, 82
model/deeponet.py                                          93      13  86.02%   187-190, 209, 240, 283, 293, 303, 313, 323, 333, 488, 498
model/feed_forward.py                                      89      11  87.64%   58, 195, 200, 278-292
model/fourier_neural_operator.py                           78      10  87.18%   96-100, 110, 155-159, 218, 220, 242, 342
model/graph_neural_operator.py                             40       2  95.00%   58, 60
model/kernel_neural_operator.py                            34       6  82.35%   83-84, 103-104, 123-124
model/low_rank_neural_operator.py                          27       2  92.59%   89, 98
model/multi_feed_forward.py                                12       5  58.33%   25-31
model/spline.py                                            89      37  58.43%   30, 41-66, 69, 128-132, 135, 159-177, 180
model/block/__init__.py                                    12       0  100.00%
model/block/average_neural_operator_block.py               12       0  100.00%
model/block/convolution.py                                 64      13  79.69%   77, 81, 85, 91, 97, 111, 114, 151, 161, 171, 181, 191, 201
model/block/convolution_2d.py                             146      27  81.51%   155, 162, 282, 314, 379-433, 456
model/block/embedding.py                                   48       7  85.42%   93, 143-146, 155, 168
model/block/fourier_block.py                               31       0  100.00%
model/block/gno_block.py                                   22       4  81.82%   73-77, 87
model/block/integral.py                                    18       4  77.78%   22-25, 71
model/block/low_rank_block.py                              24       0  100.00%
model/block/orthogonal.py                                  37       0  100.00%
model/block/pod_block.py                                   65       9  86.15%   54-57, 69, 99, 134-139, 170, 195
model/block/rbf_block.py                                  179      25  86.03%   18, 42, 53, 64, 75, 86, 97, 223, 280, 282, 298, 301, 329, 335, 363, 367, 511-524
model/block/residual.py                                    46       0  100.00%
model/block/spectral.py                                    83       4  95.18%   132, 140, 262, 270
model/block/stride.py                                      28       7  75.00%   55, 58, 61, 67, 72-74
model/block/utils_convolution.py                           22       3  86.36%   58-60
model/layers/__init__.py                                    6       6  0.00%    3-12
optim/__init__.py                                           5       0  100.00%
optim/optimizer_interface.py                                7       0  100.00%
optim/scheduler_interface.py                                7       0  100.00%
optim/torch_optimizer.py                                   14       0  100.00%
optim/torch_scheduler.py                                   19       2  89.47%   5-6
problem/__init__.py                                         6       0  100.00%
problem/abstract_problem.py                               104      14  86.54%   52, 61, 101-106, 135, 147, 165, 239, 243, 272
problem/inverse_problem.py                                 22       0  100.00%
problem/parametric_problem.py                               8       1  87.50%   29
problem/spatial_problem.py                                  8       0  100.00%
problem/time_dependent_problem.py                           8       0  100.00%
problem/zoo/__init__.py                                     8       0  100.00%
problem/zoo/advection.py                                   33       7  78.79%   36-38, 52, 108-110
problem/zoo/allen_cahn.py                                  20       6  70.00%   20-22, 34-36
problem/zoo/diffusion_reaction.py                          29       5  82.76%   94-104
problem/zoo/helmholtz.py                                   30       6  80.00%   36-42, 103-107
problem/zoo/inverse_poisson_2d_square.py                   31       0  100.00%
problem/zoo/poisson_2d_square.py                           19       3  84.21%   65-70
problem/zoo/supervised_problem.py                          11       0  100.00%
solver/__init__.py                                          6       0  100.00%
solver/garom.py                                           110       2  98.18%   129-130
solver/reduced_order_model.py                              22       1  95.45%   137
solver/solver.py                                          180      11  93.89%   185, 265, 269, 285, 288-289, 459, 466, 471, 500, 506
solver/supervised.py                                       25       0  100.00%
solver/physics_informed_solver/__init__.py                  8       0  100.00%
solver/physics_informed_solver/causal_pinn.py              47       3  93.62%   157, 166-167
solver/physics_informed_solver/competitive_pinn.py         59       0  100.00%
solver/physics_informed_solver/gradient_pinn.py            17       0  100.00%
solver/physics_informed_solver/pinn.py                     17       0  100.00%
solver/physics_informed_solver/pinn_interface.py           72       0  100.00%
solver/physics_informed_solver/rba_pinn.py                 35       3  91.43%   155-158
solver/physics_informed_solver/self_adaptive_pinn.py       91       3  96.70%   318-321
solvers/__init__.py                                         6       6  0.00%    3-12
solvers/pinns/__init__.py                                   6       6  0.00%    3-12
TOTAL                                                    4430     508  88.53%

Results for commit: bfa7ebe

Minimum allowed coverage is 80.123%

♻️ This comment has been updated with latest results

@dario-coscia dario-coscia self-assigned this Mar 24, 2025
@dario-coscia dario-coscia added bug Something isn't working high priority Higher priority pr-to-review Label for PRs that are ready to be reviewed labels Mar 24, 2025
@dario-coscia dario-coscia changed the title Update MultiSolver Update MultiSolverInterface Mar 24, 2025
@GiovanniCanali (Collaborator) left a comment


Thank you @dario-coscia for this PR.

I agree with the proposed changes, but I do not like how you make the user aware of manual optimization. In my opinion, introducing a bug on purpose is not the cleanest way to achieve this.

What about a note in the documentation?

@GiovanniCanali (Collaborator) commented Mar 24, 2025


I propose something like this:

"""
Docstring...

.. note::
    When creating a subclass of :class:`MultiSolverInterface`, ensure that `automatic_optimization`
    is set to `False`, and manually handle backpropagation, as well as optimizer and scheduler steps.
"""

What do you think, @ndem0?

@dario-coscia (Collaborator, Author)


By default, Lightning sets automatic_optimization to True, which I agree with, since we want automatic optimization by default. I can remove the line, but if PyTorch Lightning ever changes the default, it will cause unexpected behaviour in the MultiSolver.

@GiovanniCanali (Collaborator)


I see your point! What about setting it directly to False, then? As far as I know, there are no cases in which a subclass of MultiSolverInterface uses automatic_optimization=True.

@dario-coscia (Collaborator, Author)


The problem with setting it to False is that, if you forget to do manual optimization in the training step (which needs to be overridden), Lightning does not complain: you get code that runs but does not optimize, and the error is hard to trace.
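The failure mode described above can be sketched in plain Python (the classes below are hypothetical stand-ins, not PINA's or Lightning's actual code): with automatic optimization disabled, a subclass that forgets the manual optimizer calls still runs without error, but the optimizer is never stepped.

```python
class FakeOptimizer:
    """Stands in for a torch optimizer; counts how often step() is called."""
    def __init__(self):
        self.steps = 0
    def zero_grad(self):
        pass
    def step(self):
        self.steps += 1

class MultiSolverBase:
    """Mimics a Lightning module with automatic optimization disabled."""
    automatic_optimization = False
    def __init__(self):
        self.optimizer = FakeOptimizer()
    def training_step(self, batch):
        raise NotImplementedError

class CorrectSolver(MultiSolverBase):
    def training_step(self, batch):
        loss = sum(batch)           # placeholder for the real loss
        self.optimizer.zero_grad()  # manual optimization: the subclass
        self.optimizer.step()       # must drive the optimizer itself
        return loss

class ForgetfulSolver(MultiSolverBase):
    def training_step(self, batch):
        return sum(batch)           # forgot the manual optimizer calls

ok, bad = CorrectSolver(), ForgetfulSolver()
for _ in range(3):
    ok.training_step([1.0, 2.0])
    bad.training_step([1.0, 2.0])

print(ok.optimizer.steps)   # 3 -> weights would have been updated
print(bad.optimizer.steps)  # 0 -> training "runs" but nothing optimizes
```

Both loops complete without any exception, which is exactly the hard-to-trace behaviour described above: only inspecting the step count (or the loss curve) reveals that the second solver never trained.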

@GiovanniCanali (Collaborator)


I understand. However, I am confident that a note in the documentation should suffice.

I suggest waiting for other opinions, so as to find the best option. Meanwhile, thank you for the clarification!

@dario-coscia (Collaborator, Author)

@ndem0 We have a discussion going with @GiovanniCanali (see above) on this line here:

# set automatic optimization to True.

What do you think the best option might be? Thank you!

@dario-coscia dario-coscia requested review from FilippoOlivo and removed request for ndem0 April 3, 2025 17:05
@dario-coscia dario-coscia force-pushed the dario-coscia-patch-1 branch from f8d0d52 to a92277e Compare April 3, 2025 17:39
@dario-coscia (Collaborator, Author) commented Apr 3, 2025


@GiovanniCanali @FilippoOlivo I talked quickly with @ndem0 about the issue to get feedback, but I had trouble implementing a runtime warning. Therefore, I set automatic_optimization to False (as it should be) and added a note in the documentation. I think this is the best fix: it is not intrusive, and we can always change it in the future. Since this is a bug, I would like to merge it into the master branch as soon as possible.
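A minimal sketch of the adopted fix, assuming a Lightning-style interface (the class body below is illustrative, not PINA's exact source): the constructor disables automatic optimization once for every subclass, and the docstring note documents the contract, as proposed earlier in the thread.

```python
class MultiSolverInterface:
    """Interface for solvers that train several models jointly.

    .. note::
        When creating a subclass of :class:`MultiSolverInterface`, ensure
        that ``automatic_optimization`` is set to ``False``, and manually
        handle backpropagation, as well as optimizer and scheduler steps,
        inside ``training_step``.
    """

    def __init__(self):
        # Training multiple models with multiple optimizers requires
        # manual optimization in Lightning, so the interface sets the
        # flag here instead of relying on each subclass to remember it.
        self.automatic_optimization = False

solver = MultiSolverInterface()
print(solver.automatic_optimization)  # False
```

Setting the flag in the interface constructor means a future change to Lightning's default cannot silently re-enable automatic optimization for multi-solver subclasses, which addresses the concern raised earlier in the discussion.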

@GiovanniCanali (Collaborator) left a comment


Everything looks fine to me!

@GiovanniCanali (Collaborator)


I am ok with this solution!

@dario-coscia (Collaborator, Author)

@GiovanniCanali if you are OK with it, please approve the PR for merging.

@dario-coscia dario-coscia merged commit 0a60ed4 into master Apr 7, 2025
19 checks passed
@dario-coscia dario-coscia deleted the dario-coscia-patch-1 branch April 8, 2025 08:20
FilippoOlivo pushed a commit to FilippoOlivo/PINA that referenced this pull request Apr 17, 2025
dario-coscia added a commit that referenced this pull request Apr 17, 2025
GiovanniCanali pushed a commit to GiovanniCanali/PINA that referenced this pull request Dec 2, 2025
GiovanniCanali pushed a commit to GiovanniCanali/PINA that referenced this pull request Dec 2, 2025
GiovanniCanali pushed a commit to GiovanniCanali/PINA that referenced this pull request Dec 2, 2025

Labels

bug Something isn't working high priority Higher priority pr-to-review Label for PRs that are ready to be reviewed

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Multisolver raises error when not specifying optimizers

4 participants