tutorials/tutorial11/tutorial.py
# # Tutorial: Introduction to `Trainer` class
# [Open in Colab](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial11/tutorial.ipynb)
#
# In this tutorial, we will delve deeper into the functionality of the `Trainer` class, which serves as the cornerstone for training **PINA** [Solvers](https://mathlab.github.io/PINA/_rst/_code.html#solvers).
#
# The `Trainer` class offers a plethora of features aimed at improving model accuracy, reducing training time and memory usage, facilitating logging visualization, and more thanks to the amazing job done by the PyTorch Lightning team!
#
# Our leading example will revolve around solving a simple regression problem where we want to approximate the following function with a Neural Net model $\mathcal{M}_{\theta}$:
# $$y = x^3$$
# by having only a set of $20$ observations $\{x_i, y_i\}_{i=1}^{20}$, with $x_i \sim\mathcal{U}[-3, 3]\;\;\forall i\in(1,\dots,20)$.
#
# Let's start by importing useful modules!
# In[ ]:
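# A minimal sketch of the setup used in the rest of this tutorial: $20$
# observations of $y = x^3$, a small `FeedForward` network, and a
# `SupervisedSolver`. The exact `Condition` keywords and import paths vary
# across PINA releases, so treat them as assumptions.

import torch

from pina import Condition, LabelTensor, Trainer
from pina.model import FeedForward
from pina.problem import AbstractProblem
from pina.solvers import SupervisedSolver  # newer releases: pina.solver

# sample x_i ~ U[-3, 3] and compute y_i = x_i^3
x = LabelTensor(torch.rand(20, 1) * 6.0 - 3.0, ["x"])
y = LabelTensor(x.pow(3), ["y"])


class CubicProblem(AbstractProblem):
    input_variables = ["x"]
    output_variables = ["y"]
    conditions = {"data": Condition(input_points=x, output_points=y)}


problem = CubicProblem()
model = FeedForward(input_dimensions=1, output_dimensions=1, layers=[10, 10])
solver = SupervisedSolver(problem=problem, model=model)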
# ## Trainer Accelerator
#
# When creating the `Trainer`, the best-performing `accelerator` available on your system is chosen **by default**, ranked as follows:
# 1. TPU
# 2. IPU
# 3. HPU
# 4. [GPU](https://www.intel.com/content/www/us/en/products/docs/processors/what-is-a-gpu.html#:~:text=What%20does%20GPU%20stand%20for,video%20editing%2C%20and%20gaming%20applications) or [MPS](https://developer.apple.com/metal/pytorch/)
# 5. CPU
#
# To set the `accelerator` manually, run:
#
# * `accelerator = {'gpu', 'tpu', 'hpu', 'mps', 'cpu', 'ipu'}` sets the accelerator to a specific device
# In[15]:
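# A sketch of forcing CPU training; keyword arguments not consumed by **PINA**
# are forwarded to the underlying `lightning.Trainer`.

trainer = Trainer(
    solver=solver,
    accelerator="cpu",  # force CPU even if a GPU is available
    max_epochs=1000,
    enable_model_summary=False,
)
trainer.train()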
# As you can see, even if a `GPU` is available on the system, it is not used since we set `accelerator='cpu'`.
# ## Trainer Logging
#
# In **PINA** you can log metrics in different ways. The simplest approach is to use the `MetricTracker` class from `pina.callbacks`, as seen in the [*Introduction to Physics Informed Neural Networks training*](https://github.com/mathLab/PINA/blob/master/tutorials/tutorial1/tutorial.ipynb) tutorial.
#
# However, especially when we need to train multiple times to get an average of the loss across multiple runs, `lightning.pytorch.loggers` might be useful. Here we will use `TensorBoardLogger` ([more on logging here](https://lightning.ai/docs/pytorch/stable/extensions/logging.html)), but you can choose the one you prefer (or write your own).
#
# We will now import `TensorBoardLogger`, do three runs of training, and then visualize the results. Notice we set `enable_model_summary=False` to avoid model summary specifications (e.g. number of parameters); set it to `True` if needed.
# In[17]:
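# A sketch of the three logged runs; a fresh model is created each time so the
# runs are independent, and all logs land under `training_log/` for TensorBoard.

from lightning.pytorch.loggers import TensorBoardLogger

for _ in range(3):
    model = FeedForward(input_dimensions=1, output_dimensions=1, layers=[10, 10])
    solver = SupervisedSolver(problem=problem, model=model)
    trainer = Trainer(
        solver=solver,
        accelerator="cpu",
        max_epochs=1000,
        enable_model_summary=False,
        logger=TensorBoardLogger(save_dir="training_log"),
    )
    trainer.train()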
# We can now visualize the logs by simply running `tensorboard --logdir=training_log/` in the terminal. If you train for 1000 epochs, you should obtain a webpage similar to the one shown below.
# As you can see, by default, **PINA** logs the losses shown in the progress bar, as well as the number of epochs. You can always log additional quantities by defining a **callback** ([more on callbacks](https://lightning.ai/docs/pytorch/stable/extensions/callbacks.html)), or by subclassing the solver and overriding its **hooks** ([more on hooks](https://lightning.ai/docs/pytorch/stable/common/lightning_module.html#hooks)).
#
# ## Trainer Callbacks
#
# Whenever we need to access certain steps of the training for logging, perform static modifications (i.e. ones that do not change the `Solver`), or update `Problem` hyperparameters (static variables), we can use **Callbacks**. **Callbacks** let you attach arbitrary self-contained programs to your training: at specific points during the flow of execution (hooks), the Callback interface lets you run a full, encapsulated set of functionality, decoupling logic that does not need to live inside **PINA** `Solver`s.
#
# Lightning has a callback system to execute them when needed. **Callbacks** should capture NON-ESSENTIAL logic that is NOT required for your lightning module to run.
#
# The following are best practices when using/designing callbacks:
#
# * Callbacks should be isolated in their functionality.
# * Your callback should not rely on the behavior of other callbacks in order to work properly.
# * Do not manually call methods from the callback.
# * Directly calling methods (e.g., `on_validation_end`) is strongly discouraged.
# * Whenever possible, your callbacks should not depend on the order in which they are executed.
#
# We will now implement a naive version of `MetricTracker` to show how callbacks work. Notice that this is a very simple application of callbacks; **PINA** already provides more advanced ones in `pina.callbacks`.
# In[18]:
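# A sketch of the naive tracker: the class name and storage format are our
# choice, but the `on_train_epoch_end` hook and `trainer.callback_metrics` are
# standard Lightning API.

from lightning.pytorch.callbacks import Callback


class NaiveMetricTracker(Callback):
    """Save a copy of the logged metrics at the end of every training epoch."""

    def __init__(self):
        self.saved_metrics = []

    def on_train_epoch_end(self, trainer, pl_module):
        # callback_metrics holds everything logged so far (e.g. the train_loss)
        self.saved_metrics.append(
            {key: value.item() for key, value in trainer.callback_metrics.items()}
        )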
# Let's see the results when applied to the problem. You can define **callbacks** when initializing the `Trainer` by using the `callbacks` argument, which expects a list of callbacks.
#
# In[19]:
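# A sketch of passing the callback to the `Trainer` (the `callbacks` argument
# expects a list):

model = FeedForward(input_dimensions=1, output_dimensions=1, layers=[10, 10])
solver = SupervisedSolver(problem=problem, model=model)
trainer = Trainer(
    solver=solver,
    accelerator="cpu",
    max_epochs=1000,
    enable_model_summary=False,
    callbacks=[NaiveMetricTracker()],
)
trainer.train()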
trainer.callbacks[0].saved_metrics[:3] # only the first three epochs
# PyTorch Lightning also has some built-in `Callbacks` which can be used in **PINA**; [here is an extensive list](https://lightning.ai/docs/pytorch/stable/extensions/callbacks.html#built-in-callbacks).
#
# We can, for example, try the `EarlyStopping` routine, which automatically stops the training when a specific metric converges (here the `train_loss`). To let the training run indefinitely until that happens, set `max_epochs=-1`.
# In[22]:
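# A sketch with `EarlyStopping`; the `patience` and `min_delta` values here are
# illustrative, not the tutorial's originals.

from lightning.pytorch.callbacks import EarlyStopping

trainer = Trainer(
    solver=solver,
    accelerator="cpu",
    max_epochs=-1,  # no epoch budget: EarlyStopping decides when to stop
    enable_model_summary=False,
    callbacks=[EarlyStopping(monitor="train_loss", patience=50, min_delta=1e-5)],
)
trainer.train()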
# As we can see, the model automatically stops when the logged metric stops improving!
# ## Trainer Tips to Boost Accuracy, Save Memory and Speed Up Training
#
# Until now we have seen how to choose the right `accelerator`, how to log and visualize the results, and how to interface with the program in order to add specific parts of code at specific points via `callbacks`.
# Now, we will focus on how to boost your training by saving memory and speeding it up, while maintaining the same or even better degree of accuracy!
#
# There are several built-in methods developed in PyTorch Lightning which can be applied straightforwardly in **PINA**. Here we report a few:
#
# * [Stochastic Weight Averaging](https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/) to boost accuracy
# * [Gradient Clipping](https://deepgram.com/ai-glossary/gradient-clipping) to reduce computational time (and improve accuracy)
# * [Gradient Accumulation](https://lightning.ai/docs/pytorch/stable/common/optimization.html#id3) to save memory consumption
# * [Mixed Precision Training](https://lightning.ai/docs/pytorch/stable/common/optimization.html#id3) to save memory consumption
#
# We will just demonstrate how to use the first two and see the results compared to standard training.
# We use the [`Timer`](https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.callbacks.Timer.html#lightning.pytorch.callbacks.Timer) callback from `pytorch_lightning.callbacks` to track the times. Let's start by training a simple model without any optimization (train for 500 epochs).
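# A sketch of the two timed runs: a plain 500-epoch baseline, then the same
# training with `StochasticWeightAveraging`. The `swa_lrs` value is
# illustrative; note we import from `lightning.pytorch.callbacks`, the current
# name of the `pytorch_lightning.callbacks` namespace.

from lightning.pytorch.callbacks import StochasticWeightAveraging, Timer

# baseline run, timed
timer = Timer()
solver = SupervisedSolver(
    problem=problem,
    model=FeedForward(input_dimensions=1, output_dimensions=1, layers=[10, 10]),
)
trainer = Trainer(
    solver=solver,
    accelerator="cpu",
    max_epochs=500,
    enable_model_summary=False,
    callbacks=[timer],
)
trainer.train()
print("baseline training time:", timer.time_elapsed("train"))

# same run with Stochastic Weight Averaging
timer = Timer()
solver = SupervisedSolver(
    problem=problem,
    model=FeedForward(input_dimensions=1, output_dimensions=1, layers=[10, 10]),
)
trainer = Trainer(
    solver=solver,
    accelerator="cpu",
    max_epochs=500,
    enable_model_summary=False,
    callbacks=[timer, StochasticWeightAveraging(swa_lrs=0.005)],
)
trainer.train()
print("SWA training time:", timer.time_elapsed("train"))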
# As you can see, the training time does not change at all! Notice that around epoch 350
# the scheduler switches from the default `ConstantLR` to the Stochastic Weight Averaging learning rate scheduler (`SWALR`).
# This is because, by default, `StochasticWeightAveraging` is activated after `int(swa_epoch_start * max_epochs)` epochs, with `swa_epoch_start=0.7`. The final `train_loss` is lower when `StochasticWeightAveraging` is used.
#
# We will now do the same, but clipping the gradients to be relatively small.
# In[25]:
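# A sketch of the same run with gradient clipping added; `gradient_clip_val` is
# forwarded to the underlying `lightning.Trainer`. Pairing it with SWA is our
# assumption about the elided cell.

timer = Timer()
solver = SupervisedSolver(
    problem=problem,
    model=FeedForward(input_dimensions=1, output_dimensions=1, layers=[10, 10]),
)
trainer = Trainer(
    solver=solver,
    accelerator="cpu",
    max_epochs=500,
    enable_model_summary=False,
    gradient_clip_val=0.1,  # clip gradients to a small norm
    callbacks=[timer, StochasticWeightAveraging(swa_lrs=0.005)],
)
trainer.train()
print("clipped training time:", timer.time_elapsed("train"))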
# As we can see, by applying gradient clipping, we were able to achieve even lower error!
#
# ## What's Next?
#
# Now you know how to use the `Trainer` class efficiently in **PINA**! There are several directions you can explore next:
#
# 1. **Explore Training on Different Devices**: Test training times on various devices (e.g., `TPU`) to compare performance.
#
# 2. **Reduce Memory Costs**: Experiment with mixed precision training and gradient accumulation to optimize memory usage, especially when training Neural Operators.
#
# 3. **Benchmark `Trainer` Speed**: Benchmark the training speed of the `Trainer` class for different precisions to identify potential optimizations.
#
# 4. **...and many more!**: Consider expanding to **multi-GPU** setups or other advanced configurations for large-scale training.
#
# For more resources and tutorials, check out the [PINA Documentation](https://mathlab.github.io/PINA/).
tutorials/tutorial14/tutorial.py
# coding: utf-8
# # Tutorial: Learning Bifurcating PDE Solutions with Physics-Informed Deep Ensembles
#
# [Open in Colab](https://colab.research.google.com/github/mathLab/PINA/blob/master/tutorials/tutorial14/tutorial.ipynb)
#
# This tutorial demonstrates how to use the Deep Ensemble Physics Informed Network (DeepEnsemblePINN) to learn PDEs exhibiting bifurcating behavior, as discussed in [*Learning and Discovering Multiple Solutions Using Physics-Informed Neural Networks with Random Initialization and Deep Ensemble*](https://arxiv.org/abs/2503.06320).
#
# Let’s begin by importing the necessary libraries.
# In[ ]:
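# A minimal sketch of the imports this tutorial relies on; the solver import
# path is an assumption and may differ across PINA releases.

import torch
import matplotlib.pyplot as plt

from pina import Condition, LabelTensor, Trainer
from pina.model import FeedForward
from pina.problem import TimeDependentProblem
from pina.solver import DeepEnsemblePINN  # older releases: pina.solvers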
# ## Deep Ensemble
#
# Deep Ensemble methods improve model performance by leveraging the diversity of predictions generated by multiple neural networks trained on the same problem. Each network in the ensemble is trained independently—typically with different weight initializations or even slight variations in the architecture or data sampling. By combining their outputs (e.g., via averaging or majority voting), ensembles reduce overfitting, increase robustness, and improve generalization.
#
# This approach allows the ensemble to capture different perspectives of the problem, leading to more accurate and reliable predictions.
# The image above illustrates a Deep Ensemble setup, where multiple models attempt to predict the text from an image. While individual models may make errors (e.g., predicting "PONY" instead of "PINA"), combining their outputs—such as taking the majority vote—often leads to the correct result. This ensemble effect improves reliability by mitigating the impact of individual model biases.
#
#
# ## Deep Ensemble Physics-Informed Networks
#
# In the context of Physics-Informed Neural Networks (PINNs), Deep Ensembles help the network discover different branches or multiple solutions of a PDE that exhibits bifurcating behavior.
#
# By training a diverse set of models with different initializations, Deep Ensemble methods overcome the limitations of single-initialization models, which may converge to only one of the possible solutions. This approach is particularly useful when the solution space of the problem contains multiple valid physical states or behaviors.
#
#
# ## The Bratu Problem
#
# In this tutorial, we'll train a `DeepEnsemblePINN` solver to solve a bifurcating ODE known as the **Bratu problem**. The ODE is given by:
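#
# $$\frac{d^2u}{dt^2} + \lambda e^{u} = 0, \quad t \in (0, 1), \qquad u(0) = u(1) = 0.$$
#
# This is the classical one-dimensional Bratu formulation (it matches the critical value quoted below). Its exact solutions take the form
#
# $$u(t) = -2\ln\left[\frac{\cosh\left(\alpha(2t - 1)\right)}{\cosh(\alpha)}\right],$$
#
# where $\alpha$ solves the transcendental equation $\sqrt{2\lambda}\,\cosh(\alpha) = 4\alpha$.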
# When $\lambda < 3.513830719$, the equation admits two solutions $\alpha_1$ and $\alpha_2$, which correspond to two distinct solutions of the original ODE: $u_1$ and $u_2$.
#
# In this tutorial, we set $\lambda = 1$, which leads to:
#
# - $\alpha_1 \approx 0.37929$
# - $\alpha_2 \approx 2.73468$
#
# We first write the problem class; we do not write the boundary conditions, since we will impose them as hard constraints.
#
# > **👉 We have a dedicated [tutorial](https://mathlab.github.io/PINA/tutorial16/tutorial.html) to teach how to build a Problem — have a look if you're interested!**
#
# > **👉 We have a dedicated [tutorial](https://mathlab.github.io/PINA/tutorial3/tutorial.html) to teach how to impose hard constraints — have a look if you're interested!**
# In[80]:
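# A sketch of how the problem class might look. The import paths, `Condition`
# signature, and operator names follow one recent PINA API and should be
# treated as assumptions; older releases use e.g. `pina.geometry` and
# `pina.operators` instead.

from pina.domain import CartesianDomain
from pina.equation import Equation
from pina.operator import grad


def bratu_equation(input_, output_):
    # residual of u'' + lambda * exp(u) = 0 with lambda = 1
    u_t = grad(output_, input_, components=["u"], d=["t"])
    u_tt = grad(u_t, input_, d=["t"])
    return u_tt + torch.exp(output_.extract(["u"]))


class BratuProblem(TimeDependentProblem):
    output_variables = ["u"]
    temporal_domain = CartesianDomain({"t": [0.0, 1.0]})
    # no boundary conditions here: they will be hard-imposed by the model
    conditions = {
        "ode": Condition(domain="temporal_domain", equation=Equation(bratu_equation)),
    }


problem = BratuProblem()
problem.discretise_domain(100)  # sample collocation points in t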
# ## Defining the Deep Ensemble Models
#
# Now that the problem setup is complete, we move on to creating an **ensemble of models**. Each ensemble member will be a standard `FeedForward` neural network, wrapped inside a custom `Model` class.
#
# Each model's weights are initialized using a **normal distribution** with mean 0 and standard deviation 2. This random initialization is crucial to promote diversity across the ensemble members, allowing the models to converge to potentially different solutions of the PDE.
#
# The final ensemble is simply a **list of PyTorch models**, which we will later pass to the `DeepEnsemblePINN` solver.
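# A sketch of the ensemble construction. The $t(1-t)$ multiplicative factor is
# one common way to hard-impose $u(0) = u(1) = 0$; treat the architecture and
# wrapper details as assumptions rather than the paper's exact choices.


class Model(torch.nn.Module):
    """FeedForward network with hard-imposed boundary conditions."""

    def __init__(self):
        super().__init__()
        self.net = FeedForward(
            input_dimensions=1, output_dimensions=1, layers=[20, 20]
        )
        # draw every weight and bias from N(0, 2) to diversify the ensemble
        for layer in self.net.modules():
            if isinstance(layer, torch.nn.Linear):
                torch.nn.init.normal_(layer.weight, mean=0.0, std=2.0)
                torch.nn.init.normal_(layer.bias, mean=0.0, std=2.0)

    def forward(self, x):
        t = x.extract(["t"])
        # the t * (1 - t) factor forces u(0) = u(1) = 0 exactly
        return t * (1.0 - t) * self.net(x)


models = [Model() for _ in range(10)]
# evaluate every member at t = 0.5 to inspect the (diverse) initial outputs
print([m(LabelTensor(torch.tensor([[0.5]]), ["t"])).item() for m in models])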
# As you can see, we get different outputs since the neural networks are initialized differently.
#
# ## Training with `DeepEnsemblePINN`
#
# Now that everything is ready, we can train the models using the `DeepEnsemblePINN` solver! 🎯
#
# This solver is constructed by combining multiple neural network models that all aim to solve the same PDE. Each model $\mathcal{M}_{i \in \{1, \dots, 10\}}$ in the ensemble contributes a unique perspective due to different random initializations.
#
# This diversity allows the ensemble to **capture multiple branches or bifurcating solutions** of the problem, making it especially powerful for PDEs like the Bratu problem.
#
# Once the `DeepEnsemblePINN` solver is defined with all the models, we train them using the `Trainer` class, as with any other solver in **PINA**. We also build a callback to store the value of `u(0.5)` during training iterations.
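# A sketch of the training loop. The `DeepEnsemblePINN` constructor arguments
# and the `models` attribute accessed inside the callback are assumptions
# about the solver's interface.

from lightning.pytorch.callbacks import Callback


class TrackCenterValue(Callback):
    """Store each ensemble member's prediction at t = 0.5 after every epoch."""

    def __init__(self):
        self.history = []

    def on_train_epoch_end(self, trainer, pl_module):
        point = LabelTensor(torch.tensor([[0.5]]), ["t"])
        with torch.no_grad():
            self.history.append([m(point).item() for m in pl_module.models])


solver = DeepEnsemblePINN(problem=problem, models=models)
trainer = Trainer(
    solver=solver,
    accelerator="cpu",
    max_epochs=2000,
    enable_model_summary=False,
    callbacks=[TrackCenterValue()],
)
trainer.train()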
# As you can see, different networks in the ensemble converge to different values of $u(0.5)$, which means we can actually **spot the bifurcation** in the solution space!
#
# This is a powerful demonstration of how **Deep Ensemble Physics-Informed Neural Networks** are capable of learning **multiple valid solutions** of a PDE that exhibits bifurcating behavior.
#
# We can also visualize the ensemble predictions to better observe the multiple branches:
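# A sketch of the visualization: evaluate every ensemble member on a fine grid
# and overlay the predicted branches.

ts = torch.linspace(0, 1, 200).reshape(-1, 1)
t = LabelTensor(ts, ["t"])
with torch.no_grad():
    for i, m in enumerate(models):
        plt.plot(ts.flatten(), m(t).flatten(), label=f"model {i}")
plt.xlabel("t")
plt.ylabel("u(t)")
plt.legend()
plt.show()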
# You have completed the tutorial on deep ensemble PINNs for bifurcating PDEs, well done! There are many potential next steps you can explore:
#
# 1. **Train the network longer or with different hyperparameters**: Experiment with different configurations of the individual models; you can also build the ensemble from models with different layers, activations, and so on, to improve accuracy.
#
# 2. **Solve more complex problems**: The original paper tackles much more complex problems that can be solved with PINA; we suggest you try to implement and solve them!
#
# 3. **...and many more!**: There are countless directions to explore further; for example, what happens when you vary the network initialization hyperparameters?
#
# For more resources and tutorials, check out the [PINA Documentation](https://mathlab.github.io/PINA/).