tutorials/tutorial1/tutorial.py
+21 −31 lines changed: 21 additions & 31 deletions
@@ -53,7 +53,7 @@
 # What if our equation is also time-dependent? In this case, our `class` will inherit from both `SpatialProblem` and `TimeDependentProblem`:
 #
 
-# In[1]:
+# In[10]:
 
 
 ## routine needed to run the notebook on Google Colab
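# NOTE (editor's sketch): as a minimal illustration of the multiple inheritance mentioned above,
# a time- and space-dependent problem skeleton might look like the lines below. The domain bounds
# and the `CartesianDomain` import path are assumptions (the path differs between PINA versions).
from pina.problem import SpatialProblem, TimeDependentProblem
from pina.domain import CartesianDomain

class TimeSpaceODE(SpatialProblem, TimeDependentProblem):
    output_variables = ['u']
    spatial_domain = CartesianDomain({'x': [0, 1]})
    temporal_domain = CartesianDomain({'t': [0, 1]})
    # the initial condition and the physics residual would be defined here as `conditions`
    conditions = {}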
@@ -87,9 +87,9 @@ class TimeSpaceODE(SpatialProblem, TimeDependentProblem):
 # ### Write the problem class
 #
-# Once the `Problem` class is initialized, we need to represent the differential equation in **PINA**. In order to do this, we need to load the **PINA** operators from the `pina.operators` module. Again, we'll consider Equation (1) and represent it in **PINA**:
+# Once the `Problem` class is initialized, we need to represent the differential equation in **PINA**. In order to do this, we need to load the **PINA** operators from the `pina.operator` module. Again, we'll consider Equation (1) and represent it in **PINA**:
 
-# In[2]:
+# In[1]:
 
 
 from pina.problem import SpatialProblem
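# NOTE (editor's sketch): a residual written with the differential operators from `pina.operator`
# typically looks like the function below. The specific equation used here (du/dx - u = 0) is
# only an illustration and is not necessarily the tutorial's Equation (1).
from pina.operator import grad

def ode_equation(input_, output_):
    # first derivative of the network output u with respect to x, minus u itself
    u_x = grad(output_, input_, components=['u'], d=['x'])
    u = output_.extract(['u'])
    return u_x - u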
@@ -99,7 +99,7 @@ class TimeSpaceODE(SpatialProblem, TimeDependentProblem):
 # Data for training can come in the form of direct numerical simulation results, or points in the domains. In case we perform unsupervised learning, we just need the collocation points for training, i.e. points where we want to evaluate the neural network. Sampling points in **PINA** is very easy; here we show three examples using the `.discretise_domain` method of the `AbstractProblem` class.
 
-# In[3]:
+# In[2]:
 
 
 # sampling 20 points in [0, 1] through discretization in all locations
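# NOTE (editor's sketch): the three sampling calls themselves are not visible in this diff.
# Typical usage of `.discretise_domain` looks like the lines below; the mode names and the
# `domains` keyword are assumptions and may differ between PINA versions.
problem.discretise_domain(n=20, mode='grid', domains='all')    # uniform grid in every domain
problem.discretise_domain(n=20, mode='random', domains='all')  # uniform random sampling
problem.discretise_domain(n=20, mode='lh', domains='all')      # latin hypercube sampling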
@@ … @@
-# Once we have defined the problem and generated the data, we can start the modelling. Here we will choose a `FeedForward` neural network available in `pina.model`, and we will train using the `PINN` solver from `pina.solver`. We highlight that this training is fairly simple; for more advanced topics consider the tutorials in the ***Physics Informed Neural Networks*** section of ***Tutorials***. For training we use the `Trainer` class from `pina.trainer`. Here we show a very short training and some methods for plotting the results. Notice that by default all relevant metrics (e.g. the MSE error during training) are tracked using a `lightning` logger, by default `CSVLogger`. If you want to track the metrics yourself without a logger, use `pina.callbacks.MetricTracker`.
+# Once we have defined the problem and generated the data, we can start the modelling. Here we will choose a `FeedForward` neural network available in `pina.model`, and we will train using the `PINN` solver from `pina.solver`. We highlight that this training is fairly simple; for more advanced topics consider the tutorials in the ***Physics Informed Neural Networks*** section of ***Tutorials***. For training we use the `Trainer` class from `pina.trainer`. Here we show a very short training and some methods for plotting the results. Notice that by default all relevant metrics (e.g. the MSE error during training) are tracked using a `lightning` logger, by default `CSVLogger`. If you want to track the metrics yourself without a logger, use `pina.callback.MetricTracker`.
 
-trainer = Trainer(solver=pinn, max_epochs=1500, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False)  # we train on CPU and avoid model summary at beginning of training (optional)
+trainer = Trainer(solver=pinn, max_epochs=1500, logger=TensorBoardLogger('tutorial_logs'), accelerator='cpu', enable_model_summary=False)  # we train on CPU and avoid model summary at beginning of training (optional)
 
 # train
 trainer.train()
 
-# After the training we can inspect the trainer's logged metrics (by default **PINA** logs the mean square error residual loss). The logged metrics can be accessed online using one of the `Lightinig` loggers. The final loss can be accessed via `trainer.logged_metrics`.
+# After the training we can inspect the trainer's logged metrics (by default **PINA** logs the mean square error residual loss). The logged metrics can be accessed online using one of the `Lightning` loggers. The final loss can be accessed via `trainer.logged_metrics`.
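# NOTE (editor's sketch): `trainer.logged_metrics` is a plain dictionary, so inspecting the final
# loss can be as simple as the lines below (the key names depend on the problem's conditions):
final_metrics = trainer.logged_metrics
print(final_metrics)  # e.g. {'train_loss': tensor(...), ...}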
@@ … @@
 # The Poisson problem is written in **PINA** code as a class. The equations are written as *conditions* that should be satisfied in the corresponding domains. The *truth_solution*
 # is the exact solution which will be compared with the predicted one.
 
-# In[2]:
+# In[5]:
 
 
 class Poisson(SpatialProblem):
@@ -90,33 +96,68 @@ def poisson_sol(self, pts):
 # After the problem, the feed-forward neural network is defined through the class `FeedForward`. This neural network takes as input the coordinates (in this case $x$ and $y$) and provides the unknown field of the Poisson problem. The residuals of the equations are evaluated at several sampling points (which the user can manipulate using the method `CartesianDomain_pts`) and the loss minimized by the neural network is the sum of the residuals.
 #
-# In this tutorial, the neural network is composed of two hidden layers of 10 neurons each, and it is trained for 1000 epochs. We use the `MetricTracker` class to track the metrics during training.
+# In this tutorial, the neural network is composed of two hidden layers of 10 neurons each, and it is trained for 1000 epochs with a learning rate of 0.006 and $l_2$ weight regularization set to $10^{-8}$. These parameters can be modified as desired.
 
-# In[4]:
+# In[6]:
 
 
 # make model + solver + trainer
+from pina.optim import TorchOptimizer
 model = FeedForward(
     layers=[10, 10],
     func=Softplus,
     output_dimensions=len(problem.output_variables),
     input_dimensions=len(problem.input_variables)
 )
-pinn = PINN(problem, model)
-trainer = Trainer(pinn, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False)  # we train on CPU and avoid model summary at beginning of training (optional)
+)  # we train on CPU and avoid model summary at beginning of training (optional)
 
 # train
 trainer.train()
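# NOTE (editor's sketch): the diff above truncates the new solver/trainer cell, so only its closing
# parenthesis survives. Based on the imported `TorchOptimizer`, the stated hyper-parameters
# (learning rate 0.006, weight decay 1e-8) and the `TensorBoardLogger` used in the later cells,
# the new cell plausibly looks like the lines below; the exact argument names are assumptions,
# not content of this diff (imports of `torch`, `PINN`, `Trainer` and `TensorBoardLogger` are
# assumed from earlier cells).
pinn = PINN(
    problem,
    model,
    optimizer=TorchOptimizer(torch.optim.Adam, lr=0.006, weight_decay=1e-8),  # assumed optimizer setup
)
trainer = Trainer(
    solver=pinn,
    max_epochs=1000,
    logger=TensorBoardLogger('tutorial_logs'),
    accelerator='cpu',
    enable_model_summary=False,
)  # we train on CPU and avoid model summary at beginning of training (optional)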
 
-# Now the `Plotter` class is used to plot the results.
+# Now we plot the results using `matplotlib`.
 # The solution predicted by the neural network is plotted on the left, the exact one is represented in the center, and on the right the error between the exact and the predicted solutions is shown.
 
+plt.tricontourf(  # convert to torch tensor + flatten
+    spatial_samples.extract("x").tensor.flatten(),
+    spatial_samples.extract("y").tensor.flatten(),
+    field.tensor.flatten(),
+)
+plt.colorbar(), plt.tight_layout()
 
 
+# In[8]:
 
 
+plt.figure(figsize=(12, 6))
+plot_solution(solver=pinn)
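# NOTE (editor's sketch): `plot_solution` is not defined anywhere in this diff. A minimal helper
# matching the description above (prediction on the left, exact solution in the center, error on
# the right) might look like the following; the [0, 1] x [0, 1] domain, the `LabelTensor` usage
# and the `truth_solution` attribute are assumptions about the rest of the tutorial.
import torch
import matplotlib.pyplot as plt
from pina import LabelTensor

def plot_solution(solver, resolution=64):
    """Plot prediction (left), exact solution (center) and pointwise error (right)."""
    coords = torch.cartesian_prod(torch.linspace(0, 1, resolution),
                                  torch.linspace(0, 1, resolution))
    pts = LabelTensor(coords, labels=['x', 'y'])
    pred = solver(pts).detach().flatten()
    exact = solver.problem.truth_solution(pts).detach().flatten()
    for i, (field, title) in enumerate(
            [(pred, 'prediction'), (exact, 'exact'), (pred - exact, 'error')], start=1):
        plt.subplot(1, 3, i)
        plt.tricontourf(coords[:, 0], coords[:, 1], field, levels=32)
        plt.title(title)
        plt.colorbar()
    plt.tight_layout()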
 
 
 # ## Solving the problem with extra-features PINNs
@@ -135,7 +176,7 @@ def poisson_sol(self, pts):
 #
 # Finally, we perform the same training as before: the problem is `Poisson`, the network is composed of the same number of neurons, and the optimizer parameters are the same as in the previous test; the only change is the new extra feature.
 
-# In[6]:
+# In[9]:
 
 
 class SinSin(torch.nn.Module):
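# NOTE (editor's sketch): the body of `SinSin` is cut off by the diff. An extra feature of this
# kind is a fixed function of the input coordinates that is appended to the network input; assuming
# the classic sin(pi*x)*sin(pi*y) feature, it could read as follows (the formula and the feature
# label 'k' are assumptions).
import torch
from pina import LabelTensor

class SinSin(torch.nn.Module):
    """Fixed extra feature: k(x, y) = sin(pi*x) * sin(pi*y)."""
    def forward(self, x):
        feature = torch.sin(torch.pi * x.extract(['x'])) * torch.sin(torch.pi * x.extract(['y']))
        return LabelTensor(feature, ['k'])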
@@ -170,19 +211,24 @@ def forward(self, x):
     layers=[10, 10],
     extra_features=[SinSin()])
 
-pinn_feat = PINN(problem, model_feat)
-trainer_feat = Trainer(pinn_feat, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False)  # we train on CPU and avoid model summary at beginning of training (optional)
+    logger=TensorBoardLogger("tutorial_logs"))  # we train on CPU and avoid model summary at beginning of training (optional)
 
 trainer_feat.train()
 
 # The predicted and exact solutions and the error between them are represented below.
 # We can easily note that now our network, having almost the same conditions as before, is able to gain additional orders of magnitude in accuracy.
 
-# In[7]:
+# In[10]:
 
 
-#plotter.plot(solver=pinn_feat)
+plt.figure(figsize=(12, 6))
+plot_solution(solver=pinn_feat)
 
 
 # ## Solving the problem with learnable extra-features PINNs
@@ -199,7 +245,7 @@ def forward(self, x):
 # where $\alpha$ and $\beta$ are the above-mentioned parameters.
 # Their implementation is quite trivial: by using the class `torch.nn.Parameter` we can define all the learnable parameters we need, and they are managed by the `autograd` module!
 
-# In[8]:
+# In[11]:
 
 
 class SinSinAB(torch.nn.Module):
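# NOTE (editor's sketch): the body of `SinSinAB` is also cut off. Following the description above
# (learnable parameters alpha and beta registered through `torch.nn.Parameter`), a plausible
# implementation is sketched below; the exact functional form is an assumption.
import torch
from pina import LabelTensor

class SinSinAB(torch.nn.Module):
    """Learnable extra feature: k(x, y) = beta * sin(alpha*x) * sin(alpha*y)."""
    def __init__(self):
        super().__init__()
        self.alpha = torch.nn.Parameter(torch.tensor([1.0]))
        self.beta = torch.nn.Parameter(torch.tensor([1.0]))

    def forward(self, x):
        feature = self.beta * torch.sin(self.alpha * x.extract(['x'])) * torch.sin(self.alpha * x.extract(['y']))
        return LabelTensor(feature, ['k'])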
@@ -219,34 +265,42 @@ def forward(self, x):
 
 # make model + solver + trainer
-model_lean = FeedForwardWithExtraFeatures(
+model_learn = FeedForwardWithExtraFeatures(
     input_dimensions=len(problem.input_variables) + 1,  # we add one because we also consider the extra-feature dimension
     output_dimensions=len(problem.output_variables),
     func=Softplus,
     layers=[10, 10],
     extra_features=[SinSinAB()])
 
-pinn_lean = PINN(problem, model_lean)
-trainer_learn = Trainer(pinn_lean, max_epochs=1000, accelerator='cpu', enable_model_summary=False)  # we train on CPU and avoid model summary at beginning of training (optional)
+    logger=TensorBoardLogger("tutorial_logs"))  # we train on CPU and avoid model summary at beginning of training (optional)
 
 # train
 trainer_learn.train()
 
 # Hmm, the final loss is not appreciably better than the previous model (with static extra features), despite the use of learnable parameters. This is mainly due to the over-parametrization of the network: there are many parameters to optimize during the training, and the model is unable to understand automatically that only the parameters of the extra feature (and not the weights/biases of the FFN) should be tuned in order to fit our problem. A longer training can be helpful, but in this case the fastest way to reach machine precision for solving the Poisson problem is to remove all the hidden layers in the `FeedForward`, keeping only the $\alpha$ and $\beta$ parameters of the extra feature.
 
-# In[9]:
+# In[12]:
 
 
 # make model + solver + trainer
-model_lean = FeedForwardWithExtraFeatures(
+model_learn = FeedForwardWithExtraFeatures(
     layers=[],
     func=Softplus,
     output_dimensions=len(problem.output_variables),
     input_dimensions=len(problem.input_variables) + 1,
     extra_features=[SinSinAB()])
 
-pinn_learn = PINN(problem, model_lean)
-trainer_learn = Trainer(pinn_learn, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False)  # we train on CPU and avoid model summary at beginning of training (optional)
+    logger=TensorBoardLogger("tutorial_logs"))  # we train on CPU and avoid model summary at beginning of training (optional)
 
 # train
 trainer_learn.train()
@@ -257,20 +311,14 @@ def forward(self, x):
 #
 # We conclude here by showing the graphical comparison of the unknown field and the loss trend for all the test cases presented here: the standard PINN, PINN with extra features, and PINN with learnable extra features.
 
-# In[10]:
-
-
-#plotter.plot(solver=pinn_learn)
-
-
 # Let us compare the training losses for the various types of training
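# NOTE (editor's sketch): the comparison cell itself is not part of this diff. With the
# `TensorBoardLogger` configured above, one simple in-notebook comparison is to print the final
# logged losses of the three trainers (the metric key names depend on the problem's conditions):
for name, tr in [('standard PINN', trainer),
                 ('PINN + extra feature', trainer_feat),
                 ('PINN + learnable extra feature', trainer_learn)]:
    print(f"{name}: final logged metrics = {tr.logged_metrics}")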