# Once the `Problem` class is initialized, we need to represent the differential equation in **PINA**. To do this, we load the **PINA** operators from the `pina.operator` module. Again, we'll consider Equation (1) and represent it in **PINA**:
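# Below is a minimal sketch of what such a residual can look like. Equation (1) is not reproduced in this excerpt, so the generic first-order ODE $\frac{du}{dx} = u$ is used here as a stand-in:

from pina.operator import grad


def ode_residual(input_, output_):
    # differentiate the network output u with respect to x using the PINA
    # gradient operator, then return the residual du/dx - u
    u_x = grad(output_, input_, components=['u'], d=['x'])
    return u_x - output_.extract(['u'])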
# Data for training can come in the form of direct numerical simulation results, or points in the domains. If we perform unsupervised learning, we just need the collocation points for training, i.e. the points where we want to evaluate the neural network. Sampling points in **PINA** is very easy; here we show three examples using the `.discretise_domain` method of the `AbstractProblem` class.
# In[3]:
# sampling 20 points in [0, 1] through discretization in all locations
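# A minimal sketch of the three sampling modes, assuming the `problem` instance defined above and the `mode`/`domains` keywords of `discretise_domain` (older PINA releases use `locations` instead of `domains`):

problem.discretise_domain(n=20, mode='grid', domains='all')    # equispaced grid
problem.discretise_domain(n=20, mode='random', domains='all')  # uniform random sampling
problem.discretise_domain(n=20, mode='lh', domains='all')      # latin hypercube sampling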
# Once we have defined the problem and generated the data, we can start the modelling. Here we will choose a `FeedForward` neural network available in `pina.model`, and we will train using the `PINN` solver from `pina.solver`. We highlight that this training is fairly simple; for more advanced use cases, consider the tutorials in the ***Physics Informed Neural Networks*** section of ***Tutorials***. For training we use the `Trainer` class from `pina.trainer`. Here we show a very short training and some methods for plotting the results. Notice that by default all relevant metrics (e.g. the MSE error during training) are tracked using a `lightning` logger, by default `CSVLogger`. If you want to track the metrics yourself without a logger, use `pina.callback.MetricTracker`.
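# A minimal sketch of the model and solver setup described above; the layer sizes are illustrative, not taken from the original tutorial:

from pina.model import FeedForward
from pina.solver import PINN

model = FeedForward(
    input_dimensions=len(problem.input_variables),
    output_dimensions=len(problem.output_variables),
    layers=[20, 20],  # two small hidden layers (illustrative choice)
)
pinn = PINN(problem=problem, model=model)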
from pina.trainer import Trainer
from lightning.pytorch.loggers import TensorBoardLogger

trainer = Trainer(
    solver=pinn,
    max_epochs=1500,
    logger=TensorBoardLogger('tutorial_logs'),
    accelerator='cpu',           # we train on CPU
    enable_model_summary=False,  # avoid the model summary at the beginning of training (optional)
)
# train
trainer.train()
# After the training we can inspect the logged metrics (by default **PINA** logs the mean square error residual loss). The logged metrics can be accessed online using one of the `Lightning` loggers, and the final loss can be accessed via `trainer.logged_metrics`.
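# For example, the final logged metrics (a dictionary of scalar tensors maintained by `Lightning`) can be printed directly:

print(trainer.logged_metrics)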
# As we can see the loss has not reached a minimum, suggesting that we could train for longer! Alternatively, we can also take a look at the loss using callbacks. Here we use `MetricTracker` from `pina.callback`:
# train, only on CPU, and avoid model summary at beginning of training (optional)
from pina.callback import MetricTracker

trainer = Trainer(
    solver=solver,
    max_epochs=40,
    accelerator='cpu',
    enable_model_summary=False,
    log_every_n_steps=-1,
    batch_size=5,
    train_size=1.0,
    val_size=0.0,
    test_size=0.0,
    callbacks=[MetricTracker()],  # track the metrics during training
)
trainer.train()
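# A short sketch of inspecting the tracked loss, assuming `MetricTracker` exposes its collected history through a `metrics` attribute and stores the loss under the 'train_loss' key (names may differ across PINA versions):

import matplotlib.pyplot as plt

tracked = trainer.callbacks[0].metrics  # the MetricTracker instance attached above
plt.plot(tracked['train_loss'], label='train_loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()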
225
231
226
232
@@ -236,8 +242,8 @@ class NeuralOperatorProblem(AbstractProblem):
236
242
# (plotting call comparing the reference solution with the neural operator prediction `no_sol[5]`)
# As we can see we can obtain nice results considering the small training time and the difficulty of the problem!
# Let's take a look at the training and testing error:
# In[7]:
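# A minimal sketch of one common way to compute these errors, the relative L2 norm; the tensors `u_train`, `u_test` (reference solutions) and `no_train`, `no_test` (neural operator predictions) are hypothetical stand-ins:

import torch

err_train = torch.linalg.norm(no_train - u_train) / torch.linalg.norm(u_train)
err_test = torch.linalg.norm(no_test - u_test) / torch.linalg.norm(u_test)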
print(f'Training error: {float(err_train):.3f}')
print(f'Testing error: {float(err_test):.3f}')
# As we can see, the error is pretty small, which agrees with the previous plots.
# ## What's next?
#
# Now you know how to solve a time-dependent neural operator problem in **PINA**! There are multiple directions you can go now:
#
# 1. Train the network for longer or with different layer sizes and assess the final accuracy
#
# 2. We provide a more challenging dataset, [Data_KS2.mat](dat/Data_KS2.mat), where $A_k \in [-0.5, 0.5]$, $\ell_k \in [1, 2, 3]$, $\phi_k \in [0, 2\pi]$, for longer training
#
# 3. Compare the performance of the different neural operators (you can even try to implement your favourite one!)