# A standard PINN approach would be to fit this model using a Feed Forward (fully connected) Neural Network. For a conventional fully-connected neural network it is easy to
# approximate a function $u$, given sufficient data inside the computational domain. However, solving high-frequency or multi-scale problems presents great challenges to PINNs, especially when the available data cannot capture the different scales.
#
# Below we run a simulation using the `PINN` solver and the self-adaptive `SAPINN` solver, using a [`FeedForward`](https://mathlab.github.io/PINA/_modules/pina/model/feed_forward.html#FeedForward) model. We use a `MultiStepLR` scheduler to decrease the learning rate slowly during training (it takes around 2 minutes to run on CPU).
# We can clearly see that the solution has not been learned by either solver. Indeed, the real problem is not the optimization strategy (i.e. the solver), but the model used to solve the problem. A simple `FeedForward` network can hardly handle multiple scales if not enough collocation points are used!
#
# We can also compute the $l_2$ relative error for the `PINN` and `SAPINN` solutions:
# In[5]:
# l2 loss from PINA losses
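The body of this cell is not shown in the excerpt. PINA ships its own loss classes, but a minimal torch-only sketch of a relative $l_2$ error (named `l2_loss` to match its use later in the tutorial) could look like:

```python
import torch

def l2_loss(pred, true):
    # relative l2 error: ||pred - true||_2 / ||true||_2
    return torch.linalg.norm(pred - true) / torch.linalg.norm(true)
```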
# In PINA we already have implemented the feature as a `layer` called [`FourierFeatureEmbedding`](https://mathlab.github.io/PINA/_rst/layers/fourier_embedding.html). Below we will build the *Multi-scale Fourier Feature Architecture*. In this architecture multiple Fourier feature embeddings (initialized with different $\sigma$)
# are applied to input coordinates and then passed through the same fully-connected neural network, before the outputs are finally concatenated with a linear layer.
# We will train the `MultiscaleFourierNet` with the `PINN` solver (feel free to also try our PINN variants: `SAPINN`, `GPINN`, `CompetitivePINN`, ...).
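As a library-free sketch of the architecture just described — PINA's `FourierFeatureEmbedding` layer plays the role of the embedding below, and the layer sizes and $\sigma$ values here are purely illustrative:

```python
import torch
import torch.nn as nn

class FourierEmbedding(nn.Module):
    # random Fourier features: x -> [sin(2*pi*xB), cos(2*pi*xB)], B ~ N(0, sigma^2)
    def __init__(self, in_dim, emb_dim, sigma):
        super().__init__()
        self.register_buffer("B", sigma * torch.randn(in_dim, emb_dim // 2))

    def forward(self, x):
        proj = 2 * torch.pi * x @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

class MultiscaleFourierNet(nn.Module):
    # embeddings with different sigma are passed through the same MLP;
    # the outputs are concatenated and combined by a final linear layer
    def __init__(self, sigmas=(1.0, 10.0), emb_dim=64, hidden=64):
        super().__init__()
        self.embeddings = nn.ModuleList(
            FourierEmbedding(1, emb_dim, s) for s in sigmas
        )
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.last = nn.Linear(hidden * len(sigmas), 1)

    def forward(self, x):
        feats = [self.mlp(emb(x)) for emb in self.embeddings]
        return self.last(torch.cat(feats, dim=-1))
```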
trainer = Trainer(multiscale_pinn, max_epochs=5000, accelerator='cpu', enable_model_summary=False, val_size=0., train_size=1., test_size=0.)  # we train on CPU and avoid model summary at beginning of training (optional)
trainer.train()
# Let us now plot the solution and compute the relative $l_2$ again!
# In[8]:
# plot the solution
pl.plot(multiscale_pinn, title='Solution PINN with MultiscaleFourierNet')
print(f'Relative l2 error PINN with MultiscaleFourierNet: {l2_loss(multiscale_pinn(pts), problem.truth_solution(pts)).item():.2%}')
# It is pretty clear that the network has learned the correct solution, with a very low error as well. Obviously, a longer training and a more expressive neural network could improve the results!
# In this tutorial we will use the `LidCavity` class from the [Smithers](https://github.com/mathLab/Smithers) library, which contains a set of parametric solutions of the Lid-driven cavity problem in a square domain. The dataset consists of 300 snapshots of the velocity magnitude $u$ and pressure $p$ fields. Each snapshot corresponds to a different value of the tangential velocity $\mu$ of the lid, which has been sampled uniformly between 0.01 m/s and 1 m/s.
# The results are promising! Now let's visualise them, comparing four random predicted snapshots to the true ones:
# In[10]:
import numpy as np
# Overall we have reached a good level of approximation while avoiding time-consuming training procedures. Let's try doing the same to predict the pressure snapshots:
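The model-construction cells are not included in this excerpt. As a rough, generic sketch of the kind of non-intrusive snapshot compression that underlies this sort of fast, training-light approximation (POD via a thin SVD; the snapshot matrix here is a synthetic stand-in, not the real data):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical stand-in for a snapshot matrix: rows = dofs, columns = snapshots
X = rng.standard_normal((200, 30))

# POD via thin SVD: the columns of U are the POD modes
U, S, Vt = np.linalg.svd(X, full_matrices=False)
rank = 5
modes = U[:, :rank]

# project the snapshots onto the reduced basis, then reconstruct
coeffs = modes.T @ X
X_rec = modes @ coeffs

# relative reconstruction error in the Frobenius norm
rel_err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
```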
# In[11]:
# create the model
# Unfortunately, here we obtain a very high relative test error, although this is likely due to the nature of the available data. Looking at the plots, we can see that the pressure field is subject to high variations between subsequent snapshots, especially here:
# In[12]:
fig, axs = plt.subplots(2, 3, figsize=(14, 6))
# Or here:
# In[13]:
fig, axs = plt.subplots(2, 3, figsize=(14, 6))
# Scrolling through the velocity snapshots, we can observe a more regular behaviour, with no such variations between subsequent snapshots. Moreover, if we decide not to consider the above-mentioned "problematic" snapshots, we can already observe a huge improvement: