
Commit 385c1f0

MatteB03ndem0 authored and committed
Update tutorials 1 through 7
1 parent 5f93bdd commit 385c1f0

File tree

14 files changed: +675 −288 lines changed


tutorials/tutorial1/tutorial.ipynb

Lines changed: 166 additions & 35 deletions
Large diffs are not rendered by default.

tutorials/tutorial1/tutorial.py

Lines changed: 21 additions & 31 deletions
@@ -53,7 +53,7 @@
 # What if our equation is also time-dependent? In this case, our `class` will inherit from both `SpatialProblem` and `TimeDependentProblem`:
 #

-# In[1]:
+# In[10]:


 ## routine needed to run the notebook on Google Colab
@@ -87,9 +87,9 @@ class TimeSpaceODE(SpatialProblem, TimeDependentProblem):

 # ### Write the problem class
 #
-# Once the `Problem` class is initialized, we need to represent the differential equation in **PINA**. In order to do this, we need to load the **PINA** operators from `pina.operators` module. Again, we'll consider Equation (1) and represent it in **PINA**:
+# Once the `Problem` class is initialized, we need to represent the differential equation in **PINA**. In order to do this, we need to load the **PINA** operators from the `pina.operator` module. Again, we'll consider Equation (1) and represent it in **PINA**:

-# In[2]:
+# In[1]:


 from pina.problem import SpatialProblem
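For reference, a hedged sketch of the pattern the paragraph above describes, writing a differential-equation residual with the PINA operators (the tutorial's actual `SimpleODE` equation is not shown in this hunk, and the `grad` signature is assumed):

from pina.operator import grad

def example_ode_residual(input_, output_):
    # sketch: residual of du/dx - u = 0, built from the network output
    u = output_.extract(['u'])
    u_x = grad(output_, input_, components=['u'], d=['x'])
    return u_x - u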
@@ -99,7 +99,7 @@ class TimeSpaceODE(SpatialProblem, TimeDependentProblem):
 from pina.equation import Equation, FixedValue

 import torch
-
+import matplotlib.pyplot as plt

 class SimpleODE(SpatialProblem):

@@ -147,7 +147,7 @@ def truth_solution(self, pts):
 #
 # Data for training can come in the form of direct numerical simulation results, or points in the domains. In case we perform unsupervised learning, we just need the collocation points for training, i.e. points where we want to evaluate the neural network. Sampling points in **PINA** is very easy; here we show three examples using the `.discretise_domain` method of the `AbstractProblem` class.

-# In[3]:
+# In[2]:


 # sampling 20 points in [0, 1] through discretization in all locations
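A minimal hedged sketch of the sampling calls mentioned above, using the tutorial's `problem` instance; the keyword names and mode strings ('grid', 'latin') are assumptions and may differ between PINA versions:

# sample collocation points for every condition of the problem
problem.discretise_domain(n=20, mode='grid', domains='all')
# latin hypercube sampling restricted to selected domains
problem.discretise_domain(n=20, mode='latin', domains=['D', 'x0'])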
@@ -163,7 +163,7 @@ def truth_solution(self, pts):

 # We are going to use latin hypercube points for sampling. We need to sample in all the conditions' domains. In our case we sample in `D` and `x0`.

-# In[4]:
+# In[3]:


 # sampling for training
@@ -173,7 +173,7 @@ def truth_solution(self, pts):

 # The points are saved in a Python `dict`, and can be accessed through the attribute `discretised_domains` of the problem

-# In[5]:
+# In[4]:


 print('Input points:', problem.discretised_domains)
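A small hedged sketch of inspecting the sampled points, assuming `discretised_domains` behaves like a dict of `LabelTensor`s as the print above suggests:

for name, pts in problem.discretised_domains.items():
    # e.g. 'D' and 'x0', each with its variable labels and shape
    print(name, pts.labels, tuple(pts.shape))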
@@ -182,10 +182,9 @@ def truth_solution(self, pts):

 # To visualize the sampled points we can use `matplotlib.pyplot`:

-# In[6]:
+# In[5]:


-import matplotlib.pyplot as plt
 variables=problem.spatial_variables
 fig = plt.figure()
 proj = "3d" if len(variables) == 3 else None
@@ -197,15 +196,15 @@ def truth_solution(self, pts):

 # ## Perform a small training

-# Once we have defined the problem and generated the data we can start the modelling. Here we will choose a `FeedForward` neural network available in `pina.model`, and we will train using the `PINN` solver from `pina.solver`. We highlight that this training is fairly simple, for more advanced stuff consider the tutorials in the ***Physics Informed Neural Networks*** section of ***Tutorials***. For training we use the `Trainer` class from `pina.trainer`. Here we show a very short training and some method for plotting the results. Notice that by default all relevant metrics (e.g. MSE error during training) are going to be tracked using a `lightning` logger, by default `CSVLogger`. If you want to track the metric by yourself without a logger, use `pina.callbacks.MetricTracker`.
+# Once we have defined the problem and generated the data we can start the modelling. Here we will choose a `FeedForward` neural network available in `pina.model`, and we will train using the `PINN` solver from `pina.solver`. We highlight that this training is fairly simple; for more advanced topics consider the tutorials in the ***Physics Informed Neural Networks*** section of ***Tutorials***. For training we use the `Trainer` class from `pina.trainer`. Here we show a very short training and some methods for plotting the results. Notice that by default all relevant metrics (e.g. MSE error during training) are tracked using a `lightning` logger, by default `CSVLogger`. If you want to track the metrics yourself without a logger, use `pina.callback.MetricTracker`.

-# In[7]:
+# In[6]:


 from pina import Trainer
 from pina.solver import PINN
 from pina.model import FeedForward
-from pina.callback import MetricTracker
+from lightning.pytorch.loggers import TensorBoardLogger


 # build the model
@@ -220,15 +219,15 @@ def truth_solution(self, pts):
 pinn = PINN(problem, model)

 # create the trainer
-trainer = Trainer(solver=pinn, max_epochs=1500, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
+trainer = Trainer(solver=pinn, max_epochs=1500, logger=TensorBoardLogger('tutorial_logs'), accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)

 # train
 trainer.train()


-# After the training we can inspect trainer logged metrics (by default **PINA** logs mean square error residual loss). The logged metrics can be accessed online using one of the `Lightinig` loggers. The final loss can be accessed by `trainer.logged_metrics`
+# After the training we can inspect the trainer's logged metrics (by default **PINA** logs the mean square error residual loss). The logged metrics can be accessed online using one of the `Lightning` loggers. The final loss can be accessed by `trainer.logged_metrics`

-# In[8]:
+# In[7]:


 # inspecting final loss
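For reference, a hedged sketch of inspecting the final logged metrics; `trainer.logged_metrics` is a Lightning dict whose key names depend on what the solver logs:

for name, value in trainer.logged_metrics.items():
    print(f'{name}: {float(value):.3e}')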
@@ -237,7 +236,7 @@ def truth_solution(self, pts):

 # By using `matplotlib` we can also do some qualitative plots of the solution.

-# In[9]:
+# In[8]:


 pts = pinn.problem.spatial_domain.sample(256, 'grid', variables='x')
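The plotting cell is split across hunks; a hedged sketch of the kind of comparison it builds, assuming the solver and the truth solution can both be evaluated on `pts`:

with torch.no_grad():
    prediction = pinn(pts)
    truth = pinn.problem.truth_solution(pts)
x = pts.extract('x').detach().flatten()
plt.plot(x, prediction.flatten(), label='PINN solution')
plt.plot(x, truth.flatten(), '--', label='True solution')
plt.xlabel('x')
plt.legend()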
@@ -250,24 +249,15 @@ def truth_solution(self, pts):
 plt.legend()


-# The solution is overlapped with the actual one, and they are barely indistinguishable. We can also plot easily the loss:
-
-# In[10]:
+# The solution is overlapped with the actual one, and they are barely indistinguishable. We can also take a look at the loss using `TensorBoard`:

+# In[9]:

-list_ = [
-    idx for idx, s in enumerate(trainer.callbacks)
-    if isinstance(s, MetricTracker)
-]
-trainer_metrics = trainer.callbacks[list_[0]].metrics

-loss = trainer_metrics['val_loss']
-epochs = range(len(loss))
-plt.plot(epochs, loss.cpu())
-# plotting
-plt.xlabel('epoch')
-plt.ylabel('loss')
-plt.yscale('log')
+# Load the TensorBoard extension
+get_ipython().run_line_magic('load_ext', 'tensorboard')
+# Show saved losses
+get_ipython().run_line_magic('tensorboard', "--logdir 'tutorial_logs'")


 # As we can see the loss has not reached a minimum, suggesting that we could train for longer
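A hedged sketch of one way to train for longer, simply giving the same solver a larger epoch budget in a fresh trainer (whether optimizer state carries over depends on the PINA/Lightning version):

trainer = Trainer(solver=pinn, max_epochs=5000, logger=TensorBoardLogger('tutorial_logs'),
                  accelerator='cpu', enable_model_summary=False)
trainer.train()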

tutorials/tutorial2/tutorial.ipynb

Lines changed: 128 additions & 56 deletions
Large diffs are not rendered by default.

tutorials/tutorial2/tutorial.py

Lines changed: 82 additions & 34 deletions
@@ -9,7 +9,7 @@
 #
 # First of all, some useful imports.

-# In[1]:
+# In[4]:


 ## routine needed to run the notebook on Google Colab
@@ -23,6 +23,8 @@

 import torch
 from torch.nn import Softplus
+import matplotlib.pyplot as plt
+import warnings

 from pina.problem import SpatialProblem
 from pina.operator import laplacian
@@ -31,9 +33,13 @@
 from pina.trainer import Trainer
 from pina.domain import CartesianDomain
 from pina.equation import Equation, FixedValue
-from pina import Condition, LabelTensor#,Plotter
+from pina import Condition, LabelTensor
 from pina.callback import MetricTracker

+from lightning.pytorch.loggers import TensorBoardLogger
+
+warnings.filterwarnings('ignore')
+

 # ## The problem definition

@@ -49,7 +55,7 @@
 # The Poisson problem is written in **PINA** code as a class. The equations are written as *conditions* that should be satisfied in the corresponding domains. The *truth_solution*
 # is the exact solution which will be compared with the predicted one.

-# In[2]:
+# In[5]:


 class Poisson(SpatialProblem):
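The body of the `Poisson` class is not shown in this diff; a hedged sketch of the general shape such a problem class takes, using the imports from the hunks above (constructor keywords such as `domain=`/`equation=` are assumptions and may differ between PINA versions):

class PoissonSketch(SpatialProblem):
    output_variables = ['u']
    spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})

    def laplace_equation(input_, output_):
        # residual of laplacian(u) + 2*pi^2*sin(pi*x)*sin(pi*y) = 0
        force = (2 * torch.pi**2 *
                 torch.sin(torch.pi * input_.extract(['x'])) *
                 torch.sin(torch.pi * input_.extract(['y'])))
        return laplacian(output_, input_, components=['u'], d=['x', 'y']) + force

    conditions = {
        # only one boundary shown for brevity; a full problem constrains all sides
        'gamma_top': Condition(domain=CartesianDomain({'x': [0, 1], 'y': 1}),
                               equation=FixedValue(0.0)),
        'D': Condition(domain=CartesianDomain({'x': [0, 1], 'y': [0, 1]}),
                       equation=Equation(laplace_equation)),
    }

    def truth_solution(self, pts):
        # exact solution compared against the network prediction
        return (torch.sin(torch.pi * pts.extract(['x'])) *
                torch.sin(torch.pi * pts.extract(['y'])))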
@@ -90,33 +96,68 @@ def poisson_sol(self, pts):

 # After the problem, the feed-forward neural network is defined, through the class `FeedForward`. This neural network takes as input the coordinates (in this case $x$ and $y$) and provides the unknown field of the Poisson problem. The residuals of the equations are evaluated at several sampling points (which the user can manipulate using the method `CartesianDomain_pts`) and the loss minimized by the neural network is the sum of the residuals.
 #
-# In this tutorial, the neural network is composed by two hidden layers of 10 neurons each, and it is trained for 1000 epochs. We use the `MetricTracker` class to track the metrics during training.
+# In this tutorial, the neural network is composed of two hidden layers of 10 neurons each, and it is trained for 1000 epochs with a learning rate of 0.006 and $l_2$ weight regularization set to $10^{-8}$. These parameters can be modified as desired.

-# In[4]:
+# In[6]:


 # make model + solver + trainer
+from pina.optim import TorchOptimizer
 model = FeedForward(
     layers=[10, 10],
     func=Softplus,
     output_dimensions=len(problem.output_variables),
     input_dimensions=len(problem.input_variables)
 )
-pinn = PINN(problem, model)
-trainer = Trainer(pinn, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
+pinn = PINN(problem, model, optimizer=TorchOptimizer(torch.optim.Adam, lr=0.006, weight_decay=1e-8))
+trainer = Trainer(pinn, max_epochs=1000, accelerator='cpu', enable_model_summary=False,
+                  train_size=1.0,
+                  val_size=0.0,
+                  test_size=0.0,
+                  logger=TensorBoardLogger("tutorial_logs")
+                  ) # we train on CPU and avoid model summary at beginning of training (optional)

 # train
 trainer.train()


-# Now the `Plotter` class is used to plot the results.
+# Now we plot the results using `matplotlib`.
 # The solution predicted by the neural network is plotted on the left, the exact one is represented at the center and on the right the error between the exact and the predicted solutions is shown.

-# In[5]:
+# In[7]:


-#plotter = Plotter()
-#plotter.plot(solver=pinn)
+@torch.no_grad()
+def plot_solution(solver):
+    # get the problem
+    problem = solver.problem
+    # get spatial points
+    spatial_samples = problem.spatial_domain.sample(30, "grid")
+    # compute pinn solution, true solution and absolute difference
+    data = {
+        "PINN solution": solver(spatial_samples),
+        "True solution": problem.truth_solution(spatial_samples),
+        "Absolute Difference": torch.abs(
+            solver(spatial_samples) - problem.truth_solution(spatial_samples)
+        )
+    }
+    # plot the solution
+    for idx, (title, field) in enumerate(data.items()):
+        plt.subplot(1, 3, idx + 1)
+        plt.title(title)
+        plt.tricontourf(  # convert to torch tensor + flatten
+            spatial_samples.extract("x").tensor.flatten(),
+            spatial_samples.extract("y").tensor.flatten(),
+            field.tensor.flatten(),
+        )
+        plt.colorbar(), plt.tight_layout()
+
+
+# In[8]:
+
+
+plt.figure(figsize=(12, 6))
+plot_solution(solver=pinn)


 # ## Solving the problem with extra-features PINNs
@@ -135,7 +176,7 @@ def poisson_sol(self, pts):
 #
 # Finally, we perform the same training as before: the problem is `Poisson`, the network is composed of the same number of neurons, and the optimizer parameters are equal to the previous test; the only change is the new extra feature.

-# In[6]:
+# In[9]:


 class SinSin(torch.nn.Module):
@@ -170,19 +211,24 @@ def forward(self, x):
     layers=[10, 10],
     extra_features=[SinSin()])

-pinn_feat = PINN(problem, model_feat)
-trainer_feat = Trainer(pinn_feat, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
+pinn_feat = PINN(problem, model_feat, optimizer=TorchOptimizer(torch.optim.Adam, lr=0.006, weight_decay=1e-8))
+trainer_feat = Trainer(pinn_feat, max_epochs=1000, accelerator='cpu', enable_model_summary=False,
+                       train_size=1.0,
+                       val_size=0.0,
+                       test_size=0.0,
+                       logger=TensorBoardLogger("tutorial_logs")) # we train on CPU and avoid model summary at beginning of training (optional)

 trainer_feat.train()


 # The predicted and exact solutions and the error between them are represented below.
 # We can easily note that now our network, having almost the same configuration as before, is able to gain additional orders of magnitude in accuracy.

-# In[7]:
+# In[10]:


-#plotter.plot(solver=pinn_feat)
+plt.figure(figsize=(12, 6))
+plot_solution(solver=pinn_feat)


 # ## Solving the problem with learnable extra-features PINNs
@@ -199,7 +245,7 @@ def forward(self, x):
 # where $\alpha$ and $\beta$ are the abovementioned parameters.
 # Their implementation is quite trivial: by using the class `torch.nn.Parameter` we can define all the learnable parameters we need, and they are managed by the `autograd` module!

-# In[8]:
+# In[11]:


 class SinSinAB(torch.nn.Module):
@@ -219,34 +265,42 @@ def forward(self, x):


 # make model + solver + trainer
-model_lean = FeedForwardWithExtraFeatures(
+model_learn = FeedForwardWithExtraFeatures(
     input_dimensions=len(problem.input_variables) + 1, # we add one as we also consider the extra feature dimension
     output_dimensions=len(problem.output_variables),
     func=Softplus,
     layers=[10, 10],
     extra_features=[SinSinAB()])

-pinn_lean = PINN(problem, model_lean)
-trainer_learn = Trainer(pinn_lean, max_epochs=1000, accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
+pinn_learn = PINN(problem, model_learn, optimizer=TorchOptimizer(torch.optim.Adam, lr=0.006, weight_decay=1e-8))
+trainer_learn = Trainer(pinn_learn, max_epochs=1000, enable_model_summary=False,
+                        train_size=1.0,
+                        val_size=0.0,
+                        test_size=0.0,
+                        logger=TensorBoardLogger("tutorial_logs")) # we train on CPU and avoid model summary at beginning of training (optional)

 # train
 trainer_learn.train()


 # Hmm, the final loss is not appreciably better than the previous model (with static extra features), despite the use of learnable parameters. This is mainly due to the over-parametrization of the network: there are many parameters to optimize during the training, and the model is unable to understand automatically that only the parameters of the extra feature (and not the weights/biases of the FFN) should be tuned in order to fit our problem. A longer training can be helpful, but in this case the faster way to reach machine precision for solving the Poisson problem is to remove all the hidden layers in the `FeedForward`, keeping only the $\alpha$ and $\beta$ parameters of the extra feature.

-# In[9]:
+# In[12]:


 # make model + solver + trainer
-model_lean= FeedForwardWithExtraFeatures(
+model_learn = FeedForwardWithExtraFeatures(
     layers=[],
     func=Softplus,
     output_dimensions=len(problem.output_variables),
     input_dimensions=len(problem.input_variables)+1,
     extra_features=[SinSinAB()])
-pinn_learn = PINN(problem, model_lean)
-trainer_learn = Trainer(pinn_learn, max_epochs=1000, callbacks=[MetricTracker()], accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
+pinn_learn = PINN(problem, model_learn, optimizer=TorchOptimizer(torch.optim.Adam, lr=0.006, weight_decay=1e-8))
+trainer_learn = Trainer(pinn_learn, max_epochs=1000, accelerator='cpu', enable_model_summary=False,
+                        train_size=1.0,
+                        val_size=0.0,
+                        test_size=0.0,
+                        logger=TensorBoardLogger("tutorial_logs")) # we train on CPU and avoid model summary at beginning of training (optional)

 # train
 trainer_learn.train()
@@ -257,20 +311,14 @@ def forward(self, x):
 #
 # We conclude here by showing the graphical comparison of the unknown field and the loss trend for all the test cases presented here: the standard PINN, the PINN with extra features, and the PINN with learnable extra features.

-# In[10]:
-
-
-#plotter.plot(solver=pinn_learn)
-
-
 # Let us compare the training losses for the various types of training

-# In[11]:
+# In[13]:


-#plotter.plot_loss(trainer, logy=True, label='Standard')
-#plotter.plot_loss(trainer_feat, logy=True,label='Static Features')
-#plotter.plot_loss(trainer_learn, logy=True, label='Learnable Features')
+# Load the TensorBoard extension
+get_ipython().run_line_magic('load_ext', 'tensorboard')
+get_ipython().run_line_magic('tensorboard', "--logdir 'tutorial_logs'")


 # ## What's next?
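Tying back to the learnable extra-features discussion in the tutorial-2 diff above, a hedged sketch (not the repository's actual `SinSinAB`) of how `torch.nn.Parameter` makes the feature's $\alpha$ and $\beta$ trainable:

class SinSinABSketch(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # registered as Parameters so autograd and the optimizer can update them
        self.alpha = torch.nn.Parameter(torch.tensor([1.0]))
        self.beta = torch.nn.Parameter(torch.tensor([1.0]))

    def forward(self, x):
        t = (self.alpha *
             torch.sin(self.beta * torch.pi * x.extract(['x'])) *
             torch.sin(self.beta * torch.pi * x.extract(['y'])))
        return LabelTensor(t, ['sin_sin_ab'])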
