
Commit 5183379

MatteB03ndem0 authored and committed
Updates to tutorial and run post codacy changes
1 parent 610ff1b

27 files changed (+952, -393 lines)


tutorials/tutorial1/tutorial.ipynb

Lines changed: 143 additions & 50 deletions
Large diffs are not rendered by default.

tutorials/tutorial1/tutorial.py

Lines changed: 65 additions & 16 deletions
@@ -53,7 +53,7 @@
 # What if our equation is also time-dependent? In this case, our `class` will inherit from both `SpatialProblem` and `TimeDependentProblem`:
 #

-# In[10]:
+# In[1]:


 ## routine needed to run the notebook on Google Colab
@@ -65,9 +65,13 @@
 if IN_COLAB:
     get_ipython().system('pip install "pina-mathlab"')

+import warnings
+
 from pina.problem import SpatialProblem, TimeDependentProblem
 from pina.domain import CartesianDomain

+warnings.filterwarnings('ignore')
+
 class TimeSpaceODE(SpatialProblem, TimeDependentProblem):

     output_variables = ['u']
@@ -89,18 +93,18 @@ class TimeSpaceODE(SpatialProblem, TimeDependentProblem):
 #
 # Once the `Problem` class is initialized, we need to represent the differential equation in **PINA**. In order to do this, we need to load the **PINA** operators from the `pina.operator` module. Again, we'll consider Equation (1) and represent it in **PINA**:

-# In[1]:
+# In[2]:


+import torch
+import matplotlib.pyplot as plt
+
 from pina.problem import SpatialProblem
 from pina.operator import grad
 from pina import Condition
 from pina.domain import CartesianDomain
 from pina.equation import Equation, FixedValue

-import torch
-import matplotlib.pyplot as plt
-
 class SimpleODE(SpatialProblem):

     output_variables = ['u']
@@ -147,7 +151,7 @@ def truth_solution(self, pts):
 #
 # Data for training can come in the form of direct numerical simulation results, or points in the domains. In case we perform unsupervised learning, we just need the collocation points for training, i.e. points where we want to evaluate the neural network. Sampling points in **PINA** is very easy; here we show three examples using the `.discretise_domain` method of the `AbstractProblem` class.

-# In[2]:
+# In[3]:


 # sampling 20 points in [0, 1] through discretization in all locations
@@ -163,7 +167,7 @@ def truth_solution(self, pts):

 # We are going to use Latin hypercube points for sampling. We need to sample in all the conditions' domains. In our case we sample in `D` and `x0`.

-# In[3]:
+# In[4]:


 # sampling for training
@@ -173,7 +177,7 @@ def truth_solution(self, pts):

 # The points are saved in a Python `dict`, and can be accessed by calling the attribute `input_pts` of the problem.

-# In[4]:
+# In[5]:


 print('Input points:', problem.discretised_domains)
@@ -182,7 +186,7 @@ def truth_solution(self, pts):

 # To visualize the sampled points we can use `matplotlib.pyplot`:

-# In[5]:
+# In[6]:


 variables=problem.spatial_variables
@@ -198,13 +202,14 @@ def truth_solution(self, pts):

 # Once we have defined the problem and generated the data we can start the modelling. Here we will choose a `FeedForward` neural network available in `pina.model`, and we will train using the `PINN` solver from `pina.solver`. We highlight that this training is fairly simple; for more advanced topics consider the tutorials in the ***Physics Informed Neural Networks*** section of ***Tutorials***. For training we use the `Trainer` class from `pina.trainer`. Here we show a very short training and some methods for plotting the results. Notice that by default all relevant metrics (e.g. MSE error during training) are going to be tracked using a `lightning` logger, by default `CSVLogger`. If you want to track the metrics yourself without a logger, use `pina.callback.MetricTracker`.

-# In[6]:
+# In[7]:


 from pina import Trainer
 from pina.solver import PINN
 from pina.model import FeedForward
 from lightning.pytorch.loggers import TensorBoardLogger
+from pina.optim import TorchOptimizer


 # build the model
@@ -216,18 +221,23 @@ def truth_solution(self, pts):
 )

 # create the PINN object
-pinn = PINN(problem, model)
+pinn = PINN(problem, model, TorchOptimizer(torch.optim.Adam, lr=0.005))

 # create the trainer
-trainer = Trainer(solver=pinn, max_epochs=1500, logger=TensorBoardLogger('tutorial_logs'), accelerator='cpu', enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
+trainer = Trainer(solver=pinn, max_epochs=1500, logger=TensorBoardLogger('tutorial_logs'),
+                  accelerator='cpu',
+                  train_size=1.0,
+                  test_size=0.0,
+                  val_size=0.0,
+                  enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)

 # train
 trainer.train()


 # After the training we can inspect the trainer's logged metrics (by default **PINA** logs the mean square error residual loss). The logged metrics can be accessed online using one of the `Lightning` loggers. The final loss can be accessed through `trainer.logged_metrics`.

-# In[7]:
+# In[8]:


 # inspecting final loss
@@ -236,7 +246,7 @@ def truth_solution(self, pts):

 # By using `matplotlib` we can also do some qualitative plots of the solution.

-# In[8]:
+# In[9]:


 pts = pinn.problem.spatial_domain.sample(256, 'grid', variables='x')
@@ -251,7 +261,7 @@ def truth_solution(self, pts):

 # The solution overlaps the actual one, and the two are barely distinguishable. We can also take a look at the loss using `TensorBoard`:

-# In[9]:
+# In[10]:


 # Load the TensorBoard extension
@@ -260,7 +270,46 @@ def truth_solution(self, pts):
 get_ipython().run_line_magic('tensorboard', "--logdir 'tutorial_logs'")


-# As we can see the loss has not reached a minimum, suggesting that we could train for longer
+# As we can see the loss has not reached a minimum, suggesting that we could train for longer! Alternatively, we can also take a look at the loss using callbacks. Here we use `MetricTracker` from `pina.callback`:
+
+# In[11]:
+
+
+from pina.callback import MetricTracker
+
+# create the model
+newmodel = FeedForward(
+    layers=[10, 10],
+    func=torch.nn.Tanh,
+    output_dimensions=len(problem.output_variables),
+    input_dimensions=len(problem.input_variables)
+)
+
+# create the PINN object
+newpinn = PINN(problem, newmodel, optimizer=TorchOptimizer(torch.optim.Adam, lr=0.005))
+
+# create the trainer
+newtrainer = Trainer(solver=newpinn, max_epochs=1500, logger=True, # enable parameter logging
+                     callbacks=[MetricTracker()],
+                     accelerator='cpu',
+                     train_size=1.0,
+                     test_size=0.0,
+                     val_size=0.0,
+                     enable_model_summary=False) # we train on CPU and avoid model summary at beginning of training (optional)
+
+# train
+newtrainer.train()
+
+# plot loss
+trainer_metrics = newtrainer.callbacks[0].metrics
+loss = trainer_metrics['train_loss']
+epochs = range(len(loss))
+plt.plot(epochs, loss.cpu())
+# plotting
+plt.xlabel('epoch')
+plt.ylabel('loss')
+plt.yscale('log')
+

 # ## What's next?
 #
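Taken together, the tutorial1 changes pin the optimizer explicitly through `pina.optim.TorchOptimizer`, route every sampled point to training via the new `train_size`/`val_size`/`test_size` arguments, and track the loss with `MetricTracker`. What follows is a condensed sketch of the post-commit training setup, assembled only from the added lines above; it assumes `problem` is the discretised `SimpleODE` instance defined earlier in the tutorial.

import torch
from pina import Trainer
from pina.solver import PINN
from pina.model import FeedForward
from pina.optim import TorchOptimizer
from pina.callback import MetricTracker

# feed-forward network sized from the problem's input/output variables
model = FeedForward(
    layers=[10, 10],
    func=torch.nn.Tanh,
    output_dimensions=len(problem.output_variables),
    input_dimensions=len(problem.input_variables),
)

# explicit Adam wrapper introduced by this commit
# (previously the diff shows PINN(problem, model) with no optimizer argument)
pinn = PINN(problem, model, optimizer=TorchOptimizer(torch.optim.Adam, lr=0.005))

# all collocation points go to training; no validation or test split
trainer = Trainer(
    solver=pinn,
    max_epochs=1500,
    accelerator='cpu',
    train_size=1.0,
    val_size=0.0,
    test_size=0.0,
    logger=True,                  # enable parameter logging
    callbacks=[MetricTracker()],  # track losses without an external logger
    enable_model_summary=False,
)
trainer.train()

# tracked training loss, available through the callback after training
loss = trainer.callbacks[0].metrics['train_loss']

The `train_size=1.0` split fits the physics-informed setting: collocation points are the training data, and accuracy is checked against the known `truth_solution` rather than a held-out set.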

tutorials/tutorial10/tutorial.ipynb

Lines changed: 18 additions & 12 deletions
Large diffs are not rendered by default.

tutorials/tutorial10/tutorial.py

Lines changed: 12 additions & 6 deletions
@@ -32,6 +32,7 @@

 import torch
 import matplotlib.pyplot as plt
+import warnings

 from scipy import io
 from pina import Condition, LabelTensor
@@ -40,6 +41,8 @@
 from pina.solver import SupervisedSolver
 from pina.trainer import Trainer

+warnings.filterwarnings('ignore')
+

 # ## Data Generation
 #
@@ -220,7 +223,10 @@ class NeuralOperatorProblem(AbstractProblem):
 # initialize solver
 solver = SupervisedSolver(problem=problem, model=model)
 # train, only CPU and avoid model summary at beginning of training (optional)
-trainer = Trainer(solver=solver, max_epochs=40, accelerator='cpu', enable_model_summary=False, log_every_n_steps=-1, batch_size=5) # we train on CPU and avoid model summary at beginning of training (optional)
+trainer = Trainer(solver=solver, max_epochs=40, accelerator='cpu', enable_model_summary=False, log_every_n_steps=-1, batch_size=5, # we train on CPU and avoid model summary at beginning of training (optional)
+                  train_size=1.0,
+                  val_size=0.0,
+                  test_size=0.0)
 trainer.train()

@@ -236,8 +242,8 @@ class NeuralOperatorProblem(AbstractProblem):
                   no_sol=no_sol[5])


-# As we can see we can obtain nice result considering the small trainint time and the difficulty of the problem!
-# Let's see how the training and testing error:
+# As we can see we can obtain a nice result considering the small training time and the difficulty of the problem!
+# Let's take a look at the training and testing error:

 # In[7]:
@@ -255,14 +261,14 @@ class NeuralOperatorProblem(AbstractProblem):
 print(f'Testing error: {float(err_test):.3f}')


-# as we can see the error is pretty small, which agrees with what we can see from the previous plots.
+# As we can see the error is pretty small, which agrees with what we can see from the previous plots.

 # ## What's next?
 #
 # Now you know how to solve a time-dependent neural operator problem in **PINA**! There are multiple directions you can go now:
 #
-# 1. Train the network for longer or with different layer sizes and assert the finaly accuracy
+# 1. Train the network for longer or with different layer sizes and assess the final accuracy
 #
-# 2. We left a more challenging dataset [Data_KS2.mat](dat/Data_KS2.mat) where $A_k \in [-0.5, 0.5]$, $\ell_k \in [1, 2, 3]$, $\phi_k \in [0, 2\pi]$ for loger training
+# 2. We left a more challenging dataset [Data_KS2.mat](dat/Data_KS2.mat) where $A_k \in [-0.5, 0.5]$, $\ell_k \in [1, 2, 3]$, $\phi_k \in [0, 2\pi]$ for longer training
 #
 # 3. Compare the performance between the different neural operators (you can even try to implement your favourite one!)
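The same explicit split appears in the tutorial10 supervised training. Below is a minimal sketch of the post-commit call, assuming `problem` and `model` are the `NeuralOperatorProblem` instance and the neural-operator model built earlier in that tutorial:

from pina.trainer import Trainer
from pina.solver import SupervisedSolver

solver = SupervisedSolver(problem=problem, model=model)

# short CPU run; batch_size and log_every_n_steps are unchanged from the original call
trainer = Trainer(
    solver=solver,
    max_epochs=40,
    accelerator='cpu',
    enable_model_summary=False,
    log_every_n_steps=-1,
    batch_size=5,
    train_size=1.0,  # whole dataset used for training,
    val_size=0.0,    # consistent with the tutorial1 change
    test_size=0.0,
)
trainer.train()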

0 commit comments