Commit 3729e14

[Doc] Add evaluation for xpinn and add to homepage (#912)
* add evaluation for xpinn and refine doc
* update Examples in AllenCahn docstring
1 parent 92c654d commit 3729e14

10 files changed (+110, -100 lines)

README.md

Lines changed: 2 additions & 1 deletion

@@ -27,6 +27,7 @@ PaddleScience 是一个基于深度学习框架 PaddlePaddle 开发的科学计

| 问题类型 | 案例名称 | 优化算法 | 模型类型 | 训练方式 | 数据集 | 参考资料 |
|-----|---------|-----|---------|----|---------|---------|
+| 相场方程 | [Allen-Cahn](https://paddlescience-docs.readthedocs.io/zh/latest/zh/examples/allen_cahn) | 机理驱动 | MLP | 无监督学习 | [Data](https://paddle-org.bj.bcebos.com/paddlescience/datasets/AllenCahn/allen_cahn.mat) | [Paper](https://arxiv.org/pdf/2402.00326) |
| 微分方程 | [拉普拉斯方程](https://paddlescience-docs.readthedocs.io/zh/latest/zh/examples/laplace2d) | 机理驱动 | MLP | 无监督学习 | - | - |
| 微分方程 | [伯格斯方程](https://paddlescience-docs.readthedocs.io/zh/latest/zh/examples/deephpms) | 机理驱动 | MLP | 无监督学习 | [Data](https://github.com/maziarraissi/DeepHPMs/tree/master/Data) | [Paper](https://arxiv.org/pdf/1801.06637.pdf) |
| 微分方程 | [非线性偏微分方程](https://paddlescience-docs.readthedocs.io/zh/latest/zh/examples/pirbn) | 机理驱动 | PIRBN | 无监督学习 | - | [Paper](https://arxiv.org/abs/2304.06234) |
@@ -38,7 +39,7 @@ PaddleScience 是一个基于深度学习框架 PaddlePaddle 开发的科学计
| 微分方程 | [分数阶微分方程](https://github.com/PaddlePaddle/PaddleScience/blob/develop/examples/fpde/fractional_poisson_2d.py) | 机理驱动 | MLP | 无监督学习 | - | - |
| 光孤子 | [Optical soliton](https://paddlescience-docs.readthedocs.io/zh/latest/zh/examples/nlsmb) | 机理驱动 | MLP | 无监督学习 | - | [Paper](https://doi.org/10.1007/s11071-023-08824-w)|
| 光纤怪波 | [Optical rogue wave](https://paddlescience-docs.readthedocs.io/zh/latest/zh/examples/nlsmb) | 机理驱动 | MLP | 无监督学习 | - | [Paper](https://doi.org/10.1007/s11071-023-08824-w)|
-| 相场方程 | [Allen-Cahn](https://paddlescience-docs.readthedocs.io/zh/latest/zh/examples/allen_cahn) | 机理驱动 | MLP | 无监督学习 | [Data](https://paddle-org.bj.bcebos.com/paddlescience/datasets/AllenCahn/allen_cahn.mat) | [Paper](https://arxiv.org/pdf/2402.00326) |
+| 域分解 | [XPINN](https://paddlescience-docs.readthedocs.io/zh/latest/zh/examples/xpinns) | 机理驱动 | MLP | 无监督学习 | - | [Paper](https://doi.org/10.4208/cicp.OA-2020-0164)|

<br>
<p align="center"><b>技术科学(AI for Technology)</b></p>

docs/index.md

Lines changed: 2 additions & 1 deletion

@@ -72,6 +72,7 @@

| 问题类型 | 案例名称 | 优化算法 | 模型类型 | 训练方式 | 数据集 | 参考资料 |
|-----|---------|-----|---------|----|---------|---------|
+| 相场方程 | [Allen-Cahn](./zh/examples/allen_cahn.md) | 机理驱动 | MLP | 无监督学习 | [Data](https://paddle-org.bj.bcebos.com/paddlescience/datasets/AllenCahn/allen_cahn.mat) | [Paper](https://arxiv.org/pdf/2402.00326) |
| 微分方程 | [拉普拉斯方程](./zh/examples/laplace2d.md) | 机理驱动 | MLP | 无监督学习 | - | - |
| 微分方程 | [伯格斯方程](./zh/examples/deephpms.md) | 机理驱动 | MLP | 无监督学习 | [Data](https://github.com/maziarraissi/DeepHPMs/tree/master/Data) | [Paper](https://arxiv.org/pdf/1801.06637.pdf) |
| 微分方程 | [非线性偏微分方程](./zh/examples/pirbn.md) | 机理驱动 | PIRBN | 无监督学习 | - | [Paper](https://arxiv.org/abs/2304.06234) |
@@ -83,7 +84,7 @@
| 微分方程 | [分数阶微分方程](https://github.com/PaddlePaddle/PaddleScience/blob/develop/examples/fpde/fractional_poisson_2d.py) | 机理驱动 | MLP | 无监督学习 | - | - |
| 光孤子 | [Optical soliton](./zh/examples/nlsmb.md) | 机理驱动 | MLP | 无监督学习 | - | [Paper](https://doi.org/10.1007/s11071-023-08824-w)|
| 光纤怪波 | [Optical rogue wave](./zh/examples/nlsmb.md) | 机理驱动 | MLP | 无监督学习 | - | [Paper](https://doi.org/10.1007/s11071-023-08824-w)|
-| 相场方程 | [Allen-Cahn](./zh/examples/allen_cahn.md) | 机理驱动 | MLP | 无监督学习 | [Data](https://paddle-org.bj.bcebos.com/paddlescience/datasets/AllenCahn/allen_cahn.mat) | [Paper](https://arxiv.org/pdf/2402.00326) |
+| 域分解 | [XPINN](./zh/examples/xpinns.md) | 机理驱动 | MLP | 无监督学习 | - | [Paper](https://doi.org/10.4208/cicp.OA-2020-0164)|

<br>
<p align="center"><b>技术科学(AI for Technology)</b></p>

docs/zh/examples/xpinns.md

Lines changed: 32 additions & 21 deletions

@@ -5,14 +5,25 @@
    ``` sh
    # linux
    wget -nc https://paddle-org.bj.bcebos.com/paddlescience/datasets/XPINN/XPINN_2D_PoissonEqn.mat -P ./data/
-
    # windows
-    # curl https://paddle-org.bj.bcebos.com/paddlescience/datasets/XPINN/XPINN_2D_PoissonEqn.mat --output ./data/XPINN_2D_PoissonEqn.mat
-
+    # curl https://paddle-org.bj.bcebos.com/paddlescience/datasets/XPINN/XPINN_2D_PoissonEqn.mat --create-dirs -o ./data/XPINN_2D_PoissonEqn.mat
    python xpinn.py
+    ```

+=== "模型评估命令"
+
+    ``` sh
+    # linux
+    wget -nc https://paddle-org.bj.bcebos.com/paddlescience/datasets/XPINN/XPINN_2D_PoissonEqn.mat -P ./data/
+    # windows
+    # curl https://paddle-org.bj.bcebos.com/paddlescience/datasets/XPINN/XPINN_2D_PoissonEqn.mat --create-dirs -o ./data/XPINN_2D_PoissonEqn.mat
+    python xpinn.py mode=eval EVAL.pretrained_model_path=https://paddle-org.bj.bcebos.com/paddlescience/models/XPINN/xpinn_pretrained.pdparams
    ```

+| 预训练模型 | 指标 |
+|:--| :--|
+| [xpinn_pretrained.pdparams](https://paddle-org.bj.bcebos.com/paddlescience/models/XPINN/xpinn_pretrained.pdparams) | L2Rel.l2_error: 0.04226 |
+
## 1. 背景简介

求解偏微分方程(PDE)是一类基础的物理问题,随着人工智能技术的高速发展,利用深度学习求解偏微分方程成为新的研究趋势。[XPINNs(Extended Physics-Informed Neural Networks)](https://doi.org/10.4208/cicp.OA-2020-0164)是一种适用于物理信息神经网络(PINNs)的广义时空域分解方法,以求解任意复杂几何域上的非线性偏微分方程。
@@ -57,7 +68,7 @@ $$ \gamma_2 =0.34+0.04 sin(5θ)+0.18 cos(3θ)+0.1 cos(6θ), θ \in [0,2π) $$
wget -nc https://paddle-org.bj.bcebos.com/paddlescience/datasets/XPINN/XPINN_2D_PoissonEqn.mat -P ./data/
```

-### 3.3 模型构建
+### 3.2 模型构建

在本问题中,我们使用神经网络 `MLP` 作为模型,在模型代码中定义三个 `MLP` ,分别作为三个子区域的模型。

@@ -74,15 +85,15 @@ examples/xpinn/xpinn.py:301:302
<figcaption>XPINN子网络的训练过程</figcaption>
</figure>

-### 3.4 约束构建
+### 3.3 约束构建

在本案例中,我们使用监督数据集对模型进行训练,因此需要构建监督约束。

在定义约束之前,我们需要指定数据集的路径等相关配置,将这些信息存放到对应的 YAML 文件中,如下所示。

-``` yaml linenums="43"
+``` yaml linenums="44"
--8<--
-examples/xpinn/conf/xpinn.yaml:43:44
+examples/xpinn/conf/xpinn.yaml:44:45
--8<--
```

@@ -102,17 +113,17 @@ examples/xpinn/xpinn.py:304:311
--8<--
```

-### 3.5 超参数设定
+### 3.4 超参数设定

设置训练轮数等参数,如下所示。

-``` yaml linenums="83"
+``` yaml linenums="84"
--8<--
-examples/xpinn/conf/xpinn.yaml:83:88
+examples/xpinn/conf/xpinn.yaml:84:89
--8<--
```

-### 3.6 优化器构建
+### 3.5 优化器构建

训练过程会调用优化器来更新模型参数,此处选择较为常用的 `Adam` 优化器。

@@ -122,7 +133,7 @@ examples/xpinn/xpinn.py:337:338
--8<--
```

-### 3.7 评估器构建
+### 3.6 评估器构建

在训练过程中通常会按一定轮数间隔,用验证集(测试集)评估当前模型的训练情况,因此使用 `ppsci.validate.SupervisedValidator` 构建评估器。

@@ -132,37 +143,37 @@ examples/xpinn/xpinn.py:324:335
--8<--
```

-评估指标为预测结果和真实结果的 RMSE 值,这里需自定义指标计算函数,如下所示。
+评估指标为预测结果和真实结果的 L2 相对误差值,这里需自定义指标计算函数,如下所示。

``` py linenums="194"
--8<--
examples/xpinn/xpinn.py:194:219
--8<--
```

-### 3.8 模型训练
+### 3.7 模型训练评估

-完成上述设置之后,只需要将上述实例化的对象按顺序传递给 `ppsci.solver.Solver`,然后启动训练。
+完成上述设置之后,只需要将上述实例化的对象按顺序传递给 `ppsci.solver.Solver`,然后启动训练、评估

``` py linenums="340"
--8<--
-examples/xpinn/xpinn.py:340:357
+examples/xpinn/xpinn.py:340:350
--8<--
```

-### 3.9 结果可视化
+### 3.8 结果可视化

训练完毕之后程序会对测试集中的数据进行预测,并以图片的形式对结果进行可视化,如下所示。

-``` py linenums="360"
+``` py linenums="352"
--8<--
-examples/xpinn/xpinn.py:360:384
+examples/xpinn/xpinn.py:352:376
--8<--
```

## 4. 完整代码

-``` py linenums="1" title="cfdgcn.py"
+``` py linenums="1" title="xpinn.py"
--8<--
examples/xpinn/xpinn.py
--8<--
@@ -181,4 +192,4 @@ examples/xpinn/xpinn.py

## 6. 参考文献

-* [A.D.Jagtap, G.E.Karniadakis, Extended Physics-Informed Neural Networks (XPINNs): A Generalized Space-Time Domain Decomposition Based Deep Learning Framework for Nonlinear Partial Differential Equations, Commun. Comput. Phys., Vol.28, No.5, 2002-2041, 2020.](https://doi.org/10.4208/cicp.OA-2020-0164)
+- [Extended Physics-Informed Neural Networks (XPINNs): A Generalized Space-Time Domain Decomposition Based Deep Learning Framework for Nonlinear Partial Differential Equations](https://doi.org/10.4208/cicp.OA-2020-0164)
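The new "模型评估命令" tab re-downloads the same `XPINN_2D_PoissonEqn.mat` file before running `python xpinn.py mode=eval ...`. For readers who prefer to stay in Python, here is a minimal sketch of the same download step; the key listing at the end is only illustrative, since the diff does not show which arrays the `.mat` file contains.

``` py
# Sketch only: fetch the dataset used by the train and eval commands above,
# then list the arrays it contains (requires scipy).
import os
import urllib.request

import scipy.io

URL = (
    "https://paddle-org.bj.bcebos.com/paddlescience/datasets/"
    "XPINN/XPINN_2D_PoissonEqn.mat"
)
PATH = "./data/XPINN_2D_PoissonEqn.mat"

os.makedirs("./data", exist_ok=True)
if not os.path.exists(PATH):  # mirrors wget -nc / curl --create-dirs
    urllib.request.urlretrieve(URL, PATH)

data = scipy.io.loadmat(PATH)
print([k for k in data if not k.startswith("__")])  # names of stored arrays
```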

examples/xpinn/conf/xpinn.yaml

Lines changed: 10 additions & 9 deletions

@@ -1,19 +1,20 @@
+defaults:
+  - ppsci_default
+  - TRAIN: train_default
+  - TRAIN/ema: ema_default
+  - TRAIN/swa: swa_default
+  - EVAL: eval_default
+  - INFER: infer_default
+  - hydra/job/config/override_dirname/exclude_keys: exclude_keys_default
+  - _self_
+
hydra:
  run:
    # dynamic output directory according to running time and override name
    dir: outputs_xpinn/${now:%Y-%m-%d}/${now:%H-%M-%S}/${hydra.job.override_dirname}
  job:
    name: ${mode} # name of logfile
    chdir: false # keep current working direcotry unchaned
-    config:
-      override_dirname:
-        exclude_keys:
-          - TRAIN.checkpoint_path
-          - TRAIN.pretrained_model_path
-          - EVAL.pretrained_model_path
-          - mode
-          - output_dir
-          - log_freq
  callbacks:
    init_callback:
      _target_: ppsci.utils.callbacks.InitCallback
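The rewritten config leans on Hydra's `defaults` list: shared base configs such as `train_default`, `eval_default` and `exclude_keys_default` are composed first, and `_self_` lets this file override them, which is why the hand-written `exclude_keys` block could be dropped. A rough sketch of that composition order using OmegaConf directly (the keys and values are invented for illustration, not PaddleScience's real defaults):

``` py
# Illustrative sketch of Hydra-style composition: entries earlier in the
# defaults list are loaded first, then the config file itself (_self_)
# overrides them. Values below are invented for demonstration.
from omegaconf import OmegaConf

train_default = OmegaConf.create({"TRAIN": {"epochs": 10, "save_freq": 0}})
eval_default = OmegaConf.create({"EVAL": {"eval_with_no_grad": False}})
this_file = OmegaConf.create(
    {"TRAIN": {"epochs": 100}, "EVAL": {"eval_with_no_grad": True}}
)

# merge order mirrors the defaults list: base configs first, _self_ last
cfg = OmegaConf.merge(train_default, eval_default, this_file)
print(OmegaConf.to_yaml(cfg))  # TRAIN.epochs -> 100, EVAL.eval_with_no_grad -> true
```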

examples/xpinn/plotting.py

Lines changed: 15 additions & 9 deletions

@@ -81,7 +81,9 @@ def log_image(
    gridspec.GridSpec(1, 1)
    ax = plt.subplot2grid((1, 1), (0, 0))
    tcf = ax.tricontourf(triang_total, np.squeeze(residual_u_exact), 100, cmap="jet")
-    ax.add_patch(patches.Polygon(xx, closed=True, fill=True, color="w", edgecolor="w"))
+    ax.add_patch(
+        patches.Polygon(xx, closed=True, fill=True, facecolor="w", edgecolor="w")
+    )
    tcbar = fig.colorbar(tcf)
    tcbar.ax.tick_params(labelsize=28)
    ax.set_xlabel("$x$", fontsize=32)
@@ -111,7 +113,9 @@ def log_image(
    gridspec.GridSpec(1, 1)
    ax = plt.subplot2grid((1, 1), (0, 0))
    tcf = ax.tricontourf(triang_total, residual_u_pred.flatten(), 100, cmap="jet")
-    ax.add_patch(patches.Polygon(xx, closed=True, fill=True, color="w", edgecolor="w"))
+    ax.add_patch(
+        patches.Polygon(xx, closed=True, fill=True, facecolor="w", edgecolor="w")
+    )
    tcbar = fig.colorbar(tcf)
    tcbar.ax.tick_params(labelsize=28)
    ax.set_xlabel("$x$", fontsize=32)
@@ -146,7 +150,9 @@ def log_image(
        100,
        cmap="jet",
    )
-    ax.add_patch(patches.Polygon(xx, closed=True, fill=True, color="w", edgecolor="w"))
+    ax.add_patch(
+        patches.Polygon(xx, closed=True, fill=True, facecolor="w", edgecolor="w")
+    )
    tcbar = fig.colorbar(tcf)
    tcbar.ax.tick_params(labelsize=28)
    ax.set_xlabel("$x$", fontsize=32)
@@ -173,13 +179,13 @@ def log_image(
    plt.show()


-pgf_with_latex = { # setup matplotlib to use latex for output
+PGF_WITH_LATEX = { # setup matplotlib to use latex for output
    "pgf.texsystem": "pdflatex", # change this if using xetex or latex
    "text.usetex": True, # use LaTeX to write all text
-    "font.family": "serif",
-    "font.serif": [], # blank entries should cause plots to inherit fonts from the document
-    "font.sans-serif": [],
-    "font.monospace": [],
+    # "font.family": "serif",
+    # "font.serif": [], # blank entries should cause plots to inherit fonts from the document
+    # "font.sans-serif": [],
+    # "font.monospace": [],
    "axes.labelsize": 10, # LaTeX default is 10pt font.
    "font.size": 10,
    "legend.fontsize": 8, # Make the legend/label fonts a little smaller
@@ -193,4 +199,4 @@ def log_image(
        ]
    ),
}
-mpl.rcParams.update(pgf_with_latex)
+mpl.rcParams.update(PGF_WITH_LATEX)
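The plotting changes swap the generic `color=` keyword for explicit `facecolor=`/`edgecolor=` when whitening the masked region, presumably to avoid Matplotlib's warning that `color` overrides the other two. A self-contained sketch of the same masking pattern on synthetic data (the triangle coordinates and the field below are invented, not the XPINN geometry):

``` py
# Synthetic example: mask part of a tricontourf plot with a white polygon,
# passing facecolor/edgecolor explicitly as the updated plotting.py does.
import matplotlib.patches as patches
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 300)
y = rng.uniform(-1.0, 1.0, 300)
z = np.sin(np.pi * x) * np.cos(np.pi * y)

fig, ax = plt.subplots()
tcf = ax.tricontourf(x, y, z, 100, cmap="jet")
mask_xy = [(-0.3, -0.2), (0.3, -0.2), (0.0, 0.35)]  # invented mask region
ax.add_patch(
    patches.Polygon(mask_xy, closed=True, fill=True, facecolor="w", edgecolor="w")
)
fig.colorbar(tcf)
fig.savefig("masked_contour.png")
```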

examples/xpinn/xpinn.py

Lines changed: 6 additions & 17 deletions

@@ -191,7 +191,7 @@ def residual_func(output_der: paddle.Tensor, input: paddle.Tensor) -> paddle.Ten
    return loss1 + loss2 + loss3


-def eval_rmse_func(
+def eval_l2_rel_func(
    output_dict: Dict[str, paddle.Tensor],
    label_dict: Dict[str, paddle.Tensor],
    *args,
@@ -329,7 +329,7 @@ def train_dataset_transform_func(
            "residual2_u": lambda out: out["residual2_u"],
            "residual3_u": lambda out: out["residual3_u"],
        },
-        metric={"RMSE": ppsci.metric.FunctionalMetric(eval_rmse_func)},
+        metric={"L2Rel": ppsci.metric.FunctionalMetric(eval_l2_rel_func)},
        name="sup_validator",
    )
    validator = {sup_validator.name: sup_validator}
@@ -341,17 +341,9 @@ def train_dataset_transform_func(
    solver = ppsci.solver.Solver(
        custom_model,
        constraint,
-        cfg.output_dir,
-        optimizer,
-        None,
-        cfg.TRAIN.epochs,
-        cfg.TRAIN.iters_per_epoch,
-        save_freq=cfg.TRAIN.save_freq,
-        eval_during_train=cfg.TRAIN.eval_during_train,
-        eval_freq=cfg.TRAIN.eval_freq,
+        optimizer=optimizer,
        validator=validator,
-        eval_with_no_grad=cfg.EVAL.eval_with_no_grad,
-        checkpoint_path=cfg.TRAIN.checkpoint_path,
+        cfg=cfg,
    )

    solver.train()
@@ -412,19 +404,16 @@ def evaluate(cfg: DictConfig):
            "residual2_u": lambda out: out["residual2_u"],
            "residual3_u": lambda out: out["residual3_u"],
        },
-        metric={"RMSE": ppsci.metric.FunctionalMetric(eval_rmse_func)},
+        metric={"L2Rel": ppsci.metric.FunctionalMetric(eval_l2_rel_func)},
        name="sup_validator",
    )
    validator = {sup_validator.name: sup_validator}

    # initialize solver
    solver = ppsci.solver.Solver(
        custom_model,
-        output_dir=cfg.output_dir,
-        eval_freq=cfg.TRAIN.eval_freq,
        validator=validator,
-        eval_with_no_grad=cfg.EVAL.eval_with_no_grad,
-        checkpoint_path=cfg.TRAIN.checkpoint_path,
+        cfg=cfg,
    )

    solver.eval()
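The renamed `eval_l2_rel_func` is what produces the `L2Rel.l2_error` value reported in the pretrained-model table above; its body is not part of this diff. Below is a plausible sketch of an L2 relative error metric with the `(output_dict, label_dict) -> dict` signature that `ppsci.metric.FunctionalMetric` wraps — an assumption for illustration, not the repository's exact code.

``` py
# Sketch only: ||u_pred - u_ref||_2 / ||u_ref||_2 over the keys present in
# both dicts, returned as a dict so FunctionalMetric can log it by name.
from typing import Dict

import paddle


def eval_l2_rel_func(
    output_dict: Dict[str, paddle.Tensor],
    label_dict: Dict[str, paddle.Tensor],
    *args,
) -> Dict[str, paddle.Tensor]:
    keys = [k for k in sorted(label_dict) if k in output_dict]
    u_pred = paddle.concat([output_dict[k].reshape([-1]) for k in keys])
    u_ref = paddle.concat([label_dict[k].reshape([-1]) for k in keys])
    l2_error = paddle.linalg.norm(u_pred - u_ref) / paddle.linalg.norm(u_ref)
    return {"l2_error": l2_error}
```

The other change in this file, passing `cfg=cfg` to `ppsci.solver.Solver`, lets the solver pick up output directory, epoch count, checkpointing and evaluation options from the Hydra config instead of spelling each one out as a keyword argument.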

mkdocs.yml

Lines changed: 1 addition & 0 deletions

@@ -48,6 +48,7 @@ nav:
        - Rossler_transform_physx: zh/examples/rossler.md
        - Volterra_IDE: zh/examples/volterra_ide.md
        - NLSMB: zh/examples/nlsmb.md
+        - XPINN: zh/examples/xpinns.md
      - 技术科学(AI for Technology):
        - 流体:
          - AMGNet: zh/examples/amgnet.md

ppsci/arch/sfnonet.py

Lines changed: 1 addition & 1 deletion

@@ -412,7 +412,7 @@ class SFNONet(base.Arch):
        norm (str, optional): Normalization layer to use. Defaults to None.
        ada_in_features (int,optional): The input channles of the adaptive normalization.Defaults to None.
        preactivation (bool, optional): Whether to use resnet-style preactivation. Defaults to False.
-        skip (str, optional): Type of skip connection to use,{'linear', 'identity', 'soft-gating'}.
+        fno_skip (str, optional): Type of skip connection to use,{'linear', 'identity', 'soft-gating'}.
            Defaults to "soft-gating".
        separable (bool, optional): Whether to use a depthwise separable spectral convolution.
            Defaults to False.
