Commit 8c29ded

feat(ppsci): support data_effient_nopt

1 parent ed16bb4 commit 8c29ded

21 files changed: +135 / -520 lines

README.md

Lines changed: 1 addition & 0 deletions
@@ -47,6 +47,7 @@ PaddleScience is a scientific computing suite developed on the deep learning framework PaddlePaddle
 | Differential equation | [Rossler system](https://paddlescience-docs.readthedocs.io/zh-cn/latest/zh/examples/rossler) | Data-driven | Transformer-Physx | Supervised learning | [Data](https://github.com/zabaras/transformer-physx) | [Paper](https://arxiv.org/abs/2010.03957) |
 | Operator learning | [DeepONet](https://paddlescience-docs.readthedocs.io/zh-cn/latest/zh/examples/deeponet) | Data-driven | MLP | Supervised learning | [Data](https://deepxde.readthedocs.io/en/latest/demos/operator/antiderivative_unaligned.html) | [Paper](https://export.arxiv.org/pdf/1910.03193.pdf) |
 | Differential equation | [Gradient-enhanced physics-informed PDE solving](https://github.com/PaddlePaddle/PaddleScience/blob/develop/examples/gpinn/poisson_1d.py) | Physics-driven | gPINN | Unsupervised learning | - | [Paper](https://doi.org/10.1016/j.cma.2022.114823) |
+| Differential equation | [PDE solving](https://paddlescience-docs.readthedocs.io/zh-cn/latest/zh/examples/data_efficient_nopt) | Data-driven | FNO/Transformer | Unsupervised learning | - | [Paper](https://arxiv.org/abs/2402.15734) |
 | Integral equation | [Volterra integral equation](https://paddlescience-docs.readthedocs.io/zh-cn/latest/zh/examples/volterra_ide) | Physics-driven | MLP | Unsupervised learning | - | [Project](https://github.com/lululxvi/deepxde/blob/master/examples/pinn_forward/Volterra_IDE.py) |
 | Differential equation | [Fractional differential equation](https://github.com/PaddlePaddle/PaddleScience/blob/develop/examples/fpde/fractional_poisson_2d.py) | Physics-driven | MLP | Unsupervised learning | - | - |
 | Optical soliton | [Optical soliton](https://paddlescience-docs.readthedocs.io/zh-cn/latest/zh/examples/nlsmb) | Physics-driven | MLP | Unsupervised learning | - | [Paper](https://doi.org/10.1007/s11071-023-08824-w) |

docs/index.md

Lines changed: 1 addition & 0 deletions
@@ -83,6 +83,7 @@
 | Differential equation | [Rossler system](./zh/examples/rossler.md) | Data-driven | Transformer-Physx | Supervised learning | [Data](https://github.com/zabaras/transformer-physx) | [Paper](https://arxiv.org/abs/2010.03957) |
 | Operator learning | [DeepONet](./zh/examples/deeponet.md) | Data-driven | MLP | Supervised learning | [Data](https://deepxde.readthedocs.io/en/latest/demos/operator/antiderivative_unaligned.html) | [Paper](https://export.arxiv.org/pdf/1910.03193.pdf) |
 | Differential equation | [Gradient-enhanced physics-informed PDE solving](https://github.com/PaddlePaddle/PaddleScience/blob/develop/examples/gpinn/poisson_1d.py) | Physics-driven | gPINN | Unsupervised learning | - | [Paper](https://doi.org/10.1016/j.cma.2022.114823) |
+| Differential equation | [PDE solving](./zh/examples/data_efficient_nopt.md) | Data-driven | FNO/Transformer | Unsupervised learning | - | [Paper](https://arxiv.org/abs/2402.15734) |
 | Integral equation | [Volterra integral equation](./zh/examples/volterra_ide.md) | Physics-driven | MLP | Unsupervised learning | - | [Project](https://github.com/lululxvi/deepxde/blob/master/examples/pinn_forward/Volterra_IDE.py) |
 | Differential equation | [Fractional differential equation](https://github.com/PaddlePaddle/PaddleScience/blob/develop/examples/fpde/fractional_poisson_2d.py) | Physics-driven | MLP | Unsupervised learning | - | - |
 | Optical soliton | [Optical soliton](./zh/examples/nlsmb.md) | Physics-driven | MLP | Unsupervised learning | - | [Paper](https://doi.org/10.1007/s11071-023-08824-w) |
docs/zh/examples/data_efficient_nopt.md

Lines changed: 80 additions & 0 deletions

@@ -0,0 +1,80 @@
# DataEfficientNopt

=== "Model training commands"

    ``` sh
    cd examples/data_efficient_nopt
    # Download the poisson_64 data from https://drive.google.com/drive/folders/1crIsTZGxZULWhrXkwGDiWF33W6RHxJkf
    # Download the helmholtz_64 data from https://drive.google.com/drive/folders/1UjIaF6FsjmN_xlGGSUX-1K2V3EF2Zalw

    # Update the file paths in `config/operators_poisson.yaml` or `config/operators_helmholtz.yaml` to specify `train_path`, `val_path`, `test_path`, `scales_path`, and `train_rand_idx_path`.

    # poisson_64 pretraining
    python pretrain_basic.py --run_name r0 --config pois-64-pretrain-e1_20_m3 --yaml_config ./config/operators_poisson.yaml

    # poisson_64 fine-tuning
    python pretrain_basic.py --run_name r0 --config pois-64-e5_15_b0 --yaml_config ./config/operators_poisson.yaml

    # helmholtz_64 pretraining
    python pretrain_basic.py --run_name r0 --config helm-64-pretrain-o1_20_m1 --yaml_config ./config/operators_helmholtz.yaml

    # helmholtz_64 fine-tuning
    python pretrain_basic.py --run_name r0 --config helm-64-o5_15_ft5_r2 --yaml_config ./config/operators_helmholtz.yaml
    ```

=== "Model evaluation commands"

    Not available yet.

=== "Model export commands"

    Not available yet.

=== "Model inference commands"

    ``` sh
    cd examples/data_efficient_nopt
    # Update the file paths in `config/inference_poisson.yaml` or `config/inference_helmholtz.yaml` to specify `train_path`, `test_path`, and `scales_path`.
    # Use a fine-tuned checkpoint from the experiment directory ('exp'), or use `model_convert.py` to convert the official checkpoint.
    python3 inference_fno_helmholtz_poisson.py --config ./config/inference_poisson.yaml --ckpt_path <ckpt_path> --num_demos 1
    ```
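
Before filling in the dataset paths in the YAML configs above, it can be helpful to check that the downloaded HDF5 files open correctly and to list the datasets they contain. The snippet below is optional and purely illustrative; the file path is a placeholder and the dataset keys depend on what the downloaded archives actually contain.

``` py
import h5py

# Placeholder path: point this at one of the downloaded .h5 files.
path = "data/possion_64/poisson_64_e5_15_train.h5"

with h5py.File(path, "r") as f:
    # Print every dataset in the file together with its shape and dtype.
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)

    f.visititems(show)
```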

## 1. Background

data_efficient_nopt aims to improve the data efficiency of operator learning for partial differential equations (PDEs) by designing an unsupervised pretraining scheme that reduces the dependence on costly simulation data. It exploits unlabeled PDE data (no simulated solutions required) and pretrains the neural operator with physics-inspired reconstruction proxy tasks. To improve out-of-distribution (OOD) generalization, a similarity-based in-context learning method is further introduced, which lets the neural operator flexibly use in-context examples without extra training cost or design effort. Experiments on a range of PDEs show that the method is highly data-efficient, generalizes better, and even outperforms conventional vision-pretrained models.

## 2. Model principles

The paper addresses the data-efficiency problem that arises when deep learning is used to solve scientific problems governed by partial differential equations (PDEs). Current neural-operator methods need large amounts of high-fidelity PDE data, which makes numerical simulation expensive. To reduce the dependence on such costly data, the authors propose an unsupervised pretraining method that uses unlabeled PDE data to improve data efficiency and generalization.

The paper tackles this problem with two components:

1. Unsupervised pretraining
    - Unlabeled PDE data: the authors define unlabeled PDE data as samples that contain no PDE solutions, which avoids expensive numerical simulation.
    - Physics-inspired proxy tasks: two reconstruction-based proxy tasks are proposed, Masked Autoencoder (MAE) and Super-resolution (SR). MAE randomly masks part of the input and asks the model to reconstruct the complete input, learning invariance to sparse sensing; SR blurs the input with a Gaussian filter and asks the model to reconstruct the high-resolution input, learning invariance to resolution and blurring (see the masking sketch after this list).
    - Pretraining: the model is pretrained on unlabeled PDE data with these proxy tasks, yielding a better initialization and reducing the amount of simulated data needed for subsequent supervised training.

2. In-context learning
    - Similarity mining: at inference time, similar examples are found by computing the distance between the outputs for the query input and for the support examples (demos).
    - Aggregated prediction: for each spatio-temporal location of the query, the solutions of the retrieved similar examples are aggregated to produce the final prediction (see the retrieval sketch after this list).
    - Advantages: the method adds zero extra training cost at inference time, integrates seamlessly into existing training pipelines, and improves generalization on OOD data.

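The masked-reconstruction (MAE) proxy task described above can be pictured with the following minimal sketch. It is illustrative only and not the repository's implementation: the `[B, C, H, W]` tensor layout, the mask ratio, and the `model` interface are assumptions.

``` py
import paddle


def masked_reconstruction_loss(model, x, mask_ratio=0.5):
    """One MAE-style pretraining step on unlabeled PDE inputs.

    Hide random spatial locations of the input field and train the model to
    reconstruct the full input, so no simulated PDE solution is needed.
    """
    # Random binary keep-mask over spatial locations, shared across channels.
    keep = (paddle.rand(x.shape[:1] + [1] + x.shape[2:]) > mask_ratio).astype(x.dtype)
    x_masked = x * keep

    # The reconstruction target is the original (unlabeled) input itself.
    recon = model(x_masked)
    return paddle.nn.functional.mse_loss(recon, x)
```

In an actual pretraining loop this loss would simply replace the supervised operator loss, while the optimizer and data pipeline stay unchanged.
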
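The similarity-based in-context step can be sketched in the same spirit. Again this is illustrative only: the L2 distance between predictions, the choice of `k`, and the equal-weight blend of the model prediction with the neighbor average are assumptions rather than the repository's exact aggregation rule.

``` py
import paddle


def in_context_predict(model, query_x, demo_xs, demo_ys, k=3):
    """Aggregate the solutions of the k most similar support examples (demos).

    Shapes assumed for illustration: query_x is [C, H, W]; demo_xs / demo_ys
    stack the demos along the first axis.
    """
    pred_q = model(query_x.unsqueeze(0))  # [1, 1, H, W]
    pred_d = model(demo_xs)               # [N, 1, H, W]

    # Similarity mining: L2 distance between the query prediction and each
    # demo prediction.
    diff = (pred_d - pred_q).flatten(start_axis=1)        # [N, H*W]
    dists = paddle.sqrt(paddle.sum(diff * diff, axis=1))  # [N]

    # Aggregated prediction: average the true solutions of the k nearest
    # demos and blend them with the model's own prediction.
    idx = paddle.argsort(dists)[:k]
    neighbor_avg = paddle.mean(demo_ys[idx], axis=0, keepdim=True)
    return 0.5 * (pred_q + neighbor_avg)
```
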
## 3. Complete code

``` py linenums="1" title="examples/data_efficient_nopt/pretrain_basic.py"
--8<--
examples/data_efficient_nopt/pretrain_basic.py
--8<--
```

``` py linenums="1" title="examples/data_efficient_nopt/inference_fno_helmholtz_poisson.py"
--8<--
examples/data_efficient_nopt/inference_fno_helmholtz_poisson.py
--8<--
```

## 4. Results

## 5. References

- [Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning](https://arxiv.org/abs/2402.15734)

examples/data_efficient_nopt/README.md

Lines changed: 0 additions & 40 deletions
This file was deleted.

examples/data_efficient_nopt/config/inference_helmholtz.yaml

Lines changed: 3 additions & 8 deletions
@@ -1,12 +1,7 @@
 default:
-  train_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o15_20_train.h5'
-  test_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o15_20_test.h5'
-  # scales_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o15_20_train_scale.npy'
-  # datapath: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o16_test.h5'
-  # datapath: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o15_test.h5'
-  # datapath: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o16_tensor15_test.h5'
-  # datapath: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_test.h5'
-  scales_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_train_scale.npy'
+  train_path: 'data/helmholtz_64/helmholtz_64/helmholtz_64_o15_20_train.h5'
+  test_path: 'data/helmholtz_64/helmholtz_64/helmholtz_64_o15_20_test.h5'
+  scales_path: 'data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_train_scale.npy'
 
   num_data_workers: 1
   subsample: 1

examples/data_efficient_nopt/config/inference_poisson.yaml

Lines changed: 3 additions & 5 deletions
@@ -1,9 +1,7 @@
 default:
-  # datapath: '/home/aistudio/data_efficient_nopt/data/possion_64/poisson_64_e5_15_test.h5'
-  train_path: '/home/aistudio/data_efficient_nopt/data/possion_64/poisson_64_e15_50_train.h5' # pick demos
-  test_path: '/home/aistudio/data_efficient_nopt/data/possion_64/poisson_64_e15_50_test.h5'
-  # datapath: '/home/aistudio/data_efficient_nopt/data/possion_64/poisson_64_e20_test.h5'
-  scales_path: '/home/aistudio/data_efficient_nopt/data/possion_64/poisson_64_e5_15_train_scale.npy'
+  train_path: 'data/possion_64/poisson_64_e15_50_train.h5' # pick demos
+  test_path: 'data/possion_64/poisson_64_e15_50_test.h5'
+  scales_path: 'data/possion_64/poisson_64_e5_15_train_scale.npy'
 
   num_data_workers: 1
   subsample: 1

examples/data_efficient_nopt/config/operators_helmholtz.yaml

Lines changed: 14 additions & 15 deletions
@@ -78,11 +78,10 @@ helmholtz: &helmholtz
 
 helm-64-scale-o5_15: &helm_64_o5_15
   <<: *helmholtz
-  train_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_train.h5'
-  val_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_val.h5'
-  test_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_test.h5'
-  scales_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_train_scale.npy'
-  # train_rand_idx_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/old_gen/train_rand_idx.npy'
+  train_path: 'data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_train.h5'
+  val_path: 'data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_val.h5'
+  test_path: 'data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_test.h5'
+  scales_path: 'data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_train_scale.npy'
   batch_size: 128
   in_dim: 3
   out_dim: 1
@@ -101,11 +100,11 @@ helm-64-scale-o5_15: &helm_64_o5_15
 
 helm-64-pretrain-o1_20: &helm_64_o1_20_pt
   <<: *helmholtz
-  train_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o1_20_train.h5'
-  val_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o1_20_val.h5'
-  test_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o1_20_test.h5'
-  scales_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o1_20_train_scale.npy'
-  train_rand_idx_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/train_rand_idx.npy'
+  train_path: 'data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o1_20_train.h5'
+  val_path: 'data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o1_20_val.h5'
+  test_path: 'data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o1_20_test.h5'
+  scales_path: 'data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o1_20_train_scale.npy'
+  train_rand_idx_path: 'data_efficient_nopt/data/helmholtz_64/helmholtz_64/train_rand_idx.npy'
   batch_size: 128
   in_dim: 3
   out_dim: 1
@@ -128,11 +127,11 @@ helm-64-pretrain-o1_20_ft: &helm_64_o1_20_ft
 
 helm-64-finetune-o5_15: &helm_64_o5_15_ft
   <<: *helmholtz
-  train_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_train.h5'
-  val_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_val.h5'
-  test_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_test.h5'
-  scales_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_train_scale.npy'
-  train_rand_idx_path: '/home/aistudio/data_efficient_nopt/data/helmholtz_64/helmholtz_64/train_rand_idx.npy'
+  train_path: 'data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_train.h5'
+  val_path: 'data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_val.h5'
+  test_path: 'data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_test.h5'
+  scales_path: 'data/helmholtz_64/helmholtz_64/helmholtz_64_o5_15_train_scale.npy'
+  train_rand_idx_path: 'data/helmholtz_64/helmholtz_64/train_rand_idx.npy'
   batch_size: 128
   in_dim: 3 # normal helmholtz has 3 dims, joint has 4
   out_dim: 1

examples/data_efficient_nopt/config/operators_poisson.yaml

Lines changed: 15 additions & 15 deletions
@@ -78,11 +78,11 @@ poisson: &poisson
 
 poisson-64-scale-e5_15: &poisson_64_e5_15
   <<: *poisson
-  train_path: '/home/aistudio/data_efficient_nopt/data/possion_64/poisson_64_e5_15_train.h5'
-  val_path: '/home/aistudio/data_efficient_nopt/data/possion_64/poisson_64_e5_15_val.h5'
-  test_path: '/home/aistudio/data_efficient_nopt/data/possion_64/poisson_64_e5_15_test.h5'
-  scales_path: '/home/aistudio/data_efficient_nopt/data/possion_64/poisson_64_e5_15_train_scale.npy'
-  train_rand_idx_path: '/home/aistudio/data_efficient_nopt/data/possion_64/train_rand_idx.npy'
+  train_path: 'data/possion_64/poisson_64_e5_15_train.h5'
+  val_path: 'data/possion_64/poisson_64_e5_15_val.h5'
+  test_path: 'data/possion_64/poisson_64_e5_15_test.h5'
+  scales_path: 'data/possion_64/poisson_64_e5_15_train_scale.npy'
+  train_rand_idx_path: 'data/possion_64/train_rand_idx.npy'
   batch_size: 128
   log_to_wandb: !!bool False
   learning_rate: 1E-3
@@ -101,11 +101,11 @@ poisson-64-scale-e5_15: &poisson_64_e5_15
 
 pois-64-pretrain-e1_20: &pois_64_e1_20_pt
   <<: *poisson
-  train_path: '/home/aistudio/data_efficient_nopt/data/possion_64/poisson_64_e1_20_train.h5'
-  val_path: '/home/aistudio/data_efficient_nopt/data/possion_64/poisson_64_e1_20_val.h5'
-  test_path: '/home/aistudio/data_efficient_nopt/data/possion_64/poisson_64_e1_20_test.h5'
-  scales_path: '/home/aistudio/data_efficient_nopt/data/possion_64/poisson_64_e1_20_train_scale.npy'
-  train_rand_idx_path: '/home/aistudio/data_efficient_nopt/data/possion_64/train_rand_idx.npy'
+  train_path: 'data/possion_64/poisson_64_e1_20_train.h5'
+  val_path: 'data/possion_64/poisson_64_e1_20_val.h5'
+  test_path: 'data/possion_64/poisson_64_e1_20_test.h5'
+  scales_path: 'data/possion_64/poisson_64_e1_20_train_scale.npy'
+  train_rand_idx_path: 'data/possion_64/train_rand_idx.npy'
   batch_size: 128
   log_to_wandb: !!bool False
   mode_cut: 32
@@ -122,11 +122,11 @@ pois-64-pretrain-e1_20: &pois_64_e1_20_pt
 
 pois-64-finetune-e5_15: &pois_64_e5_15_ft
   <<: *poisson
-  train_path: '/home/aistudio/data_efficient_nopt/data/possion_64/poisson_64_e5_15_train.h5'
-  val_path: '/home/aistudio/data_efficient_nopt/data/possion_64/poisson_64_e5_15_val.h5'
-  test_path: '/home/aistudio/data_efficient_nopt/data/possion_64/poisson_64_e5_15_test.h5'
-  scales_path: '/home/aistudio/data_efficient_nopt/data/possion_64/poisson_64_e5_15_train_scale.npy'
-  train_rand_idx_path: '/home/aistudio/data_efficient_nopt/data/possion_64/train_rand_idx.npy'
+  train_path: 'data/possion_64/poisson_64_e5_15_train.h5'
+  val_path: 'data/possion_64/poisson_64_e5_15_val.h5'
+  test_path: 'data/possion_64/poisson_64_e5_15_test.h5'
+  scales_path: 'data/possion_64/poisson_64_e5_15_train_scale.npy'
+  train_rand_idx_path: 'data/possion_64/train_rand_idx.npy'
   batch_size: 128
   log_to_wandb: !!bool False
   mode_cut: 32

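Note that the named configurations in `operators_poisson.yaml` and `operators_helmholtz.yaml` rely on YAML anchors and merge keys (`poisson: &poisson`, `<<: *poisson`), so an entry such as `pois-64-pretrain-e1_20` inherits the shared base fields and only overrides the paths and hyperparameters shown above. A minimal sketch of inspecting a merged entry with PyYAML (which resolves `<<` merge keys) is shown below; the repository may load these configs through its own helper, so treat this purely as illustration.

``` py
import yaml

# Illustrative only: inspect one named entry from the operators config.
with open("examples/data_efficient_nopt/config/operators_poisson.yaml") as f:
    configs = yaml.safe_load(f)

# PyYAML resolves `<<: *poisson`, so the named config already contains the
# inherited base fields plus its own overrides (paths, batch size, ...).
cfg = configs["pois-64-pretrain-e1_20"]
print(cfg["train_path"], cfg["batch_size"], cfg["mode_cut"])
```
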
examples/data_efficient_nopt/inference_fno_helmholtz_poisson.py

Lines changed: 0 additions & 10 deletions
@@ -11,17 +11,12 @@
 import paddle
 import paddle.distributed as dist
 import yaml
-
-# from utils.data_utils import get_data_loader
 from data_utils.pois_helm_datasets import get_data_loader
 from models.fno import build_fno
 from pretrain_basic import l2_err
 from scipy.stats import linregress
 from tqdm import tqdm
 
-# from utils.loss_utils import LossMSE
-# from utils.YParams import YParams
-
 
 @paddle.no_grad()
 def get_pred(args):
@@ -166,16 +161,11 @@ def get_pred(args):
 if __name__ == "__main__":
     parser = ArgumentParser()
     parser.add_argument("--config", type=str, default="config/inference_helmholtz.yaml")
-    # parser.add_argument('--ckpt_path', type=str, default='/pscratch/sd/p/puren93/neuralopt/expts/helm-64-o5_15_ft0/all_mask_m6/checkpoints/ckpt.tar')
     parser.add_argument(
         "--ckpt_path",
         type=str,
         default="/pscratch/sd/j/jsong/deff_archive/neuraloperators-foundation_/expts/helm-64-o5_15_ft0/b012_m6/checkpoints/ckpt.tar",
     )
-    # parser.add_argument('--ckpt_path', type=str, default='/pscratch/sd/j/jsong/deff_archive/neuraloperators-foundation_/expts/helm-64-o5_15_ft0/b01_m6/checkpoints/ckpt.tar')
-    # parser.add_argument('--ckpt_path', type=str, default='/pscratch/sd/j/jsong/neuraloperators-foundation/expts/pois-64-e5_15_ft9/b01_m0/checkpoints/ckpt.tar') # [X]
-    # parser.add_argument('--ckpt_path', type=str, default='/pscratch/sd/j/jsong/deff_archive/neuraloperators-foundation_/expts/pois-64-e5_15_ft9/b01_m0_/checkpoints/ckpt.tar')
-    # parser.add_argument('--ckpt_path', type=str, default='/pscratch/sd/j/jsong/deff_archive/neuraloperators-foundation_/expts/pois-64-e5_15_ft9/b01_m0_r0/checkpoints/ckpt.tar')
     parser.add_argument("--num_demos", type=int, default=None)
     parser.add_argument(
         "--tqdm", action="store_true", default=False, help="Turn on the tqdm"

examples/data_efficient_nopt/models/ffn.py

Lines changed: 0 additions & 1 deletion
@@ -1,4 +1,3 @@
-# import numpy as np
 import paddle
 import paddle.nn as nn
 
