【Hackathon 8th No.16】Reproduction of the data_efficient_nopt paper #1111


Open · wants to merge 40 commits into base: develop
40 commits
ab7d6a7
feat(ppsci): support data_effient_nopt for training and test
xiaoyewww Mar 23, 2025
e5b39df
feat(ppsci): support data_effient_nopt for training and test
xiaoyewww Mar 26, 2025
29ae4c2
feat(ppsci): support data_effient_nopt for training and test
xiaoyewww Mar 26, 2025
747e543
feat(ppsci): support data_effient_nopt for training and test
xiaoyewww Mar 26, 2025
617d37a
feat(ppsci): support data_effient_nopt for training and test
xiaoyewww Mar 31, 2025
996df12
feat(ppsci): support data_effient_nopt for training and test
xiaoyewww Apr 1, 2025
b1d8a3d
feat(ppsci): support data_effient_nopt for inference
xiaoyewww Apr 2, 2025
f92052d
feat(ppsci): support data_effient_nopt for inference
xiaoyewww Apr 6, 2025
e4e7975
feat(ppsci): support data_effient_nopt
xiaoyewww May 19, 2025
6bcaa21
feat(ppsci): support data_effient_nopt
xiaoyewww Jun 11, 2025
b16fdcc
feat(ppsci): support data_effient_nopt
xiaoyewww Jun 11, 2025
c33aae6
feat(ppsci): support data_effient_nopt
xiaoyewww Jun 12, 2025
f90dffc
feat(ppsci): support data_effient_nopt
xiaoyewww Jun 12, 2025
cb239e0
feat(ppsci): support data_effient_nopt
xiaoyewww Jun 13, 2025
325207e
feat(ppsci): support data_effient_nopt
xiaoyewww Jun 13, 2025
6763666
feat(ppsci): support data_effient_nopt
xiaoyewww Jun 14, 2025
14cdc71
feat(ppsci): support data_effient_nopt
xiaoyewww Jun 14, 2025
b301103
feat(ppsci): support data_effient_nopt
xiaoyewww Jun 15, 2025
826ab30
feat(ppsci): support data_effient_nopt
xiaoyewww Jun 15, 2025
af08cb1
feat(ppsci): support data_effient_nopt
xiaoyewww Jun 25, 2025
d9f6a11
fix
wangguan1995 Jun 29, 2025
9d31a08
Revert "fix"
xiaoyewww Jun 29, 2025
f3f3095
feat(ppsci): support data_effient_nopt
xiaoyewww Jun 29, 2025
2122529
feat(ppsci): support data_effient_nopt
xiaoyewww Jun 29, 2025
7eefdae
feat(ppsci): support data_effient_nopt
xiaoyewww Jun 29, 2025
220e895
feat(ppsci): remove redundant codes
xiaoyewww Jun 29, 2025
a57c461
feat(ppsci): remove redundant codes
xiaoyewww Jun 30, 2025
9930bcb
feat(ppsci): support data_effient_nopt
xiaoyewww Jun 30, 2025
a21443c
feat(ppsci): support data_effient_nopt
xiaoyewww Jun 30, 2025
ba5ced5
feat(ppsci): support data_effient_nopt
xiaoyewww Jul 1, 2025
d65862f
feat(ppsci): support data_effient_nopt
xiaoyewww Jul 9, 2025
27b75b6
feat(ppsci): support data_effient_nopt
xiaoyewww Jul 9, 2025
8d431fe
feat(ppsci): support data_effient_nopt
xiaoyewww Jul 10, 2025
5d858dc
feat(ppsci): support data_effient_nopt
xiaoyewww Jul 10, 2025
6a6a11a
feat(ppsci): support data_effient_nopt
xiaoyewww Jul 11, 2025
6727723
Update data_efficient_nopt.py
wangguan1995 Jul 14, 2025
d895b05
Update data_efficient_nopt_fno_poisson.yaml
wangguan1995 Jul 14, 2025
91e7f79
Update data_efficient_nopt.py
wangguan1995 Jul 14, 2025
a038f4e
Merge pull request #4 from wangguan1995/patch-6
xiaoyewww Jul 14, 2025
ae68110
feat(ppsci): support data_effient_nopt
xiaoyewww Jul 14, 2025
1 change: 1 addition & 0 deletions README.md
@@ -48,6 +48,7 @@ PaddleScience 是一个基于深度学习框架 PaddlePaddle 开发的科学计算
| 微分方程 | [若斯叻方程](https://paddlescience-docs.readthedocs.io/zh-cn/latest/zh/examples/rossler) | 数据驱动 | Transformer-Physx | 监督学习 | [Data](https://github.com/zabaras/transformer-physx) | [Paper](https://arxiv.org/abs/2010.03957) |
| 算子学习 | [DeepONet](https://paddlescience-docs.readthedocs.io/zh-cn/latest/zh/examples/deeponet) | 数据驱动 | MLP | 监督学习 | [Data](https://deepxde.readthedocs.io/en/latest/demos/operator/antiderivative_unaligned.html) | [Paper](https://export.arxiv.org/pdf/1910.03193.pdf) |
| 微分方程 | [梯度增强的物理知识融合 PDE 求解](https://github.com/PaddlePaddle/PaddleScience/blob/develop/examples/gpinn/poisson_1d.py) | 机理驱动 | gPINN | 无监督学习 | - | [Paper](https://doi.org/10.1016/j.cma.2022.114823) |
| 微分方程 | [PDE 求解](https://paddlescience-docs.readthedocs.io/zh-cn/latest/zh/examples/data_efficient_nopt) | 数据驱动 | FNO/Transformer | 无监督学习 | - | [Paper](https://arxiv.org/abs/2402.15734) |
| 积分方程 | [沃尔泰拉积分方程](https://paddlescience-docs.readthedocs.io/zh-cn/latest/zh/examples/volterra_ide) | 机理驱动 | MLP | 无监督学习 | - | [Project](https://github.com/lululxvi/deepxde/blob/master/examples/pinn_forward/Volterra_IDE.py) |
| 微分方程 | [分数阶微分方程](https://github.com/PaddlePaddle/PaddleScience/blob/develop/examples/fpde/fractional_poisson_2d.py) | 机理驱动 | MLP | 无监督学习 | - | - |
| 光孤子 | [Optical soliton](https://paddlescience-docs.readthedocs.io/zh-cn/latest/zh/examples/nlsmb) | 机理驱动 | MLP | 无监督学习 | - | [Paper](https://doi.org/10.1007/s11071-023-08824-w)|
1 change: 1 addition & 0 deletions docs/index.md
@@ -83,6 +83,7 @@
| 微分方程 | [若斯叻方程](./zh/examples/rossler.md) | 数据驱动 | Transformer-Physx | 监督学习 | [Data](https://github.com/zabaras/transformer-physx) | [Paper](https://arxiv.org/abs/2010.03957) |
| 算子学习 | [DeepONet](./zh/examples/deeponet.md) | 数据驱动 | MLP | 监督学习 | [Data](https://deepxde.readthedocs.io/en/latest/demos/operator/antiderivative_unaligned.html) | [Paper](https://export.arxiv.org/pdf/1910.03193.pdf) |
| 微分方程 | [梯度增强的物理知识融合 PDE 求解](https://github.com/PaddlePaddle/PaddleScience/blob/develop/examples/gpinn/poisson_1d.py) | 机理驱动 | gPINN | 无监督学习 | - | [Paper](https://doi.org/10.1016/j.cma.2022.114823) |
| 微分方程 | [PDE 求解](./zh/examples/data_efficient_nopt.md) | 数据驱动 | FNO/Transformer | 无监督学习 | - | [Paper](https://arxiv.org/abs/2402.15734) |
| 积分方程 | [沃尔泰拉积分方程](./zh/examples/volterra_ide.md) | 机理驱动 | MLP | 无监督学习 | - | [Project](https://github.com/lululxvi/deepxde/blob/master/examples/pinn_forward/Volterra_IDE.py) |
| 微分方程 | [分数阶微分方程](https://github.com/PaddlePaddle/PaddleScience/blob/develop/examples/fpde/fractional_poisson_2d.py) | 机理驱动 | MLP | 无监督学习 | - | - |
| 光孤子 | [Optical soliton](./zh/examples/nlsmb.md) | 机理驱动 | MLP | 无监督学习 | - | [Paper](https://doi.org/10.1007/s11071-023-08824-w)|
266 changes: 266 additions & 0 deletions docs/zh/examples/data_efficient_nopt.md

Large diffs are not rendered by default.

@@ -0,0 +1,325 @@
# general settings
mode: train
seed: 42

# training settings
run_name: r0
use_ddp: False
config: pois-64-pretrain-e1_20_m3
sweep_id: ''
logdir: exp
output_dir: ${hydra:run.dir}

train_config:
  default: &DEFAULT
    num_data_workers: 4
    # model
    model: 'fno'
    depth: 5
    in_dim: 2
    out_dim: 1
    dropout: 0
    # data/domain
    Lx: !!float 1.0
    Ly: !!float 1.0
    nx: 256
    ny: 256
    # optimization
    loss_style: 'mean'
    loss_func: 'mse'
    optimizer: 'adam'
    scheduler: 'none'
    learning_rate: !!float 1.0
    max_epochs: 500
    scheduler_epochs: 500
    weight_decay: 0
    batch_size: 25
    # misc
    log_to_screen: !!bool True
    save_checkpoint: !!bool False
    seed: 0
    plot_figs: !!bool False
    pack_data: !!bool False
    # Weights & Biases
    entity: 'entity_name'
    project: 'proj_name'
    group: 'poisson'
    log_to_wandb: !!bool False
    distill: !!bool False
    subsample: 1
    exp_dir: './exp/'
    tie_fields: !!bool False  # whether to use one embedding per field per dataset
    use_all_fields: !!bool True  # prepopulate the field metadata dictionary from the datasets
    tie_batches: !!bool False  # force everything in a batch to come from one dataset
    model_type: fno
    pretrained: False
    warmup_steps: 0
    epoch_size: 1
    accum_grad: 1
    enable_amp: !!bool False
    log_interval: 1
    checkpoint_save_interval: 1000
    debug_grad: False

  poisson: &poisson
    <<: *DEFAULT
    n_demos: 0
    batch_size: 512
    nx: 128
    ny: 128
    save_checkpoint: !!bool True
    max_epochs: 500
    scheduler: 'cosine'

    model: 'fno'
    layers: [64, 64, 64, 64, 64]
    modes1: [65, 65, 65, 65]
    modes2: [65, 65, 65, 65]
    fc_dim: 256

    in_dim: 4
    out_dim: 1
    mode_cut: 16
    embed_cut: 64
    fc_cut: 2

    optimizer: 'adam'

    learning_rate: 1E-3
    pack_data: !!bool False


  poisson-64-scale-e5_15: &poisson_64_e5_15
    <<: *poisson
    train_path: 'data/possion_64/poisson_64_e5_15_train.h5'
    val_path: 'data/possion_64/poisson_64_e5_15_val.h5'
    test_path: 'data/possion_64/poisson_64_e5_15_test.h5'
    scales_path: 'data/possion_64/poisson_64_e5_15_train_scale.npy'
    train_rand_idx_path: 'data/possion_64/train_rand_idx.npy'
    batch_size: 128
    log_to_wandb: !!bool False
    learning_rate: 1E-3

    mode_cut: 32
    embed_cut: 64
    fc_cut: 2
    subsample: 1
    nx: 64
    ny: 64

    pt: "train"
    pt_split: [46080, 8192]
    pretrained: False


  pois-64-pretrain-e1_20: &pois_64_e1_20_pt
    <<: *poisson
    train_path: 'data/possion_64/poisson_64_e1_20_train.h5'
    val_path: 'data/possion_64/poisson_64_e1_20_val.h5'
    test_path: 'data/possion_64/poisson_64_e1_20_test.h5'
    scales_path: 'data/possion_64/poisson_64_e1_20_train_scale.npy'
    train_rand_idx_path: 'data/possion_64/train_rand_idx.npy'
    batch_size: 128
    log_to_wandb: !!bool False
    mode_cut: 32
    embed_cut: 64
    fc_cut: 2
    subsample: 1
    nx: 64
    ny: 64
    learning_rate: 1E-3
    pt: "pretrain"
    pt_split: [46080, 8192]
    blur: [0, 1]


  pois_64_finetune_e5_15: &pois_64_e5_15_ft
    <<: *poisson
    train_path: 'data/possion_64/poisson_64_e5_15_train.h5'
    val_path: 'data/possion_64/poisson_64_e5_15_val.h5'
    test_path: 'data/possion_64/poisson_64_e5_15_test.h5'
    scales_path: 'data/possion_64/poisson_64_e5_15_train_scale.npy'
    train_rand_idx_path: 'data/possion_64/train_rand_idx.npy'
    batch_size: 128
    log_to_wandb: !!bool False
    mode_cut: 32
    embed_cut: 64
    fc_cut: 2
    subsample: 1
    nx: 64
    ny: 64
    learning_rate: 1E-3
    pt: "train"
    pt_split: [46080, 8192]
    fix_backbone: False
    resuming: False
    pretrained: True
    pretrained_ckpt_path: /pretrained_ckpt_path/training_checkpoints/ckpt.tar

  pois-64-e5_15_ft0: &pois_64_e5_15_ft0
    <<: *pois_64_e5_15_ft
    subsample: 1

  pois-64-e5_15_ft1: &pois_64_e5_15_ft1
    <<: *pois_64_e5_15_ft
    subsample: 2

  pois-64-e5_15_ft2: &pois_64_e5_15_ft2
    <<: *pois_64_e5_15_ft
    subsample: 4

  pois-64-e5_15_ft3: &pois_64_e5_15_ft3
    <<: *pois_64_e5_15_ft
    subsample: 8

  pois-64-e5_15_ft4: &pois_64_e5_15_ft4
    <<: *pois_64_e5_15_ft
    subsample: 16

  pois-64-e5_15_ft5: &pois_64_e5_15_ft5
    <<: *pois_64_e5_15_ft
    subsample: 32

  pois-64-e5_15_ft6: &pois_64_e5_15_ft6
    <<: *pois_64_e5_15_ft
    subsample: 64

  pois-64-e5_15_ft7: &pois_64_e5_15_ft7
    <<: *pois_64_e5_15_ft
    subsample: 128
    batch_size: 64

  pois-64-e5_15_ft8: &pois_64_e5_15_ft8
    <<: *pois_64_e5_15_ft
    subsample: 256
    batch_size: 32

  pois-64-e5_15_ft9: &pois_64_e5_15_ft9
    <<: *pois_64_e5_15_ft
    subsample: 512
    batch_size: 16

  pois-64-pretrain-e1_20_m0: &pois-64-e1_20_pt_m0
    <<: *pois_64_e1_20_pt
    mask_ratio: 0.

  pois-64-pretrain-e1_20_m1: &pois-64-e1_20_pt_m1
    <<: *pois_64_e1_20_pt
    mask_ratio: 0.1

  pois-64-pretrain-e1_20_m2: &pois-64-e1_20_pt_m2
    <<: *pois_64_e1_20_pt
    mask_ratio: 0.2

  pois-64-pretrain-e1_20_m3: &pois-64-e1_20_pt_m3
    <<: *pois_64_e1_20_pt
    mask_ratio: 0.3

  pois-64-pretrain-e1_20_m4: &pois-64-e1_20_pt_m4
    <<: *pois_64_e1_20_pt
    mask_ratio: 0.4

  pois-64-pretrain-e1_20_m5: &pois-64-e1_20_pt_m5
    <<: *pois_64_e1_20_pt
    mask_ratio: 0.5

  pois-64-pretrain-e1_20_m6: &pois-64-e1_20_pt_m6
    <<: *pois_64_e1_20_pt
    mask_ratio: 0.6

  pois-64-pretrain-e1_20_m7: &pois-64-e1_20_pt_m7
    <<: *pois_64_e1_20_pt
    mask_ratio: 0.7

  pois-64-pretrain-e1_20_m8: &pois-64-e1_20_pt_m8
    <<: *pois_64_e1_20_pt
    mask_ratio: 0.8

  pois-64-pretrain-e1_20_m9: &pois-64-e1_20_pt_m9
    <<: *pois_64_e1_20_pt
    mask_ratio: 0.9



  poisson-64-e5_15_bsln: &pois_64_e5_15_baseline
    <<: *poisson_64_e5_15

  # 8192
  poisson-64-e5_15_b0: &pois_64_e5_15_ss4
    <<: *pois_64_e5_15_baseline
    subsample: 1

  poisson-64-e5_15_b1: &pois_64_e5_15_ss8
    <<: *pois_64_e5_15_baseline
    subsample: 2

  poisson-64-e5_15_b2: &pois_64_e5_15_ss16
    <<: *pois_64_e5_15_baseline
    subsample: 4

  poisson-64-e5_15_b3: &pois_64_e5_15_ss32
    <<: *pois_64_e5_15_baseline
    subsample: 8

  poisson-64-e5_15_b4: &pois_64_e5_15_ss64
    <<: *pois_64_e5_15_baseline
    subsample: 16

  poisson-64-e5_15_b5: &pois_64_e5_15_ss128
    <<: *pois_64_e5_15_baseline
    subsample: 32

  poisson-64-e5_15_b6: &pois_64_e5_15_ss256
    <<: *pois_64_e5_15_baseline
    subsample: 64

  poisson-64-e5_15_b7: &pois_64_e5_15_ss512
    <<: *pois_64_e5_15_baseline
    subsample: 128
    batch_size: 64


# inference settings
ckpt_path: data/pd_finetune_b01_m0_n8192.tar
num_demos: 1
tqdm: False
save_pred: False

infer_config:
  train_path: 'data/possion_64/poisson_64_e15_50_train.h5'  # pick demos
  test_path: 'data/possion_64/poisson_64_e15_50_test.h5'
  scales_path: 'data/possion_64/poisson_64_e5_15_train_scale.npy'
  ckpt_path: data/possion_64/finetune_b01_m0_n8192.pdparams

  num_data_workers: 4
  subsample: 1
  num_demos: 0
  shuffle: False
  nx: 64
  nt: 64
  Lx: !!float 1.0
  Ly: !!float 1.0
  pack_data: !!bool False

  model: 'fno'
  layers: [64, 64, 64, 64, 64]
  modes1: [65, 65, 65, 65]
  modes2: [65, 65, 65, 65]
  fc_dim: 128

  in_dim: 4
  out_dim: 1
  mode_cut: 32
  embed_cut: 64
  fc_cut: 2
  dropout: 0

  fix_backbone: True

  loss_func: mse

  batch_size: 1
  loss_style: sum

  log_to_wandb: !!bool False
  logdir: ./log
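The experiment variants above are built with YAML anchors and merge keys: each named block (e.g. `pois-64-pretrain-e1_20_m3`, selected by the top-level `config:` key) merges `*pois_64_e1_20_pt`, which in turn merges `*poisson` and `*DEFAULT`, overriding only a few keys at each level. A minimal sketch of how such a config resolves when loaded with PyYAML, using a reduced stand-in for the full file rather than the file itself:

```python
import yaml

# Reduced stand-in config: DEFAULT -> poisson -> one masked-pretrain variant.
CFG = """
train_config:
  default: &DEFAULT
    model: 'fno'
    batch_size: 25
    mask_ratio: 0.0
  poisson: &poisson
    <<: *DEFAULT
    batch_size: 512
    scheduler: 'cosine'
  pois-64-pretrain-e1_20_m3:
    <<: *poisson
    batch_size: 128
    mask_ratio: 0.3
"""

configs = yaml.safe_load(CFG)["train_config"]
# Pick the variant the way the top-level `config:` key would.
params = configs["pois-64-pretrain-e1_20_m3"]
print(params["model"])      # inherited from DEFAULT -> fno
print(params["scheduler"])  # inherited from poisson -> cosine
print(params["batch_size"]) # overridden locally -> 128
```

PyYAML's `safe_load` resolves the `<<` merge keys at parse time, so each named block comes back as a flat dict with all inherited keys filled in.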
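The `ft0`–`ft9` and `b0`–`b7` ladders sweep `subsample` over powers of two against the same 46080-sample train split (`pt_split[0]`). Assuming `subsample: s` keeps every s-th training sample — an assumption about the repo's data loader, not something this config states — the effective fine-tuning set sizes work out as:

```python
TRAIN_SPLIT = 46080  # first entry of pt_split in the config above

for exp, subsample in enumerate([2 ** k for k in range(10)]):  # ft0 .. ft9
    # Hypothetical interpretation: stride-based subsampling of the train split.
    n_effective = TRAIN_SPLIT // subsample
    print(f"ft{exp}: subsample={subsample:<3d} -> {n_effective} samples")
```

Under that reading, the ladder spans roughly three orders of magnitude, from 46080 samples at `ft0` down to 90 at `ft9`, which matches the shrinking `batch_size` overrides on the smallest variants.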
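The `m0`–`m9` pretraining variants differ only in `mask_ratio`, the fraction of input values hidden during masked pretraining. A hypothetical NumPy sketch of that masking step — the actual strategy lives in the training code, not in this config, so treat the function below as illustrative only:

```python
import numpy as np

def mask_inputs(x: np.ndarray, mask_ratio: float, rng: np.random.Generator) -> np.ndarray:
    """Zero out a random mask_ratio fraction of grid points (illustrative sketch)."""
    keep = rng.random(x.shape) >= mask_ratio  # True where the value is kept
    return x * keep

rng = np.random.default_rng(42)
x = np.ones((64, 64))  # one 64x64 input field, matching nx/ny above
masked = mask_inputs(x, mask_ratio=0.3, rng=rng)
print(masked.mean())  # roughly 0.7 of the values survive on average
```

Sweeping `mask_ratio` from 0.0 to 0.9 then trades reconstruction difficulty against signal, which is the knob the `m0`–`m9` grid searches over.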