
Commit 8155d71

[Doc] Update links of cooperation projects (#898)
* fix(test=document_fix)
* optimize importing
* Add links for the co-creation plan projects
1 parent: dbdb813

File tree (4 files changed: +28, -21 lines)

- docs/zh/cooperation.md
- mkdocs.yml
- ppsci/data/dataset/dgmr_dataset.py
- ppsci/data/dataset/enso_dataset.py


docs/zh/cooperation.md

Lines changed: 15 additions & 1 deletion
```diff
@@ -1,3 +1,17 @@
 # 共创计划
 
-PaddleScience 作为一个开源项目,欢迎来各行各业的伙伴携手共建基于飞桨的 AI for Science 领域顶尖开源项目, 打造活跃的前瞻性的 AI for Science 开源社区,建立产学研闭环,推动科研创新与产业赋能。点击了解 [飞桨AI for Science共创计划](https://www.paddlepaddle.org.cn/science)
+PaddleScience 作为一个开源项目,欢迎来各行各业的伙伴携手共建基于飞桨的 AI for Science 领域顶尖开源项目,打造活跃的前瞻性的 AI for Science 开源社区,建立产学研闭环,推动科研创新与产业赋能。点击了解 [飞桨 AI for Science 共创计划](https://www.paddlepaddle.org.cn/science)
+
+## 项目精选
+
+- 使用嵌套傅立叶神经算子进行实时高分辨二氧化碳地质封存预测: <https://aistudio.baidu.com/projectdetail/7390303>
+- 多源异构数据与机理融合的极端天气预报算法研究: <https://aistudio.baidu.com/projectdetail/7586532>
+- 基于强化学习的复杂系统控制 —— 以疾病传播: <https://aistudio.baidu.com/projectdetail/7520457>
+- 基于 Transformer 架构的流体流动降阶模拟: <https://aistudio.baidu.com/projectdetail/7509905>
+- 基于 Transformer 的神经算子预测模型: <https://aistudio.baidu.com/projectdetail/7309026>
+- 基于 PINN 方法求解可压缩流体欧拉方程组的正问题: <https://aistudio.baidu.com/projectdetail/7502148>
+- 基于连续演化数据预测双曲方程简断解: <https://aistudio.baidu.com/projectdetail/7620492>
+- 拉格朗日粒子流体 Benchmark 开源数据集: <https://aistudio.baidu.com/projectdetail/7507477>
+- 基于 PINN 方法求解可压缩流体欧拉方程组的正问题: <https://aistudio.baidu.com/projectdetail/7593837>
+- 数据驱动 AI 模型的 PDE 方程可解释性评估: <https://aistudio.baidu.com/projectdetail/7463477>
+- 数据驱动 AI 模型的 PDE 方程可解释性评估: <https://aistudio.baidu.com/projectdetail/7512749>
```

mkdocs.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -86,7 +86,7 @@ nav:
       - NowcastNet: zh/examples/nowcastnet.md
       - DGMR: zh/examples/dgmr.md
       - EarthFormer: zh/examples/earthformer.md
-  - API文档:
+  - API 文档:
       - ppsci:
           - ppsci.arch: zh/api/arch.md
           - ppsci.autodiff: zh/api/autodiff.md
```

ppsci/data/dataset/dgmr_dataset.py

Lines changed: 10 additions & 7 deletions
```diff
@@ -14,19 +14,15 @@
 
 from __future__ import annotations
 
+import importlib
 from typing import Tuple
 
 import numpy as np
-import paddle
 from numpy.random import default_rng
+from paddle import io
 
-try:
-    import datasets
-except ModuleNotFoundError:
-    pass
 
-
-class DGMRDataset(paddle.io.Dataset):
+class DGMRDataset(io.Dataset):
     """
     Dataset class for DGMR (Deep Generative Model for Radar) model.
     This open-sourced UK dataset has been mirrored to HuggingFace Datasets https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km.
@@ -59,6 +55,13 @@ def __init__(
         self.label_keys = label_keys
         self.num_input_frames = num_input_frames
         self.num_target_frames = num_target_frames
+        if not importlib.util.find_spec("datasets"):
+            raise ModuleNotFoundError(
+                "Please install datasets with `pip install datasets`"
+                " before exporting onnx model."
+            )
+        import datasets
+
         self.reader = datasets.load_dataset(
             dataset_path, "sample", split=split, streaming=True, trust_remote_code=True
         )
```
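
The change above replaces a module-level `try: import datasets ... except ModuleNotFoundError: pass` (which silently hid a missing package) with a deferred import that is checked only when the dataset is constructed. Below is a minimal, self-contained sketch of that optional-dependency pattern; the class name `LazyHFDataset` and its constructor arguments are illustrative, not the actual PaddleScience API.

```python
import importlib.util


class LazyHFDataset:
    """Hypothetical example of the lazy optional-import pattern used above."""

    def __init__(self, dataset_path: str):
        # Fail with an actionable message instead of silently swallowing the
        # error, as the removed module-level try/except did.
        if importlib.util.find_spec("datasets") is None:
            raise ModuleNotFoundError(
                "Please install datasets with `pip install datasets`."
            )
        import datasets  # imported only when an instance is actually created

        # Stream the dataset instead of downloading it all up front.
        self.reader = datasets.load_dataset(
            dataset_path, split="train", streaming=True
        )
```

Deferring the import keeps `import ppsci` usable for people who never touch this dataset, while those who do need it get an install hint rather than a late `NameError` at the first use of `datasets`.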

ppsci/data/dataset/enso_dataset.py

Lines changed: 2 additions & 12 deletions
```diff
@@ -15,20 +15,14 @@
 from __future__ import annotations
 
 import importlib
+from pathlib import Path
 from typing import Dict
 from typing import Optional
 from typing import Tuple
 
 import numpy as np
 from paddle import io
 
-try:
-    from pathlib import Path
-
-    import xarray as xr
-except ModuleNotFoundError:
-    pass
-
 NINO_WINDOW_T = 3  # Nino index is the sliding average over sst, window size is 3
 CMIP6_SST_MAX = 10.198975563049316
 CMIP6_SST_MIN = -16.549121856689453
@@ -146,6 +140,7 @@ def read_raw_data(ds_dir, out_dir=None):
         out_dir (str): the path of output. Defaults to None.
 
     """
+    import xarray as xr
 
     train_cmip = xr.open_dataset(Path(ds_dir) / "CMIP_train.nc").transpose(
         "year", "month", "lat", "lon"
@@ -275,11 +270,6 @@ def __init__(
                 "To use RadarDataset, please install 'xarray' via: `pip install "
                 "xarray` first."
             )
-        if importlib.util.find_spec("pathlib") is None:
-            raise ModuleNotFoundError(
-                "To use RadarDataset, please install 'pathlib' via: `pip install "
-                "pathlib` first."
-            )
         self.input_keys = input_keys
         self.label_keys = label_keys
         self.data_dir = data_dir
```
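
In `enso_dataset.py`, the optional `xarray` dependency is now imported inside the one function that needs it, and the `pathlib` guard is dropped because `pathlib` ships with the Python standard library and cannot be missing. A small illustrative sketch of the function-local import follows; the helper name `open_transposed` is hypothetical, and the dimension names simply mirror the hunk above.

```python
from pathlib import Path


def open_transposed(ds_dir: str, filename: str):
    """Illustrative sketch of a function-local import of an optional package."""
    # xarray is only required by callers of this helper, so import it here; if
    # the package is missing, a ModuleNotFoundError is raised at call time.
    import xarray as xr

    return xr.open_dataset(Path(ds_dir) / filename).transpose(
        "year", "month", "lat", "lon"
    )
```

The error message kept as context in the last hunk shows that `__init__` still performs an explicit install check for `xarray`, so dataset users continue to get a clear message before any file is opened.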
