
【PPSCI Export&Infer No.8】Add export and inference for amgnet_cylinder #1165


Open · wants to merge 11 commits into base: develop
74 changes: 74 additions & 0 deletions docs/zh/examples/amgnet.md
Contributor

(image) These two commands cannot be used.

Author

I forgot to update this markdown file; it still uses an older version. I'll fix it later.

@@ -56,6 +56,22 @@
python amgnet_cylinder.py mode=eval EVAL.pretrained_model_path=https://paddle-org.bj.bcebos.com/paddlescience/models/amgnet/amgnet_cylinder_pretrained.pdparams
```

=== "Model export command"

=== "amgnet_cylinder"

``` sh
python amgnet_cylinder.py mode=export
```

=== "Python inference command"

=== "amgnet_cylinder"

``` sh
python amgnet_cylinder.py mode=infer
```

| Pretrained model | Metrics |
|:--| :--|
| [amgnet_airfoil_pretrained.pdparams](https://paddle-org.bj.bcebos.com/paddlescience/models/amgnet/amgnet_airfoil_pretrained.pdparams) | loss(RMSE_validator): 0.0001 <br> RMSE.RMSE(RMSE_validator): 0.01315 |
@@ -291,6 +307,64 @@ unzip data.zip
--8<--
```

### 3.9 Model Export and Inference

After training, the model can be exported to a static-graph format and deployed with the Python inference engine.

#### 3.9.1 Exporting the Model

We first implement an `export` function in `amgnet_cylinder.py`, which loads the trained model and saves it in the format required for inference.

``` py linenums="235"
--8<--
examples/amgnet/amgnet_cylinder.py:235:256
--8<--
```

Run the following command to perform the export:

```bash
python amgnet_cylinder.py mode=export
```

The exported model consists of `amgnet_cylinder.pdmodel` (model structure) and `amgnet_cylinder.pdiparams` (model weights), saved in the directory specified by `INFER.export_path` in the configuration file.
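As a quick sanity check after exporting, one can verify that both artifacts exist on disk. This is a minimal sketch; `exported_artifact_paths` is a hypothetical helper written here for illustration, not part of PaddleScience:

```python
import os


def exported_artifact_paths(export_path):
    # A Paddle static-graph export named `export_path` is expected to
    # produce a structure file (.pdmodel) and a weights file (.pdiparams).
    return [export_path + ".pdmodel", export_path + ".pdiparams"]


for path in exported_artifact_paths("./inference/amgnet_cylinder"):
    status = "found" if os.path.exists(path) else "missing"
    print(f"{path}: {status}")
```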

#### 3.9.2 Creating the Predictor

To run inference, we create a dedicated `AMGNPredictor` class in `deploy/python_infer/amgn_predictor.py`. It inherits from `ppsci.deploy.base_predictor.Predictor` and implements the core logic of loading the model and running predictions.

``` py linenums="28"
--8<--
examples/amgnet/deploy/python_infer/amgn_predictor.py:28:87
--8<--
```
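The batch-size guard at the top of `predict` can be isolated as a small helper. The sketch below is for illustration only (the `clamp_batch_size` name is ours, not part of the PR); it mirrors how the predictor falls back to a batch size of 1, since AMGNet consumes one graph per forward pass:

```python
import warnings


def clamp_batch_size(batch_size):
    # AMGNet processes a single graph at a time, so any larger
    # requested batch size is clamped to 1, with a warning.
    if batch_size > 1:
        warnings.warn(
            f"AMGNet predictor only supports batch_size=1, but got {batch_size}; "
            "falling back to 1."
        )
        return 1
    return batch_size


print(clamp_batch_size(64))  # prints 1 (and emits a warning)
print(clamp_batch_size(1))   # prints 1 (no warning)
```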

#### 3.9.3 Running Inference

Finally, we implement the `inference` function. It instantiates `AMGNPredictor`, loads the data, runs prediction in a loop, and visualizes the results.

``` py linenums="259"
--8<--
examples/amgnet/amgnet_cylinder.py:259:298
--8<--
```
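Before the predictor returns, it renames the raw Paddle inference outputs to the keys configured in `cfg.INFER.output_keys`, which is why the loop above can read `output_dict["pred"]`. That renaming step can be sketched on plain dictionaries (`map_output_keys` is a hypothetical helper, and the raw output name below is only an example of the auto-generated names Paddle can produce):

```python
def map_output_keys(raw_outputs, raw_names, store_keys):
    # Pair each configured storage key with the corresponding raw
    # output name, in order, and rebuild the dictionary under the
    # user-facing keys.
    return {
        store_key: raw_outputs[raw_name]
        for store_key, raw_name in zip(store_keys, raw_names)
    }


raw = {"save_infer_model/scale_0.tmp_0": [0.1, 0.2]}
print(map_output_keys(raw, list(raw), ["pred"]))  # prints {'pred': [0.1, 0.2]}
```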

Run inference with the following command:

```bash
python amgnet_cylinder.py mode=infer
```

#### 3.9.4 New Configuration

To support the features above, an `INFER` section needs to be added to `conf/amgnet_cylinder.yaml`.

``` yaml linenums="65"
--8<--
examples/amgnet/conf/amgnet_cylinder.yaml:65:68
--8<--
```
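For reference, the added `INFER` block takes roughly this shape (the values mirror this PR's defaults; treat the paths as examples to adapt to your environment):

```yaml
# inference settings
INFER:
  batch_size: 1  # AMGNet only supports batch_size=1
  pretrained_model_path: ${EVAL.pretrained_model_path}
  export_path: ./inference/amgnet_cylinder
```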

## 4. Complete Code

=== "airfoil"
84 changes: 84 additions & 0 deletions examples/amgnet/amgn_predictor.py
@@ -0,0 +1,84 @@
# Copyright (c) 2025 PaddlePaddle Authors. All Rights Reserved.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

# http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import annotations

from typing import TYPE_CHECKING
from typing import Dict

import numpy as np

from ppsci.deploy.base_predictor import Predictor
from ppsci.utils import logger

if TYPE_CHECKING:
import pgl
from omegaconf import DictConfig


class AMGNPredictor(Predictor):
    """Predictor for AMGNet model.

    Args:
        cfg (DictConfig): Configuration object.
    """

    def __init__(self, cfg: DictConfig):
        super().__init__(cfg)

    def predict(
        self,
        input_dict: Dict[str, "pgl.Graph"],
        batch_size: int = 64,
    ) -> Dict[str, np.ndarray]:
        """Predicts the output of the model for a given input.

        Args:
            input_dict (Dict[str, "pgl.Graph"]): Input data in a dictionary.
            batch_size (int, optional): Batch size for prediction. Defaults to 64.

        Returns:
            Dict[str, np.ndarray]: Predicted output in a dictionary.
        """
        # NOTE: AMGNet only supports batch_size=1
        if batch_size > 1:
            logger.warning(
                f"AMGNet predictor only supports batch_size=1, but got {batch_size}. "
                "Automatically setting batch_size to 1."
            )
            batch_size = 1

        output_dict = {}
        for key, graph in input_dict.items():
            input_names = self.predictor.get_input_names()
            for name in input_names:
                handle = self.predictor.get_input_handle(name)
                data = getattr(graph, name)
                handle.copy_from_cpu(data)

            self.predictor.run()
            output_names = self.predictor.get_output_names()
            for name in output_names:
                handle = self.predictor.get_output_handle(name)
                output = handle.copy_to_cpu()
                output_dict[name] = output

        # map raw output names to cfg.INFER.output_keys
        output_dict = {
            store_key: output_dict[infer_key]
            for store_key, infer_key in zip(
                self.output_keys, self.predictor.get_output_names()
            )
        }
        return output_dict
77 changes: 76 additions & 1 deletion examples/amgnet/amgnet_cylinder.py
@@ -23,6 +23,7 @@
import utils
from omegaconf import DictConfig
from paddle.nn import functional as F
from paddle.static import InputSpec

import ppsci
from ppsci.utils import logger
@@ -212,14 +213,88 @@ def evaluate(cfg: DictConfig):
)


def export(cfg: DictConfig):
    """Export the model for inference."""
    # set model
    model = ppsci.arch.AMGNet(**cfg.MODEL)

    # initialize solver
    solver = ppsci.solver.Solver(
        model,
        pretrained_model_path=cfg.INFER.pretrained_model_path,
    )

    # export model
    input_spec = [
        {
            "input": {
                "node_feat": InputSpec(
                    [None, cfg.MODEL.input_dim], "float32", name="node_feat"
                ),
                "edge_feat": InputSpec([None, 2], "float32", name="edge_feat"),
            }
        },
    ]
    solver.export(input_spec, cfg.INFER.export_path, skip_prune=True)


def inference(cfg: DictConfig):
    """Run inference with the exported model."""
    import amgn_predictor

    # initialize logger
    logger.init_logger("ppsci", osp.join(cfg.output_dir, "infer.log"), "info")

    # set model predictor
    predictor = amgn_predictor.AMGNPredictor(cfg)

    # set dataloader
    eval_dataloader_cfg = {
        "dataset": {
            "name": "MeshCylinderDataset",
            "input_keys": ("input",),
            "label_keys": ("label",),
            "data_dir": cfg.EVAL_DATA_DIR,
            "mesh_graph_path": cfg.EVAL_MESH_GRAPH_PATH,
        },
        "batch_size": 1,
        "sampler": {
            "name": "BatchSampler",
            "drop_last": False,
            "shuffle": False,
        },
    }
    eval_dataloader = ppsci.data.build_dataloader(**eval_dataloader_cfg)

    # run inference
    logger.message("Now running inference, please wait...")
    for index, (input_, label, _) in enumerate(eval_dataloader):
        output_dict = predictor.predict(input_, cfg.INFER.batch_size)
        truefield = label["label"].y
        utils.log_images(
            input_["input"].pos,
            output_dict["pred"],
            truefield,
            eval_dataloader.dataset.elems_list,
            index,
            "cylinder_infer",
        )


@hydra.main(version_base=None, config_path="./conf", config_name="amgnet_cylinder.yaml")
def main(cfg: DictConfig):
    if cfg.mode == "train":
        train(cfg)
    elif cfg.mode == "eval":
        evaluate(cfg)
    elif cfg.mode == "export":
        export(cfg)
    elif cfg.mode == "infer":
        inference(cfg)
    else:
        raise ValueError(
            f"cfg.mode should be in ['train', 'eval', 'export', 'infer'], but got '{cfg.mode}'"
        )


if __name__ == "__main__":
9 changes: 8 additions & 1 deletion examples/amgnet/conf/amgnet_cylinder.yaml
@@ -63,5 +63,12 @@ TRAIN:
# evaluation settings
EVAL:
  batch_size: 1
  # NOTE: The following path is a placeholder, please replace it with your own model path
  pretrained_model_path: https://paddle-org.bj.bcebos.com/paddlescience/models/amgnet/amgnet_cylinder_pretrained.pdparams
  eval_with_no_grad: true

# inference settings
INFER:
  batch_size: 1
  pretrained_model_path: ${EVAL.pretrained_model_path}
  export_path: ./inference/amgnet_cylinder