Commit 65d2e9b

Update README.md
1 parent cc99c78 commit 65d2e9b

1 file changed (+17, -12 lines)

README.md

Lines changed: 17 additions & 12 deletions
**Example: How does GraphNet extract and construct a computation graph sample on PyTorch?**

<div align="center">
<img src="/pics/graphnet_sample.png" alt="GraphNet Extract Sample" width="65%">
</div>

* Source code of `custom_op` is required **only when** the corresponding operator is used in the module; **no specific format** is required.

**Step 1: graph_net.torch.extract**

Wrapping the model with `graph_net.torch.extract(name=model_name, dynamic=dynamic_mode)(model)` is all you need:

```python
import graph_net

# Instantiate the model (e.g. a torchvision model)
model = ...

# Extract your own model
model = graph_net.torch.extract(name="model_name", dynamic="True")(model)

# After running, the extracted graph will be saved to:
# $GRAPH_NET_EXTRACT_WORKSPACE/model_name/
```

For details, see the docstring of `graph_net.torch.extract` defined in `graph_net/torch/extractor.py`.
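
The `extract(name=...)(model)` shape above is a decorator-factory pattern: the first call configures the extractor, the second wraps the model. A minimal pure-Python sketch of that calling pattern (the `Extractor` class and its behavior here are hypothetical stand-ins, not GraphNet's actual implementation):

```python
class Extractor:
    """Toy stand-in illustrating the extract(name=...)(model) call shape."""

    def __init__(self, name, dynamic=False):
        self.name = name
        self.dynamic = dynamic

    def __call__(self, model):
        # The real extractor would hook the model here and capture its
        # computation graph on a forward pass; this toy version only
        # tags the wrapped model with the sample name.
        model.graph_net_name = self.name
        return model


def extract(name, dynamic=False):
    # First call configures; the returned object is then applied to the model.
    return Extractor(name, dynamic)
```

With this shape, `extract(name="model_name")(model)` hands back the same model object, ready to run as usual.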

**Step 2: graph_net.torch.validate**

To verify that the extracted model meets requirements, we use `graph_net.torch.validate` in our CI tooling and ask contributors to self-check in advance:

```bash
python -m graph_net.torch.validate \
    --model-path $GRAPH_NET_EXTRACT_WORKSPACE/model_name
```

**Step 1: Benchmark**

We use `graph_net/benchmark_demo.sh` to benchmark GraphNet computation graph samples:

```bash
bash graph_net/benchmark_demo.sh &
```

The script runs `graph_net.torch.test_compiler` with specific batch and log configurations.

Or you can customize and use `graph_net.torch.test_compiler` yourself:

```bash
python -m graph_net.torch.test_compiler \
    ...
# Note: if --compiler is omitted, PyTorch’s built-in compiler is used by default
```

After executing, `graph_net.torch.test_compiler` will:
1. Run the original model in eager mode to record a baseline.
2. Compile the model with the specified backend (e.g., CINN, TVM, Inductor, TensorRT, XLA, BladeDISC).
3. Execute the compiled model and collect its runtime and outputs.
4. Compute the speedup by comparing the compiled results against the baseline.
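
The baseline comparison in step 4 boils down to a ratio of average runtimes. A minimal sketch of that arithmetic (hypothetical helper names, pure stdlib, not the actual `test_compiler` internals):

```python
import time


def measure_runtime(fn, warmup=3, iters=10):
    """Average wall-clock time per call, after a few warmup runs."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters


def speedup(eager_fn, compiled_fn):
    """Values above 1.0 mean the compiled model beat the eager baseline."""
    return measure_runtime(eager_fn) / measure_runtime(compiled_fn)
```

For a real model this would be called as, e.g., `speedup(lambda: model(x), lambda: compiled_model(x))`, with device synchronization added around the timers when measuring on a GPU.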

**Step 2: Analysis**

After processing, we provide `graph_net/analysis.py` to generate a [violin plot](https://en.m.wikipedia.org/wiki/Violin_plot) based on the JSON results.

```bash
python -m graph_net.analysis \
    ...
```

After executing, one summary plot of results across all compilers, as well as multiple sub-plots of results by category (model task, library, ...) for a single compiler, will be exported.

The script is designed to process a file structure such as `/benchmark_path/compiler_name/category_name/` (for example `/benchmark_logs/paddle/nlp/`), and items on the x-axis are identified by the folder names, so you can modify the `read_all_speedups` function to fit your own benchmark settings.
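
Assuming each leaf folder holds JSON logs with a per-sample speedup value (the `speedup` field name and the `collect_speedups` helper below are hypothetical illustrations, not the actual `read_all_speedups`), the traversal looks roughly like:

```python
import json
from pathlib import Path


def collect_speedups(benchmark_path):
    """Map (compiler, category) -> list of speedups read from
    benchmark_path/compiler_name/category_name/*.json."""
    results = {}
    for compiler_dir in sorted(p for p in Path(benchmark_path).iterdir() if p.is_dir()):
        for category_dir in sorted(p for p in compiler_dir.iterdir() if p.is_dir()):
            values = []
            for log in sorted(category_dir.glob("*.json")):
                record = json.loads(log.read_text())
                values.append(record["speedup"])  # hypothetical field name
            results[(compiler_dir.name, category_dir.name)] = values
    return results
```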

## 📌 Roadmap

3. Extract samples from multi-GPU scenarios to support benchmarking and optimization for large-scale, distributed computing.
4. Enable splitting full graphs into independently optimized subgraphs and operator sequences.

**Vision**: GraphNet aims to lay the foundation for [ai4c](https://github.com/PaddlePaddle/ai4c) by **enabling large-scale, systematic evaluation** of tensor compiler optimizations, and by providing a dataset for **models to learn and transfer optimization strategies**.
## 💬 GraphNet Community