Commit 4692305 (parent 7621835)

Update Readme (#264)

* Update README.md
* Add files via upload

Co-authored-by: Shan Jiang <[email protected]>

2 files changed: +26 −13 lines changed

README.md (26 additions, 13 deletions)
@@ -43,43 +43,56 @@ python -m graph_net.torch.validate \
  --model-path $GRAPH_NET_EXTRACT_WORKSPACE/resnet18/
```

**Illustration: How does GraphNet extract and construct a computation graph sample on PyTorch?**

<div align="center">
<img src="/pics/graphnet_sample.png" alt="GraphNet Extract Sample" width="65%">
</div>

* Source code of a custom_op is required **only when** the corresponding operator is used in the module, and **no specific format** is required.
**Step 1: graph_net.torch.extract**

Wrapping the model with `graph_net.torch.extract(name=model_name, dynamic=dynamic_mode)(model)` is all you need:
```python
import graph_net

# Instantiate the model (e.g. a torchvision model)
model = ...

# Extract your own model
model = graph_net.torch.extract(name="model_name", dynamic="True")(model)
```
After running, the extracted graph will be saved to `$GRAPH_NET_EXTRACT_WORKSPACE/model_name/`.

For more details, see the docstring of `graph_net.torch.extract` defined in `graph_net/torch/extractor.py`.
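The decorator-style call above can be illustrated with a toy stand-in. The `extract` below is **not** GraphNet's implementation, only a sketch of the `extract(name=...)(model)` calling convention, where the returned wrapper forwards calls to the original model:

```python
# Toy stand-in for the wrapper pattern used by graph_net.torch.extract:
# extract(name=...) returns a callable that wraps the model and records
# each forward invocation.  NOT the real implementation.
def extract(name, dynamic=False):
    def wrap(model):
        def wrapped(*args, **kwargs):
            # A real extractor would capture the computation graph here.
            wrapped.calls.append(name)
            return model(*args, **kwargs)
        wrapped.calls = []
        return wrapped
    return wrap

model = extract(name="model_name")(lambda x: x * 2)
print(model(3))      # the forward pass still works as before
print(model.calls)   # one capture was recorded
```

The point of this shape is that wrapping is transparent: downstream code keeps calling `model(...)` exactly as it did before extraction was enabled.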
**Step 2: graph_net.torch.validate**

To verify that the extracted model meets the requirements, we use `graph_net.torch.validate` in the CI pipeline, and also ask contributors to self-check in advance:
```bash
python -m graph_net.torch.validate \
    --model-path $GRAPH_NET_EXTRACT_WORKSPACE/model_name
```

All the **construction constraints** will be examined automatically. After passing validation, a unique `graph_hash.txt` will be generated and later checked in the CI procedure to avoid redundant samples.
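The deduplication idea behind `graph_hash.txt` can be sketched with a plain content hash. This is an illustrative sketch, not GraphNet's actual hashing scheme: a stable hash over a canonicalized graph source lets CI skip samples it has already seen.

```python
# Illustrative sketch (NOT GraphNet's actual scheme): a stable content
# hash over a graph's source text enables duplicate detection in CI.
import hashlib

def graph_hash(graph_source: str) -> str:
    # Normalize per-line whitespace so cosmetic edits keep the same hash.
    canonical = "\n".join(line.strip() for line in graph_source.splitlines())
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

seen = set()
h = graph_hash("relu(conv(x))")
print(h in seen)    # first submission: not a duplicate
seen.add(h)
print(graph_hash("  relu(conv(x))  ") in seen)  # whitespace-only change: duplicate
```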
## ⚖️ Compiler Evaluation

**Step 1: Benchmark**

We use `graph_net/benchmark_demo.sh` to benchmark GraphNet computation graph samples:

```bash
bash graph_net/benchmark_demo.sh &
```
The script runs `graph_net.torch.test_compiler` with specific batch and log configurations.

Alternatively, you can customize and use `graph_net.torch.test_compiler` yourself:

```bash
python -m graph_net.torch.test_compiler \
@@ -93,15 +106,15 @@ python -m graph_net.torch.test_compiler \
# Note: if --compiler is omitted, PyTorch’s built-in compiler is used by default
```

After executing, `graph_net.torch.test_compiler` will:
1. Run the original model in eager mode to record a baseline.
2. Compile the model with the specified backend (e.g., CINN, TVM, Inductor, TensorRT, XLA, BladeDISC).
3. Execute the compiled model and collect its runtime and outputs.
4. Compute the speedup by comparing the compiled results against the baseline.
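The speedup in step 4 boils down to a ratio of the two runtimes. A minimal sketch, with made-up numbers rather than real measurements:

```python
# Hedged sketch of step 4: deriving a speedup from the eager baseline
# (step 1) and the compiled runtime (step 3).  Numbers are illustrative.
baseline_ms = 12.4   # eager-mode runtime recorded as the baseline
compiled_ms = 8.0    # runtime of the compiled model

speedup = baseline_ms / compiled_ms
print(f"speedup: {speedup:.2f}x")  # values > 1.0 mean the compiler helped
```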

**Step 2: Analysis**
After processing, we provide `graph_net/analysis.py` to generate [violin plots](https://en.m.wikipedia.org/wiki/Violin_plot) based on the JSON results.

```bash
python -m graph_net.analysis \
@@ -111,7 +124,7 @@ python -m graph_net.analysis \
111124

112125
After executing, one summary plot of results on all compilers, as well as multiple sub-plots of results in categories (model tasks, Library...) on a single compiler will be exported.
113126

114-
The script is designed to process a file structure as ```/benchmark_path/compiler_name/category_name/``` (for example ```/benchmark_logs/paddle/nlp/```), and items on x-axis are identified by name of the folders. So you can modify ```read_all_speedups``` function to fit the benchmark settings on your demand.
127+
The script is designed to process a file structure as `/benchmark_path/compiler_name/category_name/` (for example `/benchmark_logs/paddle/nlp/`), and items on x-axis are identified by name of the folders. So you can modify `read_all_speedups` function to fit the benchmark settings on your demand.
115128
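The expected layout amounts to a two-level directory walk. The `read_layout` function below is a hypothetical stand-in for the traversal inside `read_all_speedups`, not the script's actual code; it just shows how the compiler and category names come from the folder structure:

```python
# Hypothetical sketch (NOT the real read_all_speedups): walk the
# benchmark_path/compiler_name/category_name/ layout the script expects
# and collect the JSON result files found in each category folder.
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def read_layout(benchmark_path):
    layout = {}
    for compiler_dir in sorted(Path(benchmark_path).iterdir()):
        if compiler_dir.is_dir():
            layout[compiler_dir.name] = {
                cat.name: sorted(p.name for p in cat.glob("*.json"))
                for cat in sorted(compiler_dir.iterdir()) if cat.is_dir()
            }
    return layout

# Build a tiny example tree: <tmp>/paddle/nlp/bert.json
with TemporaryDirectory() as tmp:
    nlp = Path(tmp) / "paddle" / "nlp"
    nlp.mkdir(parents=True)
    (nlp / "bert.json").write_text(json.dumps({"speedup": 1.3}))
    layout = read_layout(tmp)

print(layout)  # {'paddle': {'nlp': ['bert.json']}}
```

Adapting the analysis to a different benchmark layout then means changing only this traversal, not the plotting code.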

## 📌 Roadmap
@@ -120,7 +133,7 @@ The script is designed to process a file structure as ```/benchmark_path/compile
3. Extract samples from multi-GPU scenarios to support benchmarking and optimization for large-scale, distributed computing.
4. Enable splitting full graphs into independently optimized subgraphs and operator sequences.

**Vision**: GraphNet aims to lay the foundation for [ai4c](https://github.com/PaddlePaddle/ai4c) by enabling **large-scale, systematic evaluation** of tensor compiler optimizations, and by providing a **dataset for models to learn** and transfer optimization strategies.

## 💬 GraphNet Community
pics/graphnet_sample.png (461 KB)