**GraphNet** is a large-scale dataset of deep learning **computation graphs**, built as a standard benchmark for **tensor compiler** optimization. It provides 2.7K computation graphs extracted from state-of-the-art deep learning models spanning diverse tasks and ML frameworks. With standardized formats and rich metadata, GraphNet enables fair comparison and reproducible evaluation of the general optimization capabilities of tensor compilers, thereby supporting advanced research such as applying AI-for-systems techniques to compilers (AI for Compiler).
<img src="/pics/Eval_result.png" alt="Violin plots of speedup distributions" width="65%">
</div>
With GraphNet, users can:
1. **Contribute new computation graphs** through the built-in automated extraction and validation pipeline.
2. **Evaluate tensor compilers** on existing graphs with the integrated compiler evaluation tool, supporting multiple compiler backends.
3. **Advance research** in tensor compiler optimization using the test data and statistics provided by GraphNet.
Compiler developers can use GraphNet samples to evaluate tensor compilers (e.g., CINN, TorchInductor, TVM) on target tasks. The figure above shows the speedup of two compilers (CINN and TorchInductor) across two tasks (CV and NLP).
## 🧱 Dataset Construction
To guarantee the dataset’s overall quality, reproducibility, and cross-compiler compatibility, we define the following construction **constraints**:
1. Computation graphs must be executable in imperative (eager) mode.
2. Computation graphs and their corresponding Python code must support serialization and deserialization.
3. The full graph can be decomposed into two disjoint subgraphs.
4. Operator names within each computation graph must be statically parseable.
5. If custom operators are used, their implementation code must be fully accessible.
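Constraint 4, for example, can be checked without running the graph: Python's standard `ast` module can recover every dotted call name from the extracted source. The sketch below is illustrative only and assumes operator calls appear as ordinary dotted calls (e.g. `torch.matmul(a, b)`); GraphNet's actual validator may work differently.

```python
import ast

def extract_call_names(source: str) -> list[str]:
    """Statically collect the dotted names of all function calls in `source`."""
    names = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            # unwind an attribute chain like torch.nn.functional.relu
            parts, target = [], node.func
            while isinstance(target, ast.Attribute):
                parts.append(target.attr)
                target = target.value
            if isinstance(target, ast.Name):
                parts.append(target.id)
                names.append(".".join(reversed(parts)))
    return names

names = extract_call_names("y = torch.nn.functional.relu(torch.matmul(a, b))")
print(names)  # includes 'torch.nn.functional.relu' and 'torch.matmul'
```

If this parse fails or yields unresolved call targets, the sample would violate the "statically parseable" constraint.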
### Graph Extraction & Validation
For full implementation details, please refer to the [Co-Creation Tutorial](https://github.com/PaddlePaddle/GraphNet/blob/develop/CONTRIBUTE_TUTORIAL.md#co-creation-tutorial).
We provide automated extraction and validation tools for constructing this dataset.
After running, the extracted graph will be saved to: `$GRAPH_NET_EXTRACT_WORKSPACE/model_name/`.
For more details, see the docstring of `graph_net.torch.extract`, defined in `graph_net/torch/extractor.py`.
**Step 2: graph_net.torch.validate**
To verify that an extracted model meets the requirements, we use `graph_net.torch.validate` in the CI pipeline and also ask contributors to self-check in advance.

All the **construction constraints** are examined automatically. After validation passes, a unique `graph_hash.txt` is generated and later checked in CI to avoid redundant samples.

## ⚖️ Compiler Evaluation

We define two key metrics: **rectified speedup** and **GraphNet Score**. Rectified speedup measures runtime performance while incorporating compilation success, time cost, and correctness. GraphNet Score aggregates the rectified speedup of a compiler on specified tasks, providing a measure of its general optimization capability.
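As a toy illustration of these two metrics, the sketch below assumes one plausible reading of the description: a sample whose compilation fails or whose outputs are wrong earns zero credit, and per-sample values are aggregated with a geometric mean. The function names and formulas here are illustrative assumptions, not GraphNet's official definitions.

```python
import math

def rectified_speedup(eager_ms, compiled_ms, compiled_ok, outputs_correct):
    """Illustrative only: fold compile success and correctness into speedup.
    GraphNet's real definition also accounts for compilation time cost."""
    if not (compiled_ok and outputs_correct):
        return 0.0  # no credit for a failed or incorrect compilation
    return eager_ms / compiled_ms

def graphnet_score(speedups):
    """Aggregate per-sample rectified speedups with a geometric mean
    (a common choice for ratio metrics; the official aggregation may differ)."""
    positive = [s for s in speedups if s > 0]
    if not positive:
        return 0.0
    # dividing by the full sample count penalizes failed samples (speedup 0)
    return math.exp(sum(math.log(s) for s in positive) / len(speedups))

print(rectified_speedup(10.0, 5.0, True, True))  # 2.0
```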
**Step 1: Benchmark**
We use `graph_net/benchmark_demo.sh` to benchmark GraphNet computation graph samples:
```bash
bash graph_net/benchmark_demo.sh &
```
The script runs `graph_net.torch.test_compiler` with specific batch and log configurations.
Or you can customize and use `graph_net.torch.test_compiler` yourself:

```bash
python3 -m graph_net.torch.test_compiler --compiler <your-backend>
# Note: if --compiler is omitted, PyTorch’s built-in compiler is used by default
```
After executing, `graph_net.torch.test_compiler` will:

1. Run the original model in eager mode to record a baseline.
2. Compile the model with the specified backend (e.g., CINN, TVM, Inductor, TensorRT, XLA, BladeDISC).
3. Execute the compiled model and collect its runtime and outputs.
4. Compute the speedup by comparing the compiled results against the baseline.
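These four steps can be sketched in a framework-agnostic way. `model_fn` and `compile_fn` below are hypothetical stand-ins for the real model and compiler backend, and output checking is reduced to a scalar tolerance test:

```python
import time

def evaluate(model_fn, compile_fn, inputs, tol=1e-6):
    # 1. run the original model in eager mode and record a baseline
    t0 = time.perf_counter()
    baseline_out = model_fn(inputs)
    eager_time = time.perf_counter() - t0

    # 2. compile the model with the chosen backend
    compiled_fn = compile_fn(model_fn)

    # 3. execute the compiled model, collecting runtime and outputs
    t0 = time.perf_counter()
    compiled_out = compiled_fn(inputs)
    compiled_time = time.perf_counter() - t0

    # 4. compare against the baseline: correctness first, then speedup
    correct = abs(compiled_out - baseline_out) <= tol
    speedup = eager_time / compiled_time if correct else 0.0
    return {"correct": correct, "speedup": speedup}

# toy stand-ins: a scalar "model" and an identity "compiler"
result = evaluate(lambda x: sum(i * i for i in range(x)), lambda f: f, 1000)
print(result["correct"])  # the identity "compiler" always matches the baseline
```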
**Step 2: Analysis**
After processing, we provide `graph_net/analysis.py` to generate [violin plots](https://en.m.wikipedia.org/wiki/Violin_plot) from the JSON results.
After executing, one summary plot of results across all compilers, as well as multiple sub-plots of results by category (model task, library, ...) for a single compiler, will be exported.
The script is designed to process a file structure like `/benchmark_path/compiler_name/category_name/` (for example, `/benchmark_logs/paddle/nlp/`), and items on the x-axis are identified by folder names. You can modify the `read_all_speedups` function to fit your own benchmark settings.
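As a sketch of what a `read_all_speedups`-style reader over that layout might do (the per-file JSON schema, including the top-level `speedup` field, is an assumption about the log format):

```python
import json
from pathlib import Path

def read_all_speedups(benchmark_path):
    """Collect speedups as {compiler: {category: [values]}} from
    benchmark_path/compiler_name/category_name/*.json logs.
    Assumes each log is a JSON object with a top-level "speedup" field."""
    results = {}
    for log in Path(benchmark_path).glob("*/*/*.json"):
        compiler, category = log.parent.parent.name, log.parent.name
        value = json.loads(log.read_text())["speedup"]
        results.setdefault(compiler, {}).setdefault(category, []).append(value)
    return results
```

The keys of the returned dict then supply the x-axis labels, which is why the folder names matter.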
## 📌 Roadmap
1. Scale GraphNet to 10K+ graphs.
2. Further annotate GraphNet samples into more granular sub-categories.
3. Extract samples from multi-GPU scenarios to support benchmarking and optimization for large-scale, distributed computing.
4. Enable splitting full graphs into independently optimized subgraphs and operator sequences.
**Vision**: GraphNet aims to lay the foundation for AI for Compiler by enabling **large-scale, systematic evaluation** of tensor compiler optimizations, and providing a **dataset for models to learn** and transfer optimization strategies.
## 💬 GraphNet Community
You can join our community via the following group chats. Feel free to ask any questions about using or building GraphNet.
<div align="center">
<table>
</table>
</div>
## 🪪 License
This project is released under the [MIT License](LICENSE).