Commit 7091818 ("Update"), parent 2a033a1

3 files changed: +53 −30 lines

README.md: 53 additions & 30 deletions
@@ -1,38 +1,37 @@
 # GraphNet ![](https://img.shields.io/badge/version-v0.1-brightgreen) ![](https://img.shields.io/github/issues/PaddlePaddle/GraphNet?label=open%20issues) [![](https://img.shields.io/badge/Contribute%20to%20GraphNet-blue)](https://github.com/PaddlePaddle/GraphNet/issues/98)
 
 
-**GraphNet** is a large-scale dataset of deep learning **computation graphs**, designed to serve as a standard benchmark and training corpus for **AI-driven tensor compiler optimization**. It contains diverse graphs extracted from state-of-the-art models, enabling effective evaluation of compiler pass optimizations across frameworks and hardware platforms.
-
+**GraphNet** is a large-scale dataset of deep learning **computation graphs**, built as a standard benchmark for **tensor compiler** optimization. It provides 2.7K computation graphs extracted from state-of-the-art deep learning models spanning diverse tasks and ML frameworks. With standardized formats and rich metadata, GraphNet enables fair comparison, reproducible evaluation, and deeper research into the general optimization capabilities of tensor compilers.
+<br>
+<div align="center">
+  <img src="/pics/graphnet_overview.jpg" alt="GraphNet Architecture Overview" width="65%">
+</div>
 
 With GraphNet, users can:
-1. Quickly benchmark the optimization performance of various compiler strategies.
-2. Easily conduct regression tests on existing compilers.
-3. Train AI‑for‑Systems models to automatically generate compiler optimization passes.
+1. **Contribute new computation graphs** through the built-in automated extraction and validation pipeline.
+2. **Evaluate tensor compilers** on existing graphs with the integrated compiler evaluation tool, supporting multiple compiler backends.
+3. **Advance research** in tensor compiler optimization using the test data and statistics provided by GraphNet.
+
+
+
 
 **Vision**: We aim to achieve cross-hardware portability of compiler optimizations by allowing models to learn and transfer optimization strategies. This will significantly reduce the manual effort required to develop efficient operator implementations.
 
 
-### Dataset Construction Constraints:
+## Dataset Construction
+
+To guarantee the dataset’s overall quality, reproducibility, and cross-compiler compatibility, we define the following construction **constraints**:
+
 1. Dynamic graphs must execute correctly.
 2. Graphs and their corresponding Python code must support serialization and deserialization.
 3. The full graph can be decomposed into two disjoint subgraphs.
 4. Operator names within each computation graph must be statically parseable.
 5. If custom operators are used, their implementation code must be fully accessible.
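Constraint 4 can be illustrated with a small stdlib-only sketch: operator names are "statically parseable" when tooling can inventory them from the Python source alone, without executing the model. The helper below is hypothetical, not part of GraphNet's actual validation pipeline.

```python
import ast

def static_operator_names(source: str) -> list[str]:
    """Collect dotted call names (e.g. torch.relu) from model source
    without executing it -- a toy check in the spirit of constraint 4."""
    names = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            parts = []
            f = node.func
            # Unwind attribute chains like torch.nn.functional.relu.
            while isinstance(f, ast.Attribute):
                parts.append(f.attr)
                f = f.value
            if isinstance(f, ast.Name):
                parts.append(f.id)
            if parts:
                names.append(".".join(reversed(parts)))
    return names

sample = "y = torch.relu(torch.matmul(x, w))"
print(static_operator_names(sample))  # ['torch.relu', 'torch.matmul']
```

Because the check runs on source text rather than a live model, it works even when the framework or custom operators are not installed, which is exactly what makes static parseability useful for dataset-wide tooling.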
 
 
-## ⚡ Quick Start
+### Graph Extraction & Validation
 For full implementation details, please refer to the [Co-Creation Tutorial](https://github.com/PaddlePaddle/GraphNet/blob/develop/CONTRIBUTE_TUTORIAL.md#co-creation-tutorial).
-### Benchmark your compiler on the model:
-
-**graph_net.torch.test_compiler**
-```
-python3 -m graph_net.torch.test_compiler \
-    --model-path $GRAPH_NET_EXTRACT_WORKSPACE/model_name/ \
-    --compiler /path/to/custom/compiler
-# Note: if --compiler is omitted, PyTorch’s built-in compiler is used by default
-```
 
-### Contribute computation graphs to GraphNet:
 **Demo: Extract & Validate ResNet‑18**
 ```
 git clone https://github.com/PaddlePaddle/GraphNet.git
@@ -71,22 +70,46 @@ python -m graph_net.torch.validate \
     --model-path $GRAPH_NET_EXTRACT_WORKSPACE/model_name
 ```
 
-**graph_net.pack**
-```
-# Create a ZIP archive of $GRAPH_NET_EXTRACT_WORKSPACE.
-# The --clear-after-pack flag (True|False) determines whether to delete the workspace after packing.
-python -m graph_net.pack \
-    --output /path/to/output.zip \
-    --clear-after-pack True
-```
 
-Note: To configure your user details (username and email) for GraphNet, run:
+## Compiler Evaluation
+
+The compiler evaluation process takes a GraphNet sample as input and involves:
+1. Running the original model in eager mode to record a baseline.
+2. Compiling the model with the specified backend (e.g., CINN, TorchInductor, TVM).
+3. Executing the compiled model and collecting its runtime and outputs.
+4. Analyzing performance by comparing the compiled results against the baseline.
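The four steps above can be sketched as follows. This is a minimal stand-in with a scalar "model" and a no-op `compile_fn`; it is not GraphNet's actual harness (`graph_net.torch.test_compiler`).

```python
import time

def evaluate(model, inputs, compile_fn, rtol=1e-3):
    # Step 1: run the original model in eager mode and record a baseline.
    t0 = time.perf_counter()
    baseline_out = model(inputs)
    baseline_time = time.perf_counter() - t0

    # Step 2: compile the model with the specified backend
    # (stand-in for CINN / TorchInductor / TVM).
    compiled = compile_fn(model)

    # Step 3: execute the compiled model, collecting runtime and outputs.
    t0 = time.perf_counter()
    compiled_out = compiled(inputs)
    compiled_time = time.perf_counter() - t0

    # Step 4: compare against the baseline (correctness and speedup).
    correct = abs(compiled_out - baseline_out) <= rtol * abs(baseline_out)
    speedup = baseline_time / max(compiled_time, 1e-9)
    return {"correct": correct, "speedup": speedup}

# Toy usage: a doubling "model" and an identity "compiler".
result = evaluate(lambda x: x * 2.0, 3.0, lambda m: m)
print(result["correct"])  # True
```

The key design point mirrored here is that correctness is checked against the eager run, so a compiler cannot score well by producing fast but wrong outputs.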
+
+### Evaluation Metrics
+
+We define two key metrics: **rectified speedup** and **GraphNet Score**. Rectified speedup measures runtime performance while incorporating compilation success, time cost, and correctness. The GraphNet Score aggregates a compiler's rectified speedups across the specified tasks, providing a measure of its general optimization capability.
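To make the composition of the two metrics concrete, here is a sketch under assumed definitions (the text above does not give the exact formulas, so both the zeroing rule and the floored geometric mean are illustrative assumptions, not GraphNet's specification):

```python
import math

def rectified_speedup(baseline_time, compiled_time, compiled_ok, outputs_correct):
    """Assumed definition: raw speedup, rectified to 0 when compilation
    fails or outputs are wrong. Illustrative only."""
    if not (compiled_ok and outputs_correct):
        return 0.0
    return baseline_time / compiled_time

def graphnet_score(speedups, floor=0.1):
    """Assumed aggregation: geometric mean over samples, with a floor so
    failed samples (speedup 0) penalize but do not zero the score."""
    return math.exp(sum(math.log(max(s, floor)) for s in speedups) / len(speedups))

speedups = [rectified_speedup(2.0, 1.0, True, True),   # 2.0
            rectified_speedup(1.5, 1.0, True, True),   # 1.5
            rectified_speedup(1.0, 0.5, False, True)]  # 0.0 (compile failed)
print(round(graphnet_score(speedups), 3))  # 0.669
```

A geometric mean is a natural fit for aggregating speedups because it weights relative gains and losses symmetrically, unlike an arithmetic mean, which a single large speedup can dominate.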
+
+**Demo: benchmark your compiler on a model:**
+
 ```
-python -m graph_net.config --global \
-    --username "your-name" \
-    --email "your-email"
+python3 -m graph_net.torch.test_compiler \
+    --model-path $GRAPH_NET_EXTRACT_WORKSPACE/model_name/ \
+    --compiler /path/to/custom/compiler
+# Note: if --compiler is omitted, PyTorch’s built-in compiler is used by default
 ```
-Once you have packaged these extracted computation graphs, submit them to the GraphNet community via the following group chats.
+
+### Evaluation Results Example
+
+<div align="center">
+  <img src="/pics/Eval_result.jpg" alt="Violin plots of rectified speedup distributions" width="65%">
+</div>
+
+
+## Roadmap
+
+1. Scale GraphNet to 10K+ graphs.
+2. Further annotate GraphNet samples into more granular sub-categories.
+3. Extract samples from multi-GPU scenarios to support benchmarking and optimization for large-scale, distributed computing.
+4. Enable splitting full graphs into independently optimized subgraphs and operator sequences.
+
+## GraphNet Community
+
+You can join the GraphNet community via the following group chats.
 
 
 <div align="center">

pics/Eval_result.jpg (104 KB)

pics/graphnet_overview.jpg (546 KB)
