
Commit ea1a47c

Update README (#262)
* Update
* update
1 parent c4c59de commit ea1a47c

File tree

3 files changed (+17, -32 lines)


README.md

Lines changed: 17 additions & 32 deletions
@@ -1,36 +1,36 @@
# GraphNet ![](https://img.shields.io/badge/version-v0.1-brightgreen) ![](https://img.shields.io/github/issues/PaddlePaddle/GraphNet?label=open%20issues) [![](https://img.shields.io/badge/Contribute%20to%20GraphNet-blue)](https://github.com/PaddlePaddle/GraphNet/issues/98)

-**GraphNet** is a large-scale dataset of deep learning **computation graphs**, built as a standard benchmark for **tensor compiler** optimization. It provides 2.7K computation graphs extracted from state-of-the-art deep learning models spanning diverse tasks and ML frameworks. With standardized formats and rich metadata, GraphNet enables fair comparison, reproducible evaluation, and deeper research into the general optimization capabilities of tensor compilers.
+**GraphNet** is a large-scale dataset of deep learning **computation graphs**, built as a standard benchmark for **tensor compiler** optimization. It provides 2.7K computation graphs extracted from state-of-the-art deep learning models spanning diverse tasks and ML frameworks. With standardized formats and rich metadata, GraphNet enables fair comparison and reproducible evaluation of the general optimization capabilities of tensor compilers, thereby supporting advanced research in AI for compilers (**AI4C**).

<br>
<div align="center">
-<img src="/pics/graphnet_overview.jpg" alt="GraphNet Architecture Overview" width="65%">
+<img src="/pics/Eval_result.png" alt="Violin plots of speedup distributions" width="65%">
</div>

-With GraphNet, users can:
-1. **Contribute new computation graphs** through the built-in automated extraction and validation pipeline.
-2. **Evaluate tensor compilers** on existing graphs with the integrated compiler evaluation tool, supporting multiple compiler backends.
-3. **Advance research** in tensor compiler optimization using the test data and statistics provided by GraphNet.
+Compiler developers can use GraphNet samples to evaluate tensor compilers (e.g., CINN, TorchInductor, TVM) on target tasks. The figure above shows the speedup of two compilers (CINN and TorchInductor) across two tasks (CV and NLP).

-**Vision**: We aim to achieve cross-hardware portability of compiler optimizations by allowing models to learn and transfer optimization strategies. It will significantly reduce the manual effort required to develop efficient operator implementations.

## Dataset Construction

To guarantee the dataset’s overall quality, reproducibility, and cross-compiler compatibility, we define the following construction **constraints**:

-1. Dynamic graphs must execute correctly.
-2. Graphs and their corresponding Python code must support serialization and deserialization.
+1. Computation graphs must be executable in imperative (eager) mode.
+2. Computation graphs and their corresponding Python code must support serialization and deserialization.
3. The full graph can be decomposed into two disjoint subgraphs.
4. Operator names within each computation graph must be statically parseable.
5. If custom operators are used, their implementation code must be fully accessible.

### Graph Extraction & Validation
-For full implementation details, please refer to the [Co-Creation Tutorial](https://github.com/PaddlePaddle/GraphNet/blob/develop/CONTRIBUTE_TUTORIAL.md#co-creation-tutorial).
+We provide automated extraction and validation tools for constructing this dataset.

+<div align="center">
+<img src="/pics/graphnet_overview.jpg" alt="GraphNet Architecture Overview" width="65%">
+</div>

**Demo: Extract & Validate ResNet‑18**
```
@@ -75,19 +75,9 @@ python -m graph_net.torch.validate \

## Compiler Evaluation

-The compiler evaluation process takes a GraphNet sample as input and involves:
-1. Running the original model in eager mode to record a baseline.
-2. Compiling the model with the specified backend (e.g., CINN, TorchInductor, TVM).
-3. Executing the compiled model and collecting its runtime and outputs.
-4. Analyzing performance by comparing the compiled results against the baseline.

-### Evaluation Metrics

-We define two key metrics here: **rectified speedup** and **GraphNet Score**. Rectified speedup measures runtime performance while incorporating compilation success, time cost, and correctness. GraphNet Score aggregates the rectified speedup of a compiler on specified tasks, providing a measure of its general optimization capability.

**Demo: How to benchmark your compiler on the model:**

-1. Benchmark
+**Step 1: Benchmark**

We use ```graph_net/benchmark_demo.sh``` to benchmark GraphNet computation graph samples:
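Conceptually, the benchmark step compares a compiled model against its eager baseline and checks that the outputs still agree. The snippet below is only a generic sketch of that measurement loop, not ```graph_net.torch.test_compiler``` or ```benchmark_demo.sh```; the ResNet-18 stand-in model and the iteration counts are illustrative assumptions.

```python
# Generic sketch of the eager-vs-compiled measurement loop (illustrative only;
# the real tooling is graph_net.torch.test_compiler / benchmark_demo.sh).
import time

import torch
import torchvision.models as models  # ResNet-18 used as a stand-in sample


def avg_runtime(fn, x, iters=50):
    for _ in range(3):  # warm-up; also triggers compilation for compiled callables
        fn(x)
    start = time.perf_counter()
    for _ in range(iters):
        fn(x)
    return (time.perf_counter() - start) / iters


model = models.resnet18().eval()
x = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    eager_time = avg_runtime(model, x)        # 1. eager baseline
    compiled = torch.compile(model)           # 2. compile (TorchInductor by default)
    compiled_time = avg_runtime(compiled, x)  # 3. run the compiled model
    max_abs_err = (model(x) - compiled(x)).abs().max().item()  # 4. compare outputs

print(f"speedup: {eager_time / compiled_time:.2f}x, max abs error: {max_abs_err:.2e}")
```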
@@ -107,7 +97,7 @@ python3 -m graph_net.torch.test_compiler \
# Note: if --compiler is omitted, PyTorch’s built-in compiler is used by default
```

-2. Analysis
+**Step 2: Analysis**

After processing, we provide ```graph_net/analysis.py``` to generate a [violin plot](https://en.m.wikipedia.org/wiki/Violin_plot) based on the JSON results.
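For orientation, here is a minimal sketch of the kind of per-compiler violin plot this analysis step produces. It is not ```graph_net/analysis.py``` itself, and the speedup values are made-up placeholders rather than GraphNet results.

```python
# Toy violin plot of per-sample speedups for two compilers (placeholder data).
import matplotlib.pyplot as plt

speedups = {
    "torchinductor": [0.9, 1.1, 1.3, 1.8, 2.2],  # hypothetical per-graph speedups
    "cinn": [1.0, 1.2, 1.4, 1.6, 2.5],           # hypothetical per-graph speedups
}

fig, ax = plt.subplots()
ax.violinplot(list(speedups.values()), showmedians=True)
ax.set_xticks(range(1, len(speedups) + 1))
ax.set_xticklabels(list(speedups.keys()))
ax.set_ylabel("speedup over eager baseline")
fig.savefig("speedup_violin.png")
```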
@@ -121,20 +111,15 @@ After executing, one summary plot of results on all compilers (as shown below in

The script is designed to process a file structure like ```/benchmark_path/compiler_name/category_name/``` (for example ```/benchmark_logs/paddle/nlp/```); the items on the x-axis are identified by the folder names, so you can modify the ```read_all_speedups``` function to match your own benchmark layout (see the sketch after this diff).

-### Evaluation Results Example
-<div align="center">
-<img src="/pics/Eval_result.png" alt="Violin plots of rectified speedup distributions" width="65%">
-</div>

## Roadmap

1. Scale GraphNet to 10K+ graphs.
2. Further annotate GraphNet samples into more granular sub-categories.
3. Extract samples from multi-GPU scenarios to support benchmarking and optimization for large-scale, distributed computing.
4. Enable splitting full graphs into independently optimized subgraphs and operator sequences.

+**Vision**: GraphNet aims to lay the foundation for AI4C by enabling large-scale, systematic evaluation of tensor compiler optimizations.

## GraphNet Community:
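To make the expected result layout concrete, here is an illustrative walker over the ```/benchmark_path/compiler_name/category_name/``` tree described above. It is not the actual ```read_all_speedups``` from ```graph_net/analysis.py```, and the ```"speedup"``` JSON key is an assumption, so adapt it to whatever fields your benchmark run actually writes.

```python
# Illustrative sketch (not graph_net/analysis.py): collect per-sample speedups
# from a /benchmark_path/compiler_name/category_name/ directory tree.
import json
from collections import defaultdict
from pathlib import Path


def read_all_speedups_sketch(benchmark_path):
    """Return {(compiler, category): [speedup, ...]} for every result JSON found."""
    speedups = defaultdict(list)
    for result_file in Path(benchmark_path).glob("*/*/*.json"):
        compiler = result_file.parent.parent.name  # e.g. "paddle"
        category = result_file.parent.name         # e.g. "nlp"
        record = json.loads(result_file.read_text())
        speedups[(compiler, category)].append(record["speedup"])  # assumed key name
    return dict(speedups)


if __name__ == "__main__":
    for (compiler, category), values in read_all_speedups_sketch("benchmark_logs").items():
        print(compiler, category, f"{len(values)} samples")
```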
pics/Eval_result.png

83.8 KB

pics/graphnet_overview.jpg

2.04 KB

0 commit comments
