Commit 96785c6

Add Tech Report (#318)
* Update
* update
* Update README.md
* update
* add tech report
1 parent 03687e7 commit 96785c6

File tree

2 files changed: +11 −9 lines changed


GraphNet_technical_report.pdf (8.27 KB, binary file not shown)

README.md

Lines changed: 11 additions & 9 deletions
@@ -3,20 +3,19 @@
 
 <div align="center">
 
-![](https://img.shields.io/badge/version-v0.1-brightgreen)
 ![](https://img.shields.io/github/issues/PaddlePaddle/GraphNet?label=open%20issues)
-[![Documentation](https://img.shields.io/badge/documentation-blue)](./GraphNet_technical_report.pdf)
+[![arXiv](https://img.shields.io/badge/arXiv-2510.24035-b31b1b.svg)](https://arxiv.org/abs/2510.24035)
 <a href="https://github.com/user-attachments/assets/125e3494-25c9-4494-9acd-8ad65ca85d03"><img src="https://img.shields.io/badge/微信-green?logo=wechat&amp"></a>
 </div>
 
 **GraphNet** is a large-scale dataset of deep learning **computation graphs**, built as a standard benchmark for **tensor compiler** optimization. It provides over 2.7K computation graphs extracted from state-of-the-art deep learning models spanning diverse tasks and ML frameworks. With standardized formats and rich metadata, GraphNet enables fair comparison and reproducible evaluation of the general optimization capabilities of tensor compilers, thereby supporting advanced research such as AI for System on compilers.
 
 ## 📣 News
-- [2025-10-14] ✨ Our technical report is out: a detailed study of dataset construction and compiler benchmarking, introducing the novel performance metrics Speedup Score S(t) and Error-aware Speedup Score ES(t). [📘 GraphNet: A Large-Scale Computational Graph Dataset for Tensor Compiler Research](./GraphNet_technical_report.pdf)
+- [2025-10-14] ✨ Our technical report is out: a detailed study of dataset construction and compiler benchmarking, introducing the novel performance metrics Speedup Score S(t) and Error-aware Speedup Score ES(t). [📘 GraphNet: A Large-Scale Computational Graph Dataset for Tensor Compiler Research](https://arxiv.org/abs/2510.24035)
 - [2025-8-20] 🚀 The second round of [open contribution tasks](https://github.com/PaddlePaddle/Paddle/issues/74773) was released. (completed ✅)
 - [2025-7-30] 🚀 The first round of [open contribution tasks](https://github.com/PaddlePaddle/GraphNet/issues/44) was released. (completed ✅)
 ## 📊 Benchmark Results
-We evaluate two representative tensor compiler backends, CINN (PaddlePaddle) and TorchInductor (PyTorch), on GraphNet's NLP and CV subsets. The evaluation adopts two quantitative metrics proposed in the [Technical Report](./GraphNet_technical_report.pdf):
+We evaluate two representative tensor compiler backends, CINN (PaddlePaddle) and TorchInductor (PyTorch), on GraphNet's NLP and CV subsets. The evaluation adopts two quantitative metrics proposed in the [Technical Report](https://arxiv.org/abs/2510.24035):
 - **Speedup Score** S(t) — evaluates compiler performance under varying numerical tolerance levels.
 <div align="center">
 <img src="/pics/St-result.jpg" alt="Speedup Score S_t Results" width="80%">
@@ -136,10 +135,13 @@ GraphNet is released under the [MIT License](./LICENSE).
 If you find this project helpful, please cite:
 
 ```bibtex
-@misc{li2025graphnet,
-title = {GraphNet: A Large-Scale Computational Graph Dataset for Tensor Compiler Research},
-author = {Xinqi Li and Yiqun Liu and Shan Jiang and Enrong Zheng and Huaijin Zheng and Wenhao Dai and Haodong Deng and Dianhai Yu and Yanjun Ma},
-year = {2025},
-url = {https://github.com/PaddlePaddle/GraphNet/blob/develop/GraphNet_technical_report.pdf}
+@misc{li2025graphnetlargescalecomputationalgraph,
+title={GraphNet: A Large-Scale Computational Graph Dataset for Tensor Compiler Research},
+author={Xinqi Li and Yiqun Liu and Shan Jiang and Enrong Zheng and Huaijin Zheng and Wenhao Dai and Haodong Deng and Dianhai Yu and Yanjun Ma},
+year={2025},
+eprint={2510.24035},
+archivePrefix={arXiv},
+primaryClass={cs.LG},
+url={https://arxiv.org/abs/2510.24035},
 }
 ```
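The Speedup Score S(t) referenced in the diff is defined precisely in the technical report (arXiv:2510.24035); as a rough, hypothetical illustration of the idea of a tolerance-gated speedup metric (not the paper's exact formula), one might compute a geometric-mean speedup over the graphs whose numerical error stays within tolerance t, counting graphs that exceed the tolerance as no speedup:

```python
# Hypothetical sketch of a tolerance-gated speedup score.
# Assumption: the function name, the (eager_ms, compiled_ms, max_err)
# tuple format, and the "failing graphs count as 1.0x" penalty are
# illustrative choices, not the report's actual S(t) definition.
import math

def speedup_score(results, t):
    """Geometric-mean speedup over graphs whose max numerical error
    is within tolerance t; out-of-tolerance graphs contribute 1.0x."""
    logs = []
    for eager_ms, compiled_ms, max_err in results:
        if max_err <= t:
            logs.append(math.log(eager_ms / compiled_ms))
        else:
            logs.append(math.log(1.0))  # no credit for wrong results
    return math.exp(sum(logs) / len(logs))

results = [
    (10.0, 5.0, 1e-6),  # 2.0x speedup, tight numerical agreement
    (8.0, 4.0, 1e-3),   # 2.0x speedup, but larger numerical error
]
print(round(speedup_score(results, 1e-5), 3))  # prints 1.414
```

Sweeping t then traces out a curve like the S(t) plot shown in the README: a strict tolerance rewards only compilers that are both fast and numerically faithful, while a loose tolerance credits raw speed.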

0 commit comments
