**GraphNet** is a large-scale dataset of deep learning **computation graphs**, built as a standard benchmark for **tensor compiler** optimization. It provides 2.7K computation graphs extracted from state-of-the-art deep learning models spanning diverse tasks and ML frameworks. With standardized formats and rich metadata, GraphNet enables fair comparison and reproducible evaluation of the general optimization capabilities of tensor compilers, thereby supporting advanced research such as AI-for-systems work on compilers ([**ai4c**](https://github.com/PaddlePaddle/ai4c)).
Compiler developers can use GraphNet samples to evaluate tensor compilers (e.g., CINN, TorchInductor, TVM) on target tasks. The figure above shows the speedup of two compilers (CINN and TorchInductor) across two tasks (CV and NLP).
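The speedup reported for each computation graph is the ratio of uncompiled to compiled execution time. A minimal timing sketch of that measurement is shown below; `eager_fn` and `compiled_fn` are hypothetical placeholders for the baseline and compiler-optimized versions of the same graph (e.g., with TorchInductor, `compiled_fn` could come from `torch.compile`), not part of the GraphNet API.

```python
import timeit

def measure_speedup(eager_fn, compiled_fn, args=(), repeat=10):
    """Return eager_time / compiled_time for one computation graph.

    Best-of-N wall-clock time is used for each variant, since the minimum
    is less noisy than the mean for short microbenchmarks.
    """
    eager = min(timeit.repeat(lambda: eager_fn(*args), number=1, repeat=repeat))
    compiled = min(timeit.repeat(lambda: compiled_fn(*args), number=1, repeat=repeat))
    return eager / compiled
```

A value above 1.0 means the compiled graph ran faster than the eager baseline.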
## 🧱 Dataset Construction
To guarantee the dataset’s overall quality, reproducibility, and cross-compiler compatibility, we define the following construction **constraints**:
4. Operator names within each computation graph must be statically parseable.
5. If custom operators are used, their implementation code must be fully accessible.
### Graph Extraction & Validation
We provide automated extraction and validation tools for constructing this dataset.
After execution, one summary plot of results across all compilers (as shown in "Evaluation Results Example"), as well as multiple sub-plots of per-category results (model tasks, Library...) for each single compiler, will be exported.
The script is designed to process a file structure of the form ```/benchmark_path/compiler_name/category_name/``` (for example, ```/benchmark_logs/paddle/nlp/```), and the items on the x-axis are identified by the folder names. You can modify the ```read_all_speedups``` function to fit your own benchmark settings.
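The directory walk described above can be sketched as follows. The `speedup.txt` file name and one-float-per-line log format are assumptions for illustration, not the script's actual output layout; adapt them to your benchmark logs.

```python
from pathlib import Path

def read_all_speedups(benchmark_path):
    """Walk /benchmark_path/compiler_name/category_name/ and collect speedups.

    Assumes each category folder holds a 'speedup.txt' with one float per
    line -- a hypothetical log format; adjust to the real benchmark output.
    Returns {(compiler, category): [speedup, ...]}.
    """
    results = {}
    for compiler_dir in sorted(p for p in Path(benchmark_path).iterdir() if p.is_dir()):
        for category_dir in sorted(p for p in compiler_dir.iterdir() if p.is_dir()):
            log = category_dir / "speedup.txt"
            if log.exists():
                results[(compiler_dir.name, category_dir.name)] = [
                    float(line) for line in log.read_text().split()
                ]
    return results
```

Keying the results by `(compiler, category)` mirrors the folder hierarchy, so the plotting code can group x-axis items by folder name directly.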
## 💬 GraphNet Community
You can join the GraphNet community via the following group chats.
<div align="center">
<table>
<tr>
</table>
</div>
## 🪪 License
This project is released under the [MIT License](LICENSE).