[Technical Report](./GraphNet_technical_report.pdf)
<a href="https://github.com/user-attachments/assets/125e3494-25c9-4494-9acd-8ad65ca85d03"><img src="https://img.shields.io/badge/微信-green?logo=wechat&"></a>
</div>

**GraphNet** is a large-scale dataset of deep learning **computation graphs**, built as a standard benchmark for **tensor compiler** optimization. It provides over 2.7K computation graphs extracted from state-of-the-art deep learning models spanning diverse tasks and ML frameworks. With standardized formats and rich metadata, GraphNet enables fair comparison and reproducible evaluation of the general optimization capabilities of tensor compilers, thereby supporting advanced research such as AI for Systems work on compilers.
## News
- [2025-8-20] 🚀 The second round of [open contribution tasks](https://github.com/PaddlePaddle/Paddle/issues/74773) was released. (completed ✅)
- [2025-7-30] 🚀 The first round of [open contribution tasks](https://github.com/PaddlePaddle/GraphNet/issues/44) was released. (completed ✅)

## Benchmark Results
We evaluate two representative tensor compiler backends, CINN (PaddlePaddle) and TorchInductor (PyTorch), on GraphNet's NLP and CV subsets. The evaluation adopts two quantitative metrics proposed in the [Technical Report](./GraphNet_technical_report.pdf):
- **Speedup Score** S(t): evaluates compiler performance under varying numerical tolerance levels (a toy illustration follows the figure below).
<div align="center">
<img src="/pics/St-result.jpg" alt="Speedup Score S_t Results" width="80%">
</div>
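The precise definition of S(t) lives in the [Technical Report](./GraphNet_technical_report.pdf); the toy sketch below only conveys the idea of a tolerance-gated speedup. The sample format, the geometric-mean aggregation, and counting failing samples as a speedup of 1.0 are illustrative assumptions, not the report's formula.

```python
import math

def toy_speedup_score(samples, tolerance):
    """Illustrative tolerance-gated speedup score (not the official S(t))."""
    ratios = []
    for s in samples:
        if s["max_error"] <= tolerance:
            # Output matches the eager baseline at this tolerance: count the speedup.
            ratios.append(s["eager_time"] / s["compiled_time"])
        else:
            # Fails the numerical check: treated here as no speedup.
            ratios.append(1.0)
    # Geometric mean keeps a few extreme samples from dominating the score.
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

samples = [
    {"eager_time": 10.0, "compiled_time": 5.0, "max_error": 1e-6},
    {"eager_time": 9.0, "compiled_time": 3.0, "max_error": 2e-3},
]
print(toy_speedup_score(samples, tolerance=1e-4))  # tight tolerance: second sample fails
print(toy_speedup_score(samples, tolerance=1e-2))  # loose tolerance: both samples pass
```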
The analysis scripts (e.g. `graph_net.S_analysis`) expect a file structure of the form `/benchmark_path/category_name/`, and the items on the x-axis are identified by the names of the sub-directories. After execution, several summary plots grouped by category (model task, library, ...) are exported to `$GRAPH_NET_BENCHMARK_PATH`.
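As a concrete picture of that layout, the snippet below collects the category names (the sub-directory names that become x-axis labels) under `$GRAPH_NET_BENCHMARK_PATH`. The helper name and the example categories are assumptions for illustration, not part of the analysis scripts.

```python
import os

def list_categories(benchmark_path):
    """Return sub-directory names, e.g. ["cv", "nlp"], used as x-axis labels."""
    return sorted(
        entry
        for entry in os.listdir(benchmark_path)
        if os.path.isdir(os.path.join(benchmark_path, entry))
    )

if __name__ == "__main__":
    # Expects a layout such as $GRAPH_NET_BENCHMARK_PATH/nlp/, $GRAPH_NET_BENCHMARK_PATH/cv/, ...
    print(list_categories(os.environ["GRAPH_NET_BENCHMARK_PATH"]))
```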
### 🧱 Contribute More Samples

GraphNet provides automated tools for graph extraction and validation.

<div align="center">
<img src="/pics/graphnet_overview.jpg" alt="GraphNet Architecture Overview" width="65%">
</div>

**Demo: Extract & Validate ResNet-18**
```bash
git clone https://github.com/PaddlePaddle/GraphNet.git
cd GraphNet

# Set your workspace directory
export GRAPH_NET_EXTRACT_WORKSPACE=/home/yourname/graphnet_workspace/

# Extract the ResNet-18 computation graph
python graph_net/test/vision_model_test.py

# Validate the extracted graph (e.g. /home/yourname/graphnet_workspace/resnet18/)
python -m graph_net.torch.validate \
    --model-path $GRAPH_NET_EXTRACT_WORKSPACE/resnet18/
```
**Illustration – Extraction Workflow**

<div align="center">
<img src="/pics/dataset_composition.png" alt="GraphNet Extract Sample" width="65%">
</div>

* Source code for a custom_op is required **only when** the corresponding operator is used in the module, and **no specific format** is required.

**Step 1: graph_net.torch.extract**

Wrapping the model with the extractor is all you need:
```python
import graph_net

# Instantiate the model (e.g. a torchvision model)
model = ...

# Extract your own model
model = graph_net.torch.extract(name="model_name", dynamic="True")(model)
```

After running, the extracted graph will be saved to `$GRAPH_NET_EXTRACT_WORKSPACE/model_name/`.
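As a fuller sketch of Step 1, the example below wraps a torchvision ResNet-18 (mirroring the demo above) and runs one forward pass. The model choice, the input shape, and the assumption that a single call on sample input triggers the capture are illustrative, not a statement of the extractor's exact contract.

```python
import torch
import torchvision
import graph_net

# Any torch.nn.Module works the same way; ResNet-18 matches the demo above.
model = torchvision.models.resnet18(weights=None)

# Wrap the model; the graph should end up under $GRAPH_NET_EXTRACT_WORKSPACE/resnet18/.
model = graph_net.torch.extract(name="resnet18", dynamic="True")(model)

# One forward pass on sample input so the computation graph can be captured.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    _ = model(x)
```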
For more details, see the docstring of `graph_net.torch.extract` in `graph_net/torch/extractor.py`.

**Step 2: graph_net.torch.validate**

To verify that the extracted model meets the requirements, we run `graph_net.torch.validate` in CI and also ask contributors to self-check in advance:
```bash
python -m graph_net.torch.validate \
    --model-path $GRAPH_NET_EXTRACT_WORKSPACE/model_name
```

All the **construction constraints** are checked automatically. Once validation passes, a unique `graph_hash.txt` is generated and later checked in the CI procedure to avoid duplicate samples.
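As an intuition for how a content hash enables that dedup check, here is a hypothetical sketch that hashes the files of an extracted model directory. The function name and hashing scheme are assumptions for illustration only; the actual contents of `graph_hash.txt` are defined by the validator.

```python
import hashlib
from pathlib import Path

def hypothetical_graph_hash(model_dir):
    """Hash the files of an extracted model directory (illustration only)."""
    digest = hashlib.sha256()
    for path in sorted(Path(model_dir).rglob("*")):
        if path.is_file() and path.name != "graph_hash.txt":
            digest.update(path.name.encode())  # do not hash the hash file itself
            digest.update(path.read_bytes())
    return digest.hexdigest()

# Two extractions of the same graph collide on this digest, so CI can skip the duplicate.
```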
For the full construction and contribution guide, including the extraction and validation pipeline, see the [Contributors Guide](./docs/README_contribute.md).
## Future Roadmap