
Commit ffcb0ef

Update README.md (#257)
* Update README.md
* Update benchmark_demo.sh
* Add files via upload
* Update README.md
* Delete pics/Eval_result.jpg
* Update README.md
* Update README.md
1 parent 5fc2081 commit ffcb0ef


4 files changed, +30 -3 lines changed


README.md

Lines changed: 29 additions & 2 deletions
@@ -87,17 +87,44 @@ We define two key metrics here: **rectified speedup** and **GraphNet Score**. Re
 
 **Demo: How to benchmark your compiler on the model:**
 
+1. Benchmark
+
+We use ```graph_net/benchmark_demo.sh``` to benchmark GraphNet computation graph samples:
+
+```
+bash graph_net/benchmark_demo.sh &
+```
+
+The script runs ```graph_net.torch.test_compiler``` with preset batch and logging configurations.
+
+Alternatively, you can run ```graph_net.torch.test_compiler``` yourself with custom settings:
+
 ```
 python3 -m graph_net.torch.test_compiler \
 --model-path $GRAPH_NET_EXTRACT_WORKSPACE/model_name/ \
---compiler /path/to/custom/compiler
+--compiler /path/to/custom/compiler/ \
+--output-dir /path/to/save/JSON/result/file/
 # Note: if --compiler is omitted, PyTorch’s built-in compiler is used by default
 ```
 
+2. Analysis
+
+After benchmarking, we provide ```graph_net/analysis.py``` to generate [violin plots](https://en.m.wikipedia.org/wiki/Violin_plot) from the JSON results:
+
+```
+python3 graph_net/analysis.py \
+--benchmark-path /path/to/read/JSON/result/file/ \
+--output-dir /path/to/save/output/figures/
+```
+
+It produces one summary plot covering all compilers (as shown below in "Evaluation Results Example"), as well as multiple sub-plots per compiler broken down by category (model task, library, ...).
+
+The script expects a directory layout of the form ```/benchmark_path/compiler_name/category_name/``` (for example ```/benchmark_logs/paddle/nlp/```); items on the x-axis are identified by folder names. You can modify the ```read_all_speedups``` function to match your own benchmark layout.
+
 ### Evaluation Results Example
 
 <div align="center">
-<img src="/pics/Eval_result.jpg" alt="Violin plots of rectified speedup distributions" width="65%">
+<img src="/pics/Eval_result.png" alt="Violin plots of rectified speedup distributions" width="65%">
 </div>
 
 
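The benchmark step added above runs ```graph_net.torch.test_compiler``` once per extracted model and writes JSON results to the output directory. As a rough illustration of that loop in Python rather than shell, here is a minimal sketch: the workspace default, compiler path, and output folder are assumptions, and only the CLI flags documented in the README (--model-path, --compiler, --output-dir) come from the source.

```python
# Hedged sketch: loop over extracted GraphNet samples and benchmark each one with
# graph_net.torch.test_compiler, roughly mirroring what benchmark_demo.sh does in shell.
# The workspace default, compiler path, and output folder below are illustrative
# assumptions; only the CLI flags come from the README.
import os
import subprocess
import sys

workspace = os.environ.get("GRAPH_NET_EXTRACT_WORKSPACE", "./workspace")  # assumed default
compiler = "/path/to/custom/compiler/"        # omit --compiler to fall back to the default
output_dir = "./benchmark_logs/custom/misc/"  # hypothetical compiler/category folder
os.makedirs(output_dir, exist_ok=True)

for model_name in sorted(os.listdir(workspace)):
    model_path = os.path.join(workspace, model_name)
    if not os.path.isdir(model_path):
        continue
    cmd = [
        sys.executable, "-m", "graph_net.torch.test_compiler",
        "--model-path", model_path,
        "--compiler", compiler,
        "--output-dir", output_dir,
    ]
    print(f"[benchmark] {model_name}")
    # Keep going even if one sample fails, so a single bad model does not stop the sweep.
    subprocess.run(cmd, check=False)
```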

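The analysis step above groups results by the folder layout ```/benchmark_path/compiler_name/category_name/``` and lets you adapt ```read_all_speedups``` to other layouts. The sketch below is not the actual function from ```graph_net/analysis.py```; it is a minimal stand-in that assumes only that layout plus a hypothetical ```speedup``` field in each JSON result file, to show how folder names could map to x-axis groups.

```python
# Hedged sketch of a read_all_speedups-style walker, not the implementation in
# graph_net/analysis.py. It relies only on the documented layout
# /benchmark_path/compiler_name/category_name/; the JSON field name "speedup"
# is a hypothetical placeholder for whatever key the real result files use.
import json
from collections import defaultdict
from pathlib import Path

def read_all_speedups(benchmark_path: str) -> dict:
    """Return {compiler: {category: [speedup, ...]}} from nested JSON result files."""
    speedups = defaultdict(lambda: defaultdict(list))
    root = Path(benchmark_path)
    for result_file in root.glob("*/*/*.json"):        # compiler_name/category_name/*.json
        compiler, category = result_file.parts[-3:-1]  # folder names become x-axis labels
        with result_file.open() as f:
            record = json.load(f)
        value = record.get("speedup")                  # assumed field name
        if value is not None:
            speedups[compiler][category].append(float(value))
    return speedups

if __name__ == "__main__":
    data = read_all_speedups("./benchmark_logs")
    for compiler, categories in data.items():
        for category, values in categories.items():
            print(compiler, category, len(values), "samples")
```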
graph_net/benchmark_demo.sh

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ for package_path in "${samples_dir}"/*/; do
 
 echo "[$(date)] FINISHED: ${package_name}/${model_name}"
 fi
-} >> "$global_log" 2>&1 &
+} >> "$global_log" 2>&1
 done
 done
 

pics/Eval_result.jpg

-104 KB

pics/Eval_result.png

136 KB
