# Note: if --compiler is omitted, PyTorch’s built-in compiler is used by default
```
2. Analysis
After processing, we provide ```graph_net/analysis.py``` to generate [violin plots](https://en.m.wikipedia.org/wiki/Violin_plot) based on the JSON results.
After executing, you get one summary plot of results across all compilers (as shown below in "Evaluation Results Example"), as well as multiple sub-plots of results per category (model tasks, library, ...) for a single compiler.
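If you want to reproduce a similar plot outside the provided script, the sketch below shows one way to collect per-model speedups from JSON result files and render them as a violin plot with matplotlib. The ```"speedup"``` field name and the directory layout are assumptions for illustration; the actual schema and plotting details of ```graph_net/analysis.py``` may differ.

```python
# Minimal sketch: aggregate per-model speedups from JSON files and draw a
# violin plot. The "speedup" field and the folder layout are assumptions,
# not the exact schema used by graph_net/analysis.py.
import json
from pathlib import Path

import matplotlib.pyplot as plt


def load_speedups(result_dir):
    """Collect the speedup value from every JSON file under result_dir."""
    speedups = []
    for path in Path(result_dir).glob("*.json"):
        with open(path) as f:
            record = json.load(f)
        speedups.append(record["speedup"])  # assumed field name
    return speedups


if __name__ == "__main__":
    # Hypothetical layout: one folder of JSON results per compiler.
    data = {
        name: load_speedups(f"/benchmark_logs/{name}")
        for name in ["paddle", "torch_inductor"]
    }
    fig, ax = plt.subplots()
    ax.violinplot(list(data.values()), showmedians=True)
    ax.set_xticks(range(1, len(data) + 1))
    ax.set_xticklabels(list(data.keys()))
    ax.set_ylabel("speedup")
    fig.savefig("eval_result_summary.png")
```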
The script expects a directory structure of the form ```/benchmark_path/compiler_name/category_name/``` (for example ```/benchmark_logs/paddle/nlp/```), and the items on the x-axis are identified by the folder names. You can modify the ```read_all_speedups``` function to fit your own benchmark layout, as sketched below.
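As a reference for adapting it, the following sketch illustrates the kind of traversal such a function could perform over the ```/benchmark_path/compiler_name/category_name/``` hierarchy. The ```"speedup"``` field name is an assumption; the real implementation in ```graph_net/analysis.py``` may read its JSON results differently.

```python
# Illustrative traversal of the expected /benchmark_path/compiler/category/ layout.
# This is a sketch, not the repository's read_all_speedups; "speedup" is an
# assumed JSON field name.
import json
from collections import defaultdict
from pathlib import Path


def read_all_speedups(benchmark_path):
    """Return {compiler: {category: [speedup, ...]}} from the folder hierarchy."""
    results = defaultdict(dict)
    for compiler_dir in sorted(Path(benchmark_path).iterdir()):
        if not compiler_dir.is_dir():
            continue
        for category_dir in sorted(compiler_dir.iterdir()):
            if not category_dir.is_dir():
                continue
            speedups = []
            for json_file in category_dir.glob("*.json"):
                with open(json_file) as f:
                    speedups.append(json.load(f)["speedup"])  # assumed field
            results[compiler_dir.name][category_dir.name] = speedups
    return results
```

Keeping the folder names as the x-axis labels means adding a new compiler or category only requires creating the corresponding directory; no plotting code needs to change.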
### Evaluation Results Example
<div align="center">
<img src="/pics/Eval_result.png" alt="Violin plots of rectified speedup distributions" width="65%">