
Commit 463c530

Address comments from code review.

1 parent: 2427324
17 files changed: +24 -23 lines changed

README.md

Lines changed: 2 additions & 6 deletions

````diff
@@ -117,15 +117,11 @@ The results for a multirun are written to a directory in the scheme `multirun/<d
 
 A second script called `collect_results.py` provides a convenient way for collecting results from a multirun and merging them into a single CSV file. Simply running
 ```
-./collect_results.py multirun/<date>/<time>/ out.csv
+./collect_results.py out.csv multirun/<date>/<time>/
 ```
 collects all results from the particular multirun and stores the merged data structure in out.csv. `collect_results.py` not only merges the results, but it also calculates minimum, maximum and median execution time for each individual run. The resulting CSV does not contain the measured values of individual iterations anymore and only contains a single row per run. This behavior can be disabled with the `--raw` command line flag. With the flag set, the results from all runs are merged as they are and the resulting file contains rows for all individual runs, but no minimum, maximum and median values.
 
-As a shortcut, you may alternatively use
-```
-./collect_results.py latest out.csv
-```
-to write the latest multirun results to `out.csv`.
+As a shortcut, you may omit the multirun directory to write the latest multirun results to `out.csv`.
 
 ## How it works
 
````
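The per-run aggregation that `collect_results.py` performs in its default (non `--raw`) mode can be pictured with a small pandas sketch. The column names here are illustrative assumptions, not the script's actual schema:

```python
import pandas as pd

# Hypothetical data: several timed iterations per run (the real schema
# produced by the benchmark runner may differ).
df = pd.DataFrame({
    "run": ["a", "a", "a", "b", "b", "b"],
    "time_ms": [10.0, 12.0, 11.0, 20.0, 22.0, 21.0],
})

# Collapse to one row per run with min, max and median, dropping the
# individual iteration measurements.
summary = df.groupby("run")["time_ms"].agg(["min", "max", "median"]).reset_index()
print(summary.to_dict("records"))
# [{'run': 'a', 'min': 10.0, 'max': 12.0, 'median': 11.0},
#  {'run': 'b', 'min': 20.0, 'max': 22.0, 'median': 21.0}]
```

With `--raw`, the script skips this aggregation step and keeps one row per iteration instead.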

runner/collect_results.py

Lines changed: 8 additions & 3 deletions

```diff
@@ -19,14 +19,14 @@ def dir_path(string):
 
 def main():
     parser = argparse.ArgumentParser()
-    parser.add_argument("src_path")
     parser.add_argument("out_file")
+    parser.add_argument("src_path", nargs="?", type=dir_path)
     parser.add_argument("--raw", dest="raw", action="store_true")
     args = parser.parse_args()
 
     # collect data from all runs
     src_path = (
-        args.src_path if args.src_path != "latest"
+        args.src_path if args.src_path is not None
         else latest_subdirectory(latest_subdirectory("./multirun"))
     )
     data_frames = []
@@ -52,8 +52,13 @@ def main():
     if args.out_file.endswith(".json"):
         with open(args.out_file, "w") as f:
             json.dump(create_json(concat), f, indent=4)
-    else:
+    elif args.out_file.endswith(".csv"):
         concat.to_csv(args.out_file)
+    else:
+        raise ValueError(
+            f"Expected output file extension to be \".json\" "
+            f"or \".csv\", not \"{args.out_file.split('.')[-1]}\""
+        )
 
 def create_json(all_data: pd.DataFrame) -> str:
     group_by = ["benchmark", "target", "threads", "scheduler"]
```
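The reordered arguments make `src_path` a trailing optional positional. Note that `argparse` does not accept a `required=` keyword for positional arguments (it raises `TypeError`); the idiomatic way to make a positional optional is `nargs="?"`, which yields `None` when the argument is omitted. A minimal standalone sketch of the pattern, without the script's `dir_path` validator:

```python
import argparse

# Optional trailing positional: nargs="?" makes src_path omittable
# (argparse rejects `required=` on positionals with a TypeError).
parser = argparse.ArgumentParser()
parser.add_argument("out_file")
parser.add_argument("src_path", nargs="?")  # defaults to None when omitted
parser.add_argument("--raw", action="store_true")

args = parser.parse_args(["out.csv"])
print(args.out_file, args.src_path, args.raw)  # prints "out.csv None False"
```

When `src_path` resolves to `None`, the script falls back to the latest multirun directory, which replaces the old `latest` sentinel string removed in this commit.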

runner/conf/benchmark/savina_concurrency_banking.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -2,7 +2,7 @@
 name: "Bank Transaction"
 params:
   accounts: 1000
-  transactions: "${size.banking_transactions}"
+  transactions: "${problem_size.banking_transactions}"
 
 # target specific configuration
 targets:
```
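The `${problem_size.*}` references are Hydra/OmegaConf-style interpolations, resolved by dotted path against the composed configuration at load time. As a rough illustration of the syntax only, a toy resolver might look like the following (a sketch, not the library's actual implementation):

```python
import re

def resolve(value: str, root: dict) -> str:
    """Toy stand-in for ${dotted.path} interpolation as used in the configs."""
    def lookup(match: "re.Match") -> str:
        node = root
        for key in match.group(1).split("."):  # walk the dotted path
            node = node[key]
        return str(node)
    return re.sub(r"\$\{([^}]+)\}", lookup, value)

# Hypothetical problem_size group supplying the benchmark parameter.
config = {"problem_size": {"banking_transactions": 50000}}
print(resolve("${problem_size.banking_transactions}", config))  # prints "50000"
```

The commit renames the config group from `size` to `problem_size`, so every benchmark YAML that interpolates from it is updated accordingly.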

runner/conf/benchmark/savina_concurrency_bndbuffer.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -4,7 +4,7 @@ params:
   buffer_size: 50
   consumers: 40
   producers: 40
-  items_per_producer: "${size.bndbuffer_items_per_producer}"
+  items_per_producer: "${problem_size.bndbuffer_items_per_producer}"
   produce_cost: 25
   consume_cost: 25
 
```

runner/conf/benchmark/savina_concurrency_concsll.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -2,7 +2,7 @@
 name: "Concurrent Sorted Linked List"
 params:
   workers: 20
-  messages_per_worker: "${size.concsll_messages_per_worker}"
+  messages_per_worker: "${problem_size.concsll_messages_per_worker}"
 write_percentage: 10
 size_percentage: 1
 
```

runner/conf/benchmark/savina_micro_big.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,7 +1,7 @@
 # @package benchmark
 name: "Big"
 params:
-  messages: ${size.big_messages}
+  messages: ${problem_size.big_messages}
   actors: 120
 
 # target specific configuration
```

runner/conf/benchmark/savina_micro_pingpong.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,7 +1,7 @@
 # @package benchmark
 name: "Ping Pong"
 params:
-  pings: "${size.pingpong_pings}"
+  pings: "${problem_size.pingpong_pings}"
 
 # target specific configuration
 targets:
```

runner/conf/benchmark/savina_parallelism_apsp.yaml

Lines changed: 2 additions & 2 deletions

```diff
@@ -1,8 +1,8 @@
 # @package benchmark
 name: "All-Pairs Shortest Path"
 params:
-  num_workers: ${size.apsp_num_workers}
-  block_size: ${size.apsp_block_size}
+  num_workers: ${problem_size.apsp_num_workers}
+  block_size: ${problem_size.apsp_block_size}
   max_edge_weight: 100
 
 # target specific configuration
```

runner/conf/benchmark/savina_parallelism_filterbank.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,7 +1,7 @@
 # @package benchmark
 name: "Filter Bank"
 params:
-  columns: ${size.filterbank_columns}
+  columns: ${problem_size.filterbank_columns}
   simulations: 34816
   channels: 8
 
```

runner/conf/benchmark/savina_parallelism_piprecise.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -2,7 +2,7 @@
 name: "Precise Pi Computation"
 params:
   workers: 20
-  precision: ${size.piprecise_precision}
+  precision: ${problem_size.piprecise_precision}
 
 # target specific configuration
 targets:
```

0 commit comments