Commit 641ea91

Apply suggestions from code review

Consistency/additional context

Parent: caa400c


episodes/optimisation-memory.md

Lines changed: 4 additions & 1 deletion
```diff
@@ -156,7 +156,7 @@ Repeated runs show some noise to the timing, however the slowdown is consistentl
 You might not even be reading 1000 different files. You could be reading the same file multiple times, rather than reading it once and retaining it in memory during execution.
 An even greater overhead would apply.
 
-## Accessing the network
+## Accessing the Network
 
 When transfering files over a network, similar effects apply. There is a fixed overhead for every file transfer (no matter how big the file), so downloading many small files will be slower than downloading a single large file of the same total size.
 
```

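The context lines above note that re-reading the same file repeatedly pays the fixed open/read overhead each time, where reading it once and retaining it in memory would not. A minimal sketch of that idea, using `functools.lru_cache` as the retention mechanism (the file and its contents here are made up for the demo; the lesson itself does not prescribe this helper):

```python
import functools
import os
import tempfile

@functools.lru_cache(maxsize=None)
def read_cached(path):
    # The first call for a given path pays the open/read overhead;
    # subsequent calls return the bytes already held in memory.
    with open(path, "rb") as f:
        return f.read()

# Demo: create a throwaway file, then read it twice.
fd, path = tempfile.mkstemp()
os.write(fd, b"model data")
os.close(fd)

first = read_cached(path)
again = read_cached(path)  # served from the cache, no file access
print(first == again, read_cached.cache_info().hits)
```

`cache_info()` confirms the second read was a cache hit; for large inputs you would bound `maxsize` rather than cache indefinitely.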
```diff
@@ -183,13 +183,16 @@ def sequentialDownload():
         downloaded_files.append(f)
 
 def parallelDownload():
+    # Initialise a pool of 6 threads to share the workload
     pool = ThreadPoolExecutor(max_workers=6)
     jobs = []
+    # Submit each download to be executed by the thread pool
    for mass in range(10, 20):
        url = f"https://github.com/SNEWS2/snewpy-models-ccsn/raw/refs/heads/main/models/Warren_2020/stir_a1.23/stir_multimessenger_a1.23_m{mass}.0.h5"
        local_filename = f"par_{mass}.h5"
        jobs.append(pool.submit(download_file, url, local_filename))
 
+    # Collect the results (and errors) as the jobs are completed
    for result in as_completed(jobs):
        if result.exception() is None:
            # handle return values of the parallelised function
```
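The comments added in this commit describe the thread-pool pattern in three steps: create the pool, submit jobs, collect completed results. A self-contained sketch of the same pattern follows; since the snewpy model downloads need the network, `download_file` is replaced here with a `time.sleep` stand-in for the fixed per-transfer latency, so the sketch runs offline:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def download_file(url, local_filename):
    # Stand-in for a real transfer: each call pays a fixed latency,
    # which the thread pool lets us overlap across downloads.
    time.sleep(0.05)
    return local_filename

def parallel_download():
    urls = {
        f"https://example.com/m{mass}.h5": f"par_{mass}.h5"
        for mass in range(10, 20)
    }
    downloaded = []
    # Initialise a pool of 6 threads to share the workload;
    # the context manager joins the threads on exit.
    with ThreadPoolExecutor(max_workers=6) as pool:
        # Submit each download to be executed by the thread pool
        jobs = [pool.submit(download_file, url, name)
                for url, name in urls.items()]
        # Collect the results (and errors) as the jobs are completed
        for job in as_completed(jobs):
            if job.exception() is None:
                downloaded.append(job.result())
    return downloaded

files = parallel_download()
print(sorted(files))
```

Because the per-call latency is overlapped, the ten simulated downloads complete in roughly two batches of 0.05 s rather than ten sequential waits; with real network I/O the threads similarly spend their blocked time in parallel.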
