Commit b62cfc4

Copilot and mmcky committed
Convert remaining %time and %%time to qe.Timer() context manager
Co-authored-by: mmcky <[email protected]>
1 parent 9538d56 commit b62cfc4
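
The pattern applied throughout is the same in every hunk: the IPython-only `%time` / `%%time` magics are replaced with a `with qe.Timer():` block, and the timed statements are indented under it. Below is a minimal sketch of the before/after, assuming `quantecon` is imported as `qe` (as in the lectures) and using a hypothetical NumPy workload as a stand-in for the code being timed:

```python
import numpy as np
import quantecon as qe  # provides the Timer context manager used in this commit

# Hypothetical stand-in workload; the lectures time JAX/NumPy/Numba calls instead.
x = np.random.uniform(0, 1, 1_000_000)

# Before (works only in IPython/Jupyter):
#     %time np.sum(x**2)
# After (plain Python):
with qe.Timer():  # times the enclosed block, replacing %time
    np.sum(x**2)

# A %%time cell magic converts the same way: the whole cell body
# is indented under the context manager.
with qe.Timer():
    y = 0
    for value in x[:10_000]:
        y += value**2
```

One advantage of the context-manager form is that it also runs outside notebooks, in plain Python scripts.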

3 files changed: +54 −50 lines changed


lectures/jax_intro.md

Lines changed: 28 additions & 22 deletions
@@ -304,7 +304,8 @@ x = jnp.ones(n)
 How long does the function take to execute?
 
 ```{code-cell} ipython3
-%time f(x).block_until_ready()
+with qe.Timer():
+    f(x).block_until_ready()
 ```
 
 ```{note}
@@ -318,7 +319,8 @@ allows the Python interpreter to run ahead of numerical computations.
 If we run it a second time it becomes faster again:
 
 ```{code-cell} ipython3
-%time f(x).block_until_ready()
+with qe.Timer():
+    f(x).block_until_ready()
 ```
 
 This is because the built in functions like `jnp.cos` are JIT compiled and the
@@ -341,7 +343,8 @@ y = jnp.ones(m)
 ```
 
 ```{code-cell} ipython3
-%time f(y).block_until_ready()
+with qe.Timer():
+    f(y).block_until_ready()
 ```
 
 Notice that the execution time increases, because now new versions of
@@ -352,14 +355,16 @@ If we run again, the code is dispatched to the correct compiled version and we
 get faster execution.
 
 ```{code-cell} ipython3
-%time f(y).block_until_ready()
+with qe.Timer():
+    f(y).block_until_ready()
 ```
 
 The compiled versions for the previous array size are still available in memory
 too, and the following call is dispatched to the correct compiled code.
 
 ```{code-cell} ipython3
-%time f(x).block_until_ready()
+with qe.Timer():
+    f(x).block_until_ready()
 ```
 
 ### Compiling the outer function
@@ -379,7 +384,8 @@ f_jit(x)
 And now let's time it.
 
 ```{code-cell} ipython3
-%time f_jit(x).block_until_ready()
+with qe.Timer():
+    f_jit(x).block_until_ready()
 ```
 
 Note the speed gain.
@@ -534,10 +540,10 @@ z_loops = np.empty((n, n))
 ```
 
 ```{code-cell} ipython3
-%%time
-for i in range(n):
-    for j in range(n):
-        z_loops[i, j] = f(x[i], y[j])
+with qe.Timer():
+    for i in range(n):
+        for j in range(n):
+            z_loops[i, j] = f(x[i], y[j])
 ```
 
 Even for this very small grid, the run time is extremely slow.
@@ -575,15 +581,15 @@ x_mesh, y_mesh = jnp.meshgrid(x, y)
 Now we get what we want and the execution time is very fast.
 
 ```{code-cell} ipython3
-%%time
-z_mesh = f(x_mesh, y_mesh).block_until_ready()
+with qe.Timer():
+    z_mesh = f(x_mesh, y_mesh).block_until_ready()
 ```
 
 Let's run again to eliminate compile time.
 
 ```{code-cell} ipython3
-%%time
-z_mesh = f(x_mesh, y_mesh).block_until_ready()
+with qe.Timer():
+    z_mesh = f(x_mesh, y_mesh).block_until_ready()
 ```
 
 Let's confirm that we got the right answer.
@@ -602,8 +608,8 @@ x_mesh, y_mesh = jnp.meshgrid(x, y)
 ```
 
 ```{code-cell} ipython3
-%%time
-z_mesh = f(x_mesh, y_mesh).block_until_ready()
+with qe.Timer():
+    z_mesh = f(x_mesh, y_mesh).block_until_ready()
 ```
 
 But there is one problem here: the mesh grids use a lot of memory.
@@ -641,8 +647,8 @@ f_vec = jax.vmap(f_vec_y, in_axes=(0, None))
 With this construction, we can now call the function $f$ on flat (low memory) arrays.
 
 ```{code-cell} ipython3
-%%time
-z_vmap = f_vec(x, y).block_until_ready()
+with qe.Timer():
+    z_vmap = f_vec(x, y).block_until_ready()
 ```
 
 The execution time is essentially the same as the mesh operation but we are using much less memory.
@@ -711,15 +717,15 @@ def compute_call_price_jax(β=β,
 Let's run it once to compile it:
 
 ```{code-cell} ipython3
-%%time
-compute_call_price_jax().block_until_ready()
+with qe.Timer():
+    compute_call_price_jax().block_until_ready()
 ```
 
 And now let's time it:
 
 ```{code-cell} ipython3
-%%time
-compute_call_price_jax().block_until_ready()
+with qe.Timer():
+    compute_call_price_jax().block_until_ready()
 ```
 
 ```{solution-end}
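
In the JAX cells above, note that `.block_until_ready()` stays inside the timed block. As the surrounding lecture text explains, JAX runs ahead of numerical computations by dispatching work asynchronously, so timing a call without blocking would measure only the dispatch, not the computation itself. A small self-contained sketch of that pattern, with a hypothetical array and function standing in for the lecture's workload (assumes `jax` and `quantecon` are installed):

```python
import jax.numpy as jnp
import quantecon as qe

# Hypothetical stand-in for the lecture's workload.
x = jnp.ones(10_000_000)

def f(x):
    return jnp.cos(x) ** 2 + jnp.sin(x) ** 2

with qe.Timer():
    # Block until the device computation finishes so the timer captures
    # the full run time rather than just the asynchronous dispatch.
    f(x).block_until_ready()
```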

lectures/numpy.md

Lines changed: 18 additions & 22 deletions
Original file line numberDiff line numberDiff line change
@@ -1192,21 +1192,19 @@ n = 1_000_000
11921192
```
11931193

11941194
```{code-cell} python3
1195-
%%time
1196-
1197-
y = 0 # Will accumulate and store sum
1198-
for i in range(n):
1199-
x = random.uniform(0, 1)
1200-
y += x**2
1195+
with qe.Timer():
1196+
y = 0 # Will accumulate and store sum
1197+
for i in range(n):
1198+
x = random.uniform(0, 1)
1199+
y += x**2
12011200
```
12021201

12031202
The following vectorized code achieves the same thing.
12041203

12051204
```{code-cell} ipython
1206-
%%time
1207-
1208-
x = np.random.uniform(0, 1, n)
1209-
y = np.sum(x**2)
1205+
with qe.Timer():
1206+
x = np.random.uniform(0, 1, n)
1207+
y = np.sum(x**2)
12101208
```
12111209

12121210
As you can see, the second code block runs much faster. Why?
@@ -1287,24 +1285,22 @@ grid = np.linspace(-3, 3, 1000)
12871285
Here's a non-vectorized version that uses Python loops.
12881286

12891287
```{code-cell} python3
1290-
%%time
1288+
with qe.Timer():
1289+
m = -np.inf
12911290
1292-
m = -np.inf
1293-
1294-
for x in grid:
1295-
for y in grid:
1296-
z = f(x, y)
1297-
if z > m:
1298-
m = z
1291+
for x in grid:
1292+
for y in grid:
1293+
z = f(x, y)
1294+
if z > m:
1295+
m = z
12991296
```
13001297

13011298
And here's a vectorized version
13021299

13031300
```{code-cell} python3
1304-
%%time
1305-
1306-
x, y = np.meshgrid(grid, grid)
1307-
np.max(f(x, y))
1301+
with qe.Timer():
1302+
x, y = np.meshgrid(grid, grid)
1303+
np.max(f(x, y))
13081304
```
13091305

13101306
In the vectorized version, all the looping takes place in compiled code.

lectures/parallelization.md

Lines changed: 8 additions & 6 deletions
@@ -364,8 +364,8 @@ def compute_long_run_median(w0=1, T=1000, num_reps=50_000):
 Let's see how fast this runs:
 
 ```{code-cell} ipython
-%%time
-compute_long_run_median()
+with qe.Timer():
+    compute_long_run_median()
 ```
 
 To speed this up, we're going to parallelize it via multithreading.
@@ -391,8 +391,8 @@ def compute_long_run_median_parallel(w0=1, T=1000, num_reps=50_000):
 Let's look at the timing:
 
 ```{code-cell} ipython
-%%time
-compute_long_run_median_parallel()
+with qe.Timer():
+    compute_long_run_median_parallel()
 ```
 
 The speed-up is significant.
@@ -461,11 +461,13 @@ def calculate_pi(n=1_000_000):
 Now let's see how fast it runs:
 
 ```{code-cell} ipython3
-%time calculate_pi()
+with qe.Timer():
+    calculate_pi()
 ```
 
 ```{code-cell} ipython3
-%time calculate_pi()
+with qe.Timer():
+    calculate_pi()
 ```
 
 By switching parallelization on and off (selecting `True` or
