`docs/src/lecture_05/lab.md` (21 additions & 3 deletions)

@@ -110,11 +110,27 @@ There are other problems (such as repeated allocations, bad design patterns - ar

Sometimes `@code_warntype` shows that a function's return type is unstable without giving any hint as to the possible cause. Fortunately, for such cases more advanced tools such as [`Cthulhu.jl`](https://github.com/JuliaDebug/Cthulhu.jl) or [`JET.jl`](https://github.com/aviatesk/JET.jl) have been developed; we will cover them in the next lecture. *we could use it in the ecosystem*
## Benchmarking (TODO)
In the last exercise we encountered the problem of timing code to see whether we have made any progress in speeding it up. Throughout the course we will advertise the use of the `BenchmarkTools` package, which provides an easy way to test your code multiple times. In this lab we will focus on some advanced usage tips and gotchas that you may encounter while using it. *Furthermore, in the homework you will create a code scalability benchmark.*

There are a few concepts to know beforehand (a short sketch tying them together follows the list):

- evaluation - a single execution of the benchmarked expression
- sample - a single time/memory measurement obtained by running one or more evaluations
- trial - an experiment in which multiple samples are gathered
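
To make these terms concrete, here is a minimal sketch using `BenchmarkTools`; the field names `times`, `params.evals` and `params.samples` are how I recall the package exposing this information, so treat them as an assumption to verify against its documentation.

```julia
using BenchmarkTools

trial = @benchmark sum(x) setup=(x = rand(1000))

length(trial.times)   # how many samples were collected in this trial
trial.params.evals    # how many evaluations make up a single sample
trial.params.samples  # the requested (maximum) number of samples
```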
I think it is important to know how much is involved in the timing of code itself - wall clock vs. CPU clock (something I remember from Python) - since the act of measuring does not come free of computational resources; sometimes Julia will show
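
As a tiny illustration of the cost of measuring (wall-clock time only), two back-to-back reads of the system timer typically do not return the same value - the difference is the overhead of the measurement itself.

```julia
# Reading the timer twice in a row; the difference is pure measurement overhead
# (its magnitude is machine dependent).
t1 = time_ns()
t2 = time_ns()
t2 - t1
```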
The result of a benchmark is thus a trial, in which we collect multiple samples of time/memory measurements, which in turn are composed of multiple evaluations of the code in question. This layering of repetition is required to allow for benchmarking code at different runtime magnitudes. Imagine having to benchmark really fast operations, which fall under the resolution of the timer - a single evaluation cannot be measured reliably, so many evaluations have to be grouped into each sample.
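
For instance, a single call to something as cheap as `first` on a vector is far below what the timer can resolve, so many evaluations have to be packed into each sample; a sketch, under the same field-name assumptions as above:

```julia
using BenchmarkTools

x = rand(100)
trial = @benchmark first($x)
trial.params.evals   # expected to be much larger than 1 for such a cheap operation
```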
The number of samples/evaluations can be set manually; however, most of the time we don't want to bother with them, and so there is also the `tune!` function, which tunes the parameters of a `@benchmarkable` job for us.
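
A minimal sketch of this workflow, assuming the `@benchmarkable`/`tune!`/`run` interface behaves as described in the `BenchmarkTools` documentation (the keyword values below are arbitrary examples):

```julia
using BenchmarkTools

b = @benchmarkable sin(x) setup=(x = rand())
tune!(b)   # estimate a reasonable number of evaluations per sample
run(b)     # returns a Trial

# parameters can also be overridden manually
run(b, samples = 100, evals = 50, seconds = 1)
```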
The most commonly used interface of `BenchmarkTools` is the `@btime` macro, which, unlike the regular `@time` macro, runs the code over multiple samples and evaluations and reports the minimum time (a robust estimator for the location parameter of the time distribution, which should not be considered an outlier - *it makes sense, as noise usually pushes results towards the slower tail of the distribution; miraculous noisy speedups are uncommon*).
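
A small comparison, assuming the usual behaviour of the two macros (the `$` interpolation is explained right below):

```julia
using BenchmarkTools

x = rand(10^6)

@time  sum(x)   # a single, noisy measurement (the very first call also includes compilation)
@btime sum($x)  # minimum over many samples/evaluations; `$x` interpolates the value of `x`
```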
`@benchmark` is evaluated in global scope, even if called from local scope (a small sketch of this gotcha follows).
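
A sketch of how this behaviour typically surfaces - to my understanding, the benchmarked expression cannot see local variables unless they are interpolated with `$`:

```julia
using BenchmarkTools

function local_demo()
    y = rand(1000)
    # @btime sum(y)   # would fail with UndefVarError - `y` is looked up as a global
    @btime sum($y)    # works - `$y` splices the local value into the benchmark expression
end

local_demo()
```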
When should I call `@time` or `@elapsed` rather than `@btime`?

When, and how often, is the `setup`/`teardown` code evaluated?
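
My understanding (worth double-checking against the `BenchmarkTools` docs) is that `setup`/`teardown` run once per sample, not once per evaluation, which matters for benchmarks that mutate their input:

```julia
using BenchmarkTools

# With evals=1 every evaluation gets a freshly generated vector from `setup`;
# with more evaluations per sample, later evaluations would sort an already sorted vector.
@benchmark sort!(v) setup=(v = rand(1000)) evals=1
```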
## Profiling
@@ -446,3 +462,5 @@ I would expect that the same piece of code that has been type unstable also show