
Commit 791cc39

update benchmark description and setup guide
1 parent a5e79e2 commit 791cc39

1 file changed: +12 −2 lines changed


docs/src/index.md

Lines changed: 12 additions & 2 deletions
@@ -57,13 +57,19 @@ PSFModels
 The benchmarks can be found in the [`bench/`](https://github.com/JuliaAstro/PSFModels.jl/tree/main/bench) folder. To run them, first install the python dependencies
 
 ```
-$ pip install -r bench/requirements.txt
+$ cd bench
+$ poetry install
+$ poetry shell
+```
+then get the Julia project set up
+```
+$ PYTHON=$(which python) julia --project=@. -e 'using Pkg; Pkg.instantiate(); Pkg.build("PyCall")'
 ```
 
 Then run the benchmark
 
 ```
-$ julia --project=bench bench/bench.jl
+$ julia --project=. bench.jl
 ```
 
 **System Information**
@@ -81,6 +87,10 @@ Environment:
 JULIA_NUM_THREADS = 1
 ```
 
+### Evaluation benchmark
+
+This benchmark tests how long it takes to evaluate a single point in the PSF model. This may seem contrived, but we expect performance to scale directly from this measure: if it takes 1 microsecond to evaluate a single point, it should take ~1 second to evaluate a 1000x1000 image, with speedups potentially from multithreading or SIMD loop evaluation.
+
 ```@setup bench
 using CSV, DataFrames
 using StatsPlots
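
The scaling argument in the added "Evaluation benchmark" paragraph can be checked with a quick timing sketch. The snippet below is only a minimal illustration, not the actual `bench/bench.jl` script: it times a hypothetical stand-alone Gaussian evaluation (`gauss_psf`, defined here for illustration rather than taken from the PSFModels API) with BenchmarkTools and extrapolates the single-point cost to a 1000x1000 image.

```julia
using BenchmarkTools

# Hypothetical stand-in for evaluating a PSF model at a single point;
# the real benchmark times the PSFModels (and python) implementations instead.
gauss_psf(x, y; fwhm=10) = exp(-4 * log(2) * (x^2 + y^2) / fwhm^2)

# Minimum time for one evaluation, in seconds (BenchmarkTools handles repetition).
t_point = @belapsed gauss_psf(12.3, -4.5)

# Scaling argument from the text: a 1000x1000 image is 10^6 evaluations,
# so ~1 microsecond per point implies ~1 second per image, before any
# multithreading or SIMD speedups.
t_image = t_point * 1_000 * 1_000
println("single point: $(t_point * 1e6) μs, estimated 1000x1000 image: $(t_image) s")
```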
