Draft
Changes from all 30 commits
a77e9cd  Add preliminary benchmarks (sfmig, Jan 20, 2026)
d139dbf  Add a simpler benchmark (sfmig, Jan 20, 2026)
4876d5a  Add contributing guidelines (sfmig, Jan 20, 2026)
75abf94  Update comparison guidance (sfmig, Jan 20, 2026)
bdbb53d  Remove use frames from file parameter (small difference) and simplify… (sfmig, Jan 20, 2026)
e501a3b  Small edits (sfmig, Jan 20, 2026)
fca2677  Some claude suggestions (sfmig, Jan 15, 2026)
2fcd694  Add pre-parsing of dataframe columns (sfmig, Jan 20, 2026)
b2422b4  Uncomment validators (sfmig, Jan 20, 2026)
fffcf9d  Tests pass (sfmig, Jan 20, 2026)
8a7323a  Remove old implementation (sfmig, Jan 20, 2026)
c8889a5  Remove option of df_input being parsed already (sfmig, Jan 20, 2026)
dc63c92  Refactor (sfmig, Jan 20, 2026)
6f629ec  Recover old implementation and verify result is the same as previous … (sfmig, Jan 20, 2026)
85e31da  Cast frame number as int explicitly and set float32 (sfmig, Jan 20, 2026)
6d385af  Adapt tests to float32 (sfmig, Jan 20, 2026)
cbcccf6  Convert to float64 with 6 decimals to be json serializable (sfmig, Jan 20, 2026)
62b6a7e  Remove foo test (sfmig, Jan 20, 2026)
bcb6305  Remove old implementation (sfmig, Jan 20, 2026)
82436d3  Partially replace old tests (sfmig, Jan 20, 2026)
6bafceb  Add test for filling values (sfmig, Jan 20, 2026)
c35674a  Refactor parsing function. Remove if-cases since file is validated be… (sfmig, Jan 20, 2026)
e8ea1cc  Floating point comparison (sfmig, Jan 20, 2026)
3a71b8c  Define pytest plugins module path independently of directory from whi… (sfmig, Jan 20, 2026)
43da863  Add proto benchmarks and reasons to skip (sfmig, Jan 20, 2026)
e9176dd  Remove skips (sfmig, Jan 22, 2026)
cd08dcb  Replace ast.literal_eval with json.loads (sfmig, Jan 22, 2026)
de97253  Try loading sparse array (sfmig, Jan 22, 2026)
0ae471f  Revert "Try loading sparse array" (sfmig, Jan 26, 2026)
9532e8a  Read VIA tracks file as parquet (skip validation for now) (sfmig, Jan 26, 2026)
4 changes: 4 additions & 0 deletions .gitignore
@@ -95,3 +95,7 @@ venv/

# uv related
uv.lock


# benchmark results
.benchmarks/
54 changes: 54 additions & 0 deletions CONTRIBUTING.md
@@ -167,6 +167,60 @@ For tests requiring experimental data, you can use [sample data](#sample-data) f
These datasets are accessible through the `pytest.DATA_PATHS` dictionary, populated in `conftest.py`.
Avoid including large data files directly in the GitHub repository.

#### Running benchmark tests
Some tests are marked as `benchmark` because we use them with [pytest-benchmark](https://pytest-benchmark.readthedocs.io/en/stable/) to measure the performance of specific sections of the code. These tests are excluded from the default test run to keep CI and local test runs fast.
This exclusion applies to all `pytest` runs (from the command line, VS Code, tox, or CI).
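
As an illustrative sketch (not an actual test from this codebase), a benchmark test combines the `benchmark` marker, which is what the `-m benchmark` selections below filter on, with the `benchmark` fixture provided by pytest-benchmark:

```python
import pytest


@pytest.mark.benchmark
def test_sorting_benchmark(benchmark):
    """Illustrative benchmark: time Python's built-in sorted()."""
    data = list(range(10_000, 0, -1))
    # The `benchmark` fixture (provided by pytest-benchmark) calls the
    # target repeatedly and records timing statistics for the report.
    result = benchmark(sorted, data)
    assert result[0] == 1
```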

To run only the benchmark tests locally:

```sh
pytest -m benchmark # only those marked as 'benchmark'
```

To run all tests, including those marked as `benchmark`:

```sh
pytest -m "" # all tests, including 'benchmark' ones
```

#### Comparing benchmark runs across branches

To compare performance between branches (e.g., `main` and a PR branch), we use [pytest-benchmark](https://pytest-benchmark.readthedocs.io/en/stable/)'s save and compare functionality:

1. Run benchmarks on the `main` branch and save the results:

```sh
git checkout main
pytest -m benchmark --benchmark-save=main
```
The results are saved as JSON files under `.benchmarks/` (a directory ignored via this repository's `.gitignore`), with paths of the form `.benchmarks/<machine-identifier>/0001_main.json`, where `<machine-identifier>` encodes the machine specifications, `0001` is an auto-incremented counter for the run, and `main` is the string passed to the `--benchmark-save` option.

2. Switch to your PR branch and run the benchmarks again:

```sh
git checkout pr-branch
pytest -m benchmark --benchmark-save=pr
```

3. Show the results from both runs together:

```sh
pytest-benchmark compare <path-to-main-result.json> <path-to-pr-result.json> --group-by=name
```
Instead of providing the paths to the results, you can also provide the identifiers of the runs (e.g. `0001_main` and `0002_pr`), or use glob patterns to match the results (e.g. `*main*` and `*pr*`).

You can sort the results by run name with the `--sort=name` option, or group them with `--group-by=<label>` (e.g. `--group-by=name` to group by run name, `--group-by=func` to group by test function, or `--group-by=param` to group by the parameters used to test the function). For further options, check the [comparison CLI documentation](https://pytest-benchmark.readthedocs.io/en/latest/usage.html#comparison-cli).
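
For example, the following (illustrative) command matches the two saved runs by glob pattern, groups the results by test function, and sorts them by name:

```sh
pytest-benchmark compare '*main*' '*pr*' --group-by=func --sort=name
```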

We recommend reading the [pytest-benchmark documentation](https://pytest-benchmark.readthedocs.io/en/stable/) for more information on the available [CLI arguments](https://pytest-benchmark.readthedocs.io/en/latest/usage.html#commandline-options). Some useful options, combined in the example after this list, are:
- `--benchmark-warmup=on`: to enable a warmup phase that primes caches and reduces variability between runs. This is recommended for tests involving I/O or external resources.
- `--benchmark-warmup-iterations=N`: to set the number of warmup iterations.
- `--benchmark-compare`: to run the benchmarks and compare them against the last saved run.
- `--benchmark-min-rounds=N`: to run at least N rounds per benchmark for more stable results.
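
For instance, an (illustrative) invocation combining these options, which warms up each benchmark, runs at least 10 rounds, and compares against the last saved run, would be:

```sh
pytest -m benchmark \
  --benchmark-warmup=on \
  --benchmark-warmup-iterations=5 \
  --benchmark-min-rounds=10 \
  --benchmark-compare
```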

:::{note}
High standard deviation in benchmark results often indicates poor isolation or non-deterministic behaviour (I/O, side effects, garbage-collection overhead). Before comparing runs, make the benchmarking conditions as consistent as possible. See the [pytest-benchmark guidance on comparing runs](https://pytest-benchmark.readthedocs.io/en/latest/comparing.html) and the [pytest-benchmark FAQ](https://pytest-benchmark.readthedocs.io/en/latest/faq.html) for troubleshooting tips.
:::

### Logging
We use the {mod}`loguru<loguru._logger>`-based {class}`MovementLogger<movement.utils.logging.MovementLogger>` for logging.
The logger is configured to write logs to a rotating log file at the `DEBUG` level and to {obj}`sys.stderr` at the `WARNING` level.