(target-roadmaps)=
# Roadmaps

This page outlines **current development priorities** and aims to **guide core developers** and to **encourage community contributions**. It is a living document and will be updated as the project evolves.

The roadmaps are **not meant to limit** `movement` features, as we are open to suggestions and contributions. Join our [Zulip chat](movement-zulip:) to share your ideas. We will take community demand and feedback into account when planning future releases.

## Long-term vision
The following features are being considered for the first stable version `v1.0`.

- __Import/Export motion tracks from/to diverse formats__. We aim to interoperate with leading tools for animal tracking and behaviour classification, and to enable conversions between their formats.
- __Standardise the representation of motion tracks__. We represent tracks as [xarray data structures](xarray:user-guide/data-structures.html) to allow for labelled dimensions and performant processing.
- __Interactively visualise motion tracks__. We are experimenting with [napari](napari:) as a visualisation and GUI framework.
- __Clean motion tracks__, including, but not limited to, handling of missing values, filtering, smoothing, and resampling.
- __Derive kinematic variables__ like velocity, acceleration, joint angles, etc., focusing on those prevalent in neuroscience and ethology.
- __Integrate spatial data about the animal's environment__ for combined analysis with motion tracks. This covers regions of interest (ROIs) such as the arena in which the animal is moving and the location of objects within it.
- __Define and transform coordinate systems__. Coordinates can be relative to the camera, environment, or the animal itself (egocentric).
- __Provide common metrics for specialised applications__. These applications could include gait analysis, pupillometry, spatial navigation, social interactions, etc.
- __Integrate with neurophysiological data analysis tools__. We eventually aim to facilitate combined analysis of motion and neural data.
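To illustrate the xarray-based representation mentioned above, here is a minimal sketch of a motion-track dataset with labelled dimensions. The dimension names, keypoint names, and frame rate are illustrative assumptions for this example, not a definitive schema:

```python
import numpy as np
import xarray as xr

# Hypothetical data: 100 frames, 2 individuals, 3 keypoints, 2D coordinates.
rng = np.random.default_rng(0)
position = rng.random((100, 2, 3, 2))

ds = xr.Dataset(
    {"position": (("time", "individuals", "keypoints", "space"), position)},
    coords={
        "time": np.arange(100) / 30,  # seconds, assuming 30 fps
        "individuals": ["individual_0", "individual_1"],
        "keypoints": ["snout", "centre", "tail_base"],
        "space": ["x", "y"],
    },
)

# Labelled dimensions allow readable, name-based selection,
# e.g. the x coordinate of one keypoint for all individuals:
snout_x = ds["position"].sel(keypoints="snout", space="x")
```

Because selection is by name rather than by positional index, analysis code stays readable and robust to reordering of dimensions.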
## Short-term milestone - `v0.1`
We plan to release version `v0.1` of `movement` in early 2025, providing a minimal set of features to demonstrate the project's potential and to gather feedback from users. At minimum, it should include:

- [x] Ability to import pose tracks from [DeepLabCut](dlc:), [SLEAP](sleap:) and [LightningPose](lp:) into a common `xarray.Dataset` structure.
- [x] At least one function for cleaning the pose tracks.
- [x] Ability to compute velocity and acceleration from pose tracks.
- [x] Public website with [documentation](target-movement).
- [x] Package released on [PyPI](https://pypi.org/project/movement/).
- [x] Package released on [conda-forge](https://anaconda.org/conda-forge/movement).
- [ ] Ability to visualise pose tracks using [napari](napari:). We aim to represent pose tracks as napari [layers](napari:howtos/layers/index.html), overlaid on video frames.
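As a sketch of how velocity and acceleration can be derived from a labelled time coordinate, here is a toy example using plain xarray rather than any specific `movement` API. The single-keypoint trajectory and its constant velocity are assumptions made up for illustration:

```python
import numpy as np
import xarray as xr

# Hypothetical 1D trajectory: one keypoint moving at a constant 2 m/s along x.
time = np.arange(0.0, 1.0, 0.1)  # seconds
position = xr.DataArray(
    2.0 * time,  # metres
    dims="time",
    coords={"time": time},
    name="position",
)

# Differentiating along the labelled time coordinate yields velocity;
# differentiating the result again yields acceleration.
velocity = position.differentiate("time")
acceleration = velocity.differentiate("time")
```

For this linear trajectory the numerical derivative recovers the constant velocity (2 m/s) and zero acceleration; real pose tracks are noisy, which is why cleaning and smoothing appear earlier in the roadmap.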