
Commit 85904d9

niksirbis, sfmig, and pre-commit-ci[bot] authored
Update overview, mission, scope, and roadmaps (#352)
* update project overview
* update mission statement
* updated scope
* update roadmaps and consistently use `movement` (monospace)
* Add wheel as a dependency (#344)
* implement Adam's suggestions
* Apply some suggestions outright

  Co-authored-by: sfmig <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

  for more information, see https://pre-commit.ci

* update project overview based on feedback
* clarify statement about action recognition
* updated scope
* mention "keypoints" for SLEAP and DLC representations in "scope".

---------

Co-authored-by: sfmig <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
1 parent e85c9f5 commit 85904d9

File tree

5 files changed: +80 -30 lines


README.md

Lines changed: 14 additions & 5 deletions
```diff
@@ -9,7 +9,7 @@
 
 # movement
 
-A Python toolbox for analysing body movements across space and time, to aid the study of animal behaviour in neuroscience.
+A Python toolbox for analysing animal body movements across space and time.
 
 
 ![](docs/source/_static/movement_overview.png)
@@ -27,10 +27,19 @@ conda activate movement-env
 
 ## Overview
 
-Pose estimation tools, such as [DeepLabCut](https://www.mackenziemathislab.org/deeplabcut) and [SLEAP](https://sleap.ai/) are now commonplace when processing video data of animal behaviour. There is not yet a standardised, easy-to-use way to process the pose tracks produced from these software packages.
-
-movement aims to provide a consistent modular interface to analyse pose tracks, allowing steps such as data cleaning, visualisation and motion quantification.
-We aim to support a range of pose estimation packages, along with 2D or 3D tracking of single or multiple individuals.
+Deep learning methods for motion tracking have revolutionised a range of
+scientific disciplines, from neuroscience and biomechanics, to conservation
+and ethology. Tools such as
+[DeepLabCut](https://www.mackenziemathislab.org/deeplabcut) and
+[SLEAP](https://sleap.ai/) now allow researchers to track animal movements
+in videos with remarkable accuracy, without requiring physical markers.
+However, there is still a need for standardised, easy-to-use methods
+to process the tracks generated by these tools.
+
+`movement` aims to provide a consistent, modular interface for analysing
+motion tracks, enabling steps such as data cleaning, visualisation,
+and motion quantification. We aim to support all popular animal tracking
+frameworks and file formats.
 
 Find out more on our [mission and scope](https://movement.neuroinformatics.dev/community/mission-scope.html) statement and our [roadmap](https://movement.neuroinformatics.dev/community/roadmaps.html).
 
```

docs/source/community/index.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,6 +1,6 @@
 # Community
 
-Contributions to movement are absolutely encouraged, whether to fix a bug,
+Contributions to `movement` are absolutely encouraged, whether to fix a bug,
 develop a new feature, or improve the documentation.
 To help you get started, we have prepared a statement on the project's [mission and scope](target-mission),
 a [roadmap](target-roadmaps) outlining our current priorities, and a detailed [contributing guide](target-contributing).
```

docs/source/community/mission-scope.md

Lines changed: 38 additions & 8 deletions
```diff
@@ -3,25 +3,55 @@
 
 ## Mission
 
-[movement](target-movement) aims to **facilitate the study of animal behaviour in neuroscience** by providing a suite of **Python tools to analyse body movements** across space and time.
+`movement` aims to **facilitate the study of animal behaviour**
+by providing a suite of **Python tools to analyse body movements**
+across space and time.
 
 ## Scope
 
-At its core, movement handles trajectories of *keypoints*, which are specific body parts of an *individual*. An individual's posture or *pose* is represented by a set of keypoint coordinates, given in 2D (x,y) or 3D (x,y,z). The sequential collection of poses over time forms *pose tracks*. In neuroscience, these tracks are typically extracted from video data using software like [DeepLabCut](dlc:) or [SLEAP](sleap:).
-
-With movement, our vision is to present a **consistent interface for pose tracks** and to **analyze them using modular and accessible tools**. We aim to accommodate data from a range of pose estimation packages, in **2D or 3D**, tracking **single or multiple individuals**. The focus will be on providing functionalities for data cleaning, visualisation and motion quantification (see the [Roadmap](target-roadmaps) for details).
-
-While movement is not designed for behaviour classification or action segmentation, it may extract features useful for these tasks. We are planning to develop separate packages for this purpose, which will be compatible with movement and the existing ecosystem of related tools.
+At its core, `movement` handles the position and/or orientation
+of one or more individuals over time.
+
+There are a few common ways of representing animal motion from video
+recordings: an animal's position could be reduced to that of a single keypoint
+tracked on its body (usually the centroid), or instead a set of keypoints
+(often referred to as the pose) to better capture its orientation as well as
+the positions of limbs and appendages. The animal's position could be also
+tracked as a bounding box drawn around each individual, or as a segmentation
+mask that indicates the pixels belonging to each individual. Depending on the
+research question or the application, one or other format may be more
+convenient. The spatial coordinates of these representations may be defined
+in 2D (x, y) or 3D (x, y, z).
+
+Animal tracking frameworks such as [DeepLabCut](dlc:) or [SLEAP](sleap:) can
+generate keypoint representations from video data by detecting body parts and
+tracking them across frames. In the context of `movement`, we refer to these
+trajectories as _tracks_: we use _pose tracks_ to refer to the trajectories
+of a set of keypoints, _bounding boxes' tracks_ to refer to the trajectories
+of bounding boxes' centroids, or _motion tracks_ in the more general case.
+
+Our vision is to present a **consistent interface for representing motion
+tracks** along with **modular and accessible analysis tools**. We aim to
+support data from a range of animal tracking frameworks, in **2D or 3D**,
+tracking **single or multiple individuals**. As such, `movement` can be
+considered as operating downstream of tools like DeepLabCut and SLEAP.
+The focus is on providing functionalities for data cleaning, visualisation,
+and motion quantification (see the [Roadmap](target-roadmaps) for details).
+
+In the study of animal behaviour, motion tracks are often used to extract and
+label discrete actions, sometimes referred to as behavioural syllables or
+states. While `movement` is not designed for such tasks, it can be used to
+generate features that are relevant for action recognition.
 
 ## Design principles
 
-movement is committed to:
+`movement` is committed to:
 - __Ease of installation and use__. We aim for a cross-platform installation and are mindful of dependencies that may compromise this goal.
 - __User accessibility__, catering to varying coding expertise by offering both a GUI and a Python API.
 - __Comprehensive documentation__, enriched with tutorials and examples.
 - __Robustness and maintainability__ through high test coverage.
 - __Scientific accuracy and reproducibility__ by validating inputs and outputs.
 - __Performance and responsiveness__, especially for large datasets, using parallel processing where appropriate.
-- __Modularity and flexibility__. We envision movement as a platform for new tools and analyses, offering users the building blocks to craft their own workflows.
+- __Modularity and flexibility__. We envision `movement` as a platform for new tools and analyses, offering users the building blocks to craft their own workflows.
 
 Some of these principles are shared with, and were inspired by, napari's [Mission and Values](napari:community/mission_and_values) statement.
```
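The scope text added above distinguishes several motion representations (a single keypoint, a set of keypoints forming a pose, bounding boxes, segmentation masks) in 2D or 3D. As a minimal sketch of how such a pose-track representation could look as an array — the axis order, shapes, and names below are illustrative assumptions, not `movement`'s actual data model:

```python
import numpy as np

# Hypothetical pose tracks: 100 frames, 2 individuals, 3 keypoints,
# 2D (x, y) coordinates -> shape (time, individuals, keypoints, space).
rng = np.random.default_rng(seed=42)
pose_tracks = rng.random((100, 2, 3, 2))

# Reducing each animal to a single keypoint (e.g. an approximate centroid)
# is an average over the keypoints axis: one trajectory per individual.
centroid_tracks = pose_tracks.mean(axis=2)  # shape (100, 2, 2)
print(pose_tracks.shape, centroid_tracks.shape)
```

A bounding-box track could be stored analogously, e.g. as per-frame centroid positions plus box width and height, which is why the text treats these formats as interchangeable views of the same underlying motion.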

docs/source/community/roadmaps.md

Lines changed: 13 additions & 10 deletions
```diff
@@ -1,28 +1,31 @@
 (target-roadmaps)=
 # Roadmaps
 
-The roadmap outlines **current development priorities** and aims to **guide core developers** and to **encourage community contributions**. It is a living document and will be updated as the project evolves.
+This page outlines **current development priorities** and aims to **guide core developers** and to **encourage community contributions**. It is a living document and will be updated as the project evolves.
 
-The roadmap is **not meant to limit** movement features, as we are open to suggestions and contributions. Join our [Zulip chat](movement-zulip:) to share your ideas. We will take community demand and feedback into account when planning future releases.
+The roadmaps are **not meant to limit** `movement` features, as we are open to suggestions and contributions. Join our [Zulip chat](movement-zulip:) to share your ideas. We will take community feedback into account when planning future releases.
 
 ## Long-term vision
 The following features are being considered for the first stable version `v1.0`.
 
-- __Import/Export pose tracks from/to diverse formats__. We aim to interoperate with leading tools for animal pose estimation and behaviour classification, and to enable conversions between their formats.
-- __Standardise the representation of pose tracks__. We represent pose tracks as [xarray data structures](xarray:user-guide/data-structures.html) to allow for labelled dimensions and performant processing.
-- __Interactively visualise pose tracks__. We are considering [napari](napari:) as a visualisation and GUI framework.
-- __Clean pose tracks__, including, but not limited to, handling of missing values, filtering, smoothing, and resampling.
-- __Derive kinematic variables__ like velocity, acceleration, joint angles, etc., focusing on those prevalent in neuroscience.
-- __Integrate spatial data about the animal's environment__ for combined analysis with pose tracks. This covers regions of interest (ROIs) such as the arena in which the animal is moving and the location of objects within it.
+- __Import/Export motion tracks from/to diverse formats__. We aim to interoperate with leading tools for animal tracking and behaviour classification, and to enable conversions between their formats.
+- __Standardise the representation of motion tracks__. We represent tracks as [xarray data structures](xarray:user-guide/data-structures.html) to allow for labelled dimensions and performant processing.
+- __Interactively visualise motion tracks__. We are experimenting with [napari](napari:) as a visualisation and GUI framework.
+- __Clean motion tracks__, including, but not limited to, handling of missing values, filtering, smoothing, and resampling.
+- __Derive kinematic variables__ like velocity, acceleration, joint angles, etc., focusing on those prevalent in neuroscience and ethology.
+- __Integrate spatial data about the animal's environment__ for combined analysis with motion tracks. This covers regions of interest (ROIs) such as the arena in which the animal is moving and the location of objects within it.
 - __Define and transform coordinate systems__. Coordinates can be relative to the camera, environment, or the animal itself (egocentric).
+- __Provide common metrics for specialised applications__. These applications could include gait analysis, pupillometry, spatial
+navigation, social interactions, etc.
+- __Integrate with neurophysiological data analysis tools__. We eventually aim to facilitate combined analysis of motion and neural data.
 
 ## Short-term milestone - `v0.1`
-We plan to release version `v0.1` of movement in early 2024, providing a minimal set of features to demonstrate the project's potential and to gather feedback from users. At minimum, it should include:
+We plan to release version `v0.1` of `movement` in early 2025, providing a minimal set of features to demonstrate the project's potential and to gather feedback from users. At minimum, it should include:
 
 - [x] Ability to import pose tracks from [DeepLabCut](dlc:), [SLEAP](sleap:) and [LightningPose](lp:) into a common `xarray.Dataset` structure.
 - [x] At least one function for cleaning the pose tracks.
 - [x] Ability to compute velocity and acceleration from pose tracks.
 - [x] Public website with [documentation](target-movement).
 - [x] Package released on [PyPI](https://pypi.org/project/movement/).
 - [x] Package released on [conda-forge](https://anaconda.org/conda-forge/movement).
-- [ ] Ability to visualise pose tracks using [napari](napari:). We aim to represent pose tracks via napari's [Points](napari:howtos/layers/points) and [Tracks](napari:howtos/layers/tracks) layers and overlay them on video frames.
+- [ ] Ability to visualise pose tracks using [napari](napari:). We aim to represent pose tracks as napari [layers](napari:howtos/layers/index.html), overlaid on video frames.
```
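The roadmap above commits to xarray data structures with labelled dimensions, and the `v0.1` milestone includes importing pose tracks into a common `xarray.Dataset` and computing velocity and acceleration. A minimal sketch of what that combination could look like, using plain xarray — the dimension and coordinate names here are illustrative assumptions, not `movement`'s actual schema:

```python
import numpy as np
import xarray as xr

# Hypothetical pose tracks: 100 frames sampled at 30 fps, one individual,
# two keypoints, 2D coordinates. All dimension and coordinate names below
# are illustrative assumptions.
rng = np.random.default_rng(seed=0)
ds = xr.Dataset(
    {
        "position": (
            ("time", "individuals", "keypoints", "space"),
            rng.random((100, 1, 2, 2)),
        )
    },
    coords={
        "time": np.arange(100) / 30,  # seconds
        "individuals": ["individual_0"],
        "keypoints": ["snout", "tail_base"],
        "space": ["x", "y"],
    },
)

# Labelled dimensions allow selection by name rather than axis position...
snout_x = ds["position"].sel(keypoints="snout", space="x")

# ...and kinematic variables reduce to differentiation along the labelled
# time coordinate (second-order central differences in xarray).
velocity = ds["position"].differentiate("time")
acceleration = velocity.differentiate("time")
```

Labelled dimensions are what make such a representation robust across source formats: data imported from DeepLabCut, SLEAP, or LightningPose can be analysed with the same named selections and operations.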

docs/source/index.md

Lines changed: 14 additions & 6 deletions
```diff
@@ -1,7 +1,7 @@
 (target-movement)=
 # movement
 
-A Python toolbox for analysing body movements across space and time, to aid the study of animal behaviour in neuroscience.
+A Python toolbox for analysing animal body movements across space and time.
 
 ::::{grid} 1 2 2 3
 :gutter: 3
@@ -17,7 +17,7 @@ Installation, first steps and key concepts.
 :link: examples/index
 :link-type: doc
 
-A gallery of examples using movement.
+A gallery of examples using `movement`.
 :::
 
 :::{grid-item-card} {fas}`comments;sd-text-primary` Join the movement
@@ -32,10 +32,18 @@ How to get in touch and contribute.
 
 ## Overview
 
-Pose estimation tools, such as [DeepLabCut](dlc:) and [SLEAP](sleap:) are now commonplace when processing video data of animal behaviour. There is not yet a standardised, easy-to-use way to process the *pose tracks* produced from these software packages.
-
-movement aims to provide a consistent modular interface to analyse pose tracks, allowing steps such as data cleaning, visualisation and motion quantification.
-We aim to support a range of pose estimation packages, along with 2D or 3D tracking of single or multiple individuals.
+Deep learning methods for motion tracking have revolutionised a range of
+scientific disciplines, from neuroscience and biomechanics, to conservation
+and ethology. Tools such as [DeepLabCut](dlc:) and [SLEAP](sleap:)
+now allow researchers to track animal movements
+in videos with remarkable accuracy, without requiring physical markers.
+However, there is still a need for standardised, easy-to-use methods
+to process the tracks generated by these tools.
+
+`movement` aims to provide a consistent, modular interface for analysing
+motion tracks, enabling steps such as data cleaning, visualisation,
+and motion quantification. We aim to support all popular animal tracking
+frameworks and file formats.
 
 Find out more on our [mission and scope](target-mission) statement and our [roadmap](target-roadmaps).
 
```
