## Mission

`movement` aims to **facilitate the study of animal behaviour**
by providing a suite of **Python tools to analyse body movements**
across space and time.

## Scope

At its core, `movement` handles the position and/or orientation
of one or more individuals over time.

There are a few common ways of representing animal motion from video
recordings: an animal's position could be reduced to that of a single
keypoint tracked on its body (usually the centroid), or instead a set of
keypoints (often referred to as the pose) to better capture its orientation
as well as the positions of limbs and appendages. The animal's position
could also be tracked as a bounding box drawn around each individual, or as
a segmentation mask that indicates the pixels belonging to each individual.
Depending on the research question or the application, one format or
another may be more convenient. The spatial coordinates of these
representations may be defined in 2D (x, y) or 3D (x, y, z).

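To make these representations concrete, here is a minimal sketch of the
array shapes they might correspond to. The shapes, axis order, and image
size are illustrative assumptions for this example only, not `movement`'s
actual data schema:

```python
import numpy as np

n_frames = 100  # a short 2D recording

# Single-keypoint representation: one (x, y) position per frame,
# e.g. the animal's centroid.
centroid = np.zeros((n_frames, 2))

# Pose representation: several keypoints per frame (say snout, ears,
# tail base), capturing orientation and limb positions as well.
n_keypoints = 4
pose = np.zeros((n_frames, n_keypoints, 2))

# Bounding-box representation: centre (x, y) plus width and height
# of a box drawn around the individual, one box per frame.
bboxes = np.zeros((n_frames, 4))

# Segmentation-mask representation: each pixel is labelled with the
# individual it belongs to (0 = background).
masks = np.zeros((n_frames, 480, 640), dtype=np.uint8)
```

For 3D data, the trailing spatial axis would simply have size 3 instead of 2.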
Animal tracking frameworks such as [DeepLabCut](dlc:) or [SLEAP](sleap:)
can generate keypoint representations from video data by detecting body
parts and tracking them across frames. In the context of `movement`, we
refer to these trajectories as _tracks_: _pose tracks_ are the trajectories
of a set of keypoints, _bounding box tracks_ are the trajectories of
bounding box centroids, and _motion tracks_ is the more general term
covering all cases.

Our vision is to present a **consistent interface for representing motion
tracks** along with **modular and accessible analysis tools**. We aim to
support data from a range of animal tracking frameworks, in **2D or 3D**,
tracking **single or multiple individuals**. As such, `movement` can be
thought of as operating downstream of tools like DeepLabCut and SLEAP.
The focus is on providing functionalities for data cleaning, visualisation,
and motion quantification (see the [Roadmap](target-roadmaps) for details).

In the study of animal behaviour, motion tracks are often used to extract
and label discrete actions, sometimes referred to as behavioural syllables
or states. While `movement` is not designed for such tasks, it can be used
to generate features that are relevant for action recognition.

## Design principles

`movement` is committed to:
- __Ease of installation and use__. We aim for a cross-platform installation and are mindful of dependencies that may compromise this goal.
- __User accessibility__, catering to varying coding expertise by offering both a GUI and a Python API.
- __Comprehensive documentation__, enriched with tutorials and examples.
- __Robustness and maintainability__ through high test coverage.
- __Scientific accuracy and reproducibility__ by validating inputs and outputs.
- __Performance and responsiveness__, especially for large datasets, using parallel processing where appropriate.
- __Modularity and flexibility__. We envision `movement` as a platform for new tools and analyses, offering users the building blocks to craft their own workflows.

Some of these principles are shared with, and were inspired by, napari's [Mission and Values](napari:community/mission_and_values) statement.