
Commit a790122

updated scope
1 parent 7906d26 commit a790122


docs/source/community/mission-scope.md

Lines changed: 31 additions & 6 deletions
@@ -3,15 +3,40 @@
 
 ## Mission
 
-``movement`` aims to **facilitate the study of animal behaviour** by providing a suite of **Python tools to analyse body movements** across space and time.
+``movement`` aims to **facilitate the study of animal behaviour**
+by providing a suite of **Python tools to analyse body movements**
+across space and time.
 
 ## Scope
 
-At its core, movement handles trajectories of *keypoints*, which are specific body parts of an *individual*. An individual's posture or *pose* is represented by a set of keypoint coordinates, given in 2D (x,y) or 3D (x,y,z). The sequential collection of poses over time forms *pose tracks*. In neuroscience, these tracks are typically extracted from video data using software like [DeepLabCut](dlc:) or [SLEAP](sleap:).
-
-With movement, our vision is to present a **consistent interface for pose tracks** and to **analyze them using modular and accessible tools**. We aim to accommodate data from a range of pose estimation packages, in **2D or 3D**, tracking **single or multiple individuals**. The focus will be on providing functionalities for data cleaning, visualisation and motion quantification (see the [Roadmap](target-roadmaps) for details).
-
-While movement is not designed for behaviour classification or action segmentation, it may extract features useful for these tasks. We are planning to develop separate packages for this purpose, which will be compatible with movement and the existing ecosystem of related tools.
+At its core, `movement` handles the positions of one or more individuals
+tracked over time. An individual's position at a given time can be represented
+in various ways: a single keypoint (usually the centroid), a set of keypoints
+(also known as the pose), a bounding box, or a segmentation mask.
+The spatial coordinates of these representations may be defined in 2D (x, y)
+or 3D (x, y, z). The pose and mask representations also carry some information
+about the individual's posture.
+
+Animal tracking frameworks such as [DeepLabCut](dlc:) or [SLEAP](sleap:) can
+generate these representations from video data by detecting positions and
+tracking them across frames. In the context of `movement`, we refer to the
+resulting tracks according to their respective position representations—for
+example, pose tracks, bounding boxes' tracks, or motion tracks in general.
+
+Our vision is to present a **consistent interface for motion tracks** and to
+**analyze them using modular and accessible tools**. We aim to accommodate data
+from a range of animal tracking frameworks, in **2D or 3D**, tracking
+**single or multiple individuals**. As such, `movement` can be considered as
+downstream of tools like DeepLabCut and SLEAP. The focus is on providing
+functionalities for data cleaning, visualization, and motion quantification
+(see the [Roadmap](target-roadmaps) for details).
+
+In the study of animal behavior, motion tracks are often used to extract and
+label discrete actions, sometimes referred to as behavioral syllables or
+states. While `movement` is not designed for such tasks, it may generate
+features useful for action segmentation and recognition. We may develop
+packages specialized for this purpose, which will be compatible with
+`movement` and the existing ecosystem of related tools.
 
 ## Design principles
 
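The revised scope text frames a motion track as positions indexed by time, individual, keypoint, and spatial dimension. As a minimal, hypothetical sketch of that data model (the dimension names, labels, and use of `xarray` here are illustrative assumptions, not `movement`'s actual API):

```python
import numpy as np
import xarray as xr

# Hypothetical sketch (not movement's actual API): pose tracks for two
# individuals, three keypoints each, over 100 frames, in 2D (x, y).
rng = np.random.default_rng(0)
position = xr.DataArray(
    rng.normal(size=(100, 2, 3, 2)),
    dims=("time", "individuals", "keypoints", "space"),
    coords={
        "individuals": ["mouse_0", "mouse_1"],
        "keypoints": ["snout", "centroid", "tail_base"],
        "space": ["x", "y"],
    },
)

# A single-keypoint (centroid) track for one individual:
# one (x, y) position per frame.
centroid_track = position.sel(individuals="mouse_0", keypoints="centroid")
print(centroid_track.shape)  # (100, 2)
```

Labelling the dimensions this way is one plausible route to the "consistent interface for motion tracks" that the scope describes, regardless of which tracking framework produced the data.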