Commit 8f7f96e (parent 55b9e0e)

- config : added info that a method is required in the docstrings
- config : moved some variables at the top of the module for convenience
- moved example script to examples
- updated README

3 files changed: 59 additions, 34 deletions

README.md
Lines changed: 5 additions & 4 deletions

@@ -5,12 +5,13 @@
 
 This repository contains a package called `features_from_dlc` that is used to compute and plot behavioral metrics from DeepLabCut tracking files.
 
-You'll also find some utility scripts in the scripts folder, as well as separate notebooks (.ipynb files) in the notebooks directory.
+You'll also find some utility scripts in the `scripts` folder, notebooks (.ipynb files) in the `notebooks` directory and an example on how to use the package in the `examples` folder.
 
 Jump to :
 - [Install instruction](#quick-start)
 - [The `features_from_dlc` package](#the-features_from_dlc-package)
 - [Usage](#usage)
+- [The configuration file](#the-configuration-file)
 
 ## Installation
 To use the scripts and notebooks, you first need to install some things. If conda is already installed, ignore steps 1-2.

@@ -66,13 +67,13 @@ The attempt to make this modular is the idea that the principle is always the sa
 #### Getting started
 Follow the instructions in the [Quick start](#quick-start) section. Then, the idea is to edit the example script and configuration files before running it your data. In principle you can do that with any text editor, but it is recommended to use an IDE for ease of use. You can use any of your liking, below is explained how to use Visual Studio Code.
 
-Note that after installation, the `features_from_dlc` package is installed inside the conda environment. The `features_from_dlc` folder is not used anymore, rather, we will use a script to import the package and use it on the data. The `ffd_quantify.py` script located in `scripts/` is a template you can copy and modify as needed.
+Note that after installation, the `features_from_dlc` package is installed inside the conda environment. The `features_from_dlc` folder is not used anymore, rather, we will use a script to import the package and use it on the data. The `ffd_quantify.py` script located in `examples/` is a template you can copy and modify as needed.
 
 ##### Visual Studio Code
 It's easier to use as conda is nicely integrated and it is made easy to switch between environments.
 1. Install [vscode](https://code.visualstudio.com/download) (it does not require admin rights).
 1. Install Python extension (squared pieces in the left panel).
-1. Open the `scripts/ffd_quantify.py` script. In the bottom right corner, you should see a "conda" entry : click on it and select the ffd conda environment. To run the script, click on the Play icon on the top right.
+1. Open the `examples/ffd_quantify.py` script. In the bottom right corner, you should see a "conda" entry : click on it and select the ffd conda environment. To run the script, click on the Play icon on the top right.
 
 #### Requirements
 You need to have tracked your video clips with DeepLabCut and saved the output files (either .h5 or .csv files). One file corresponds to one and only one trial, so you might need to split your original videos into several short clips around the stimulation onsets and offsets beforehand. This can be done with [`videocutter` program](https://github.com/TeamNCMC/videocutter). All files analyzed together must :

@@ -87,7 +88,7 @@ You also need a configuration file. It defines the features one wants to extract
 Optionnaly, you can have a settings.toml file next to the DLC files to analyze. It specifies the experimental settings (timings and pixel size). If the file does not exist, default values from the configuration file will be used instead. See [The settings.toml file](#the-settingstoml-file).
 
 #### Usage
-1. Copy-paste the `scripts/ffd_quantify.py` file elsewhere on your computer, open it with an editor.
+1. Copy-paste the `examples/ffd_quantify.py` file elsewhere on your computer, open it with an editor.
 1. Fill the `--- Parameters ---` section. This includes :
    - `directory` : the full path to the directory containing the *files to be analyzed*.
    - `configs_path` : the full path to the directory containing the *configuration files* (eg. `modality.py` and `config_plot.toml`).

configs/openfield.py
Lines changed: 50 additions & 25 deletions

@@ -4,14 +4,13 @@
 Give it a sensible name, describing to which modality it corresponds (openfield, ...).
 
 All global variables (in CAPSLOCK before the Class definition) should exist.
-Remember to edit the `features_metrics_range` variable in the `get_features()` function
-to adjust when the in-stim quantifying metric should be computed.
 
 This particular version :
 modality : openfield
-features : speed, head angle, body angle, x, y
-author : Guillaume Le Goc (g.legoc@posteo.org), Rémi Proville (Acquineuro)
-version : 2024.11.27
+features : speed, heading angle, bending angle, x, y
+bodyparts : Left ear, Right ear, Nose, Tail base
+authors : Guillaume Le Goc (g.legoc@posteo.org), Rémi Proville (Acquineuro)
+version : 2024.12.19
 
 """
 
@@ -41,13 +40,34 @@
 # Features to normalize by subtracting their pre-stim mean. This must be a tuple, so if
 # there is only one, write it like FEATURES_NORM = ("something",)
 FEATURES_NORM = ("theta_body", "theta_neck")
+# Select the time range in which the metric is computed, in the same units as
+# `stim_time`, BEFORE time-shifting is performed. This should be a dict mapping a
+# feature to another dict, itself mapping a metric to a 2-elements list. The metrics
+# names should be defined in `Config.get_features()`.
+FEATURES_METRICS_RANGE = {
+    "speed": {"mean": [0.5, 1], "deceleration": [0.5, 0.60]},
+    "theta_body": {"mean": [0.75, 1]},
+    "theta_neck": {"mean": [0.75, 1]},
+    "xbody": {},
+    "ybody": {},
+}
+# Choose metrics that will have their y axis shared with the time series, eg.
+# when the metric is in the same units as the feature plotted. Same structure as
+# FEATURES_METRICS_RANGE, with True/False
+FEATURES_METRICS_SHARE = {
+    "speed": {"mean": True, "deceleration": False},
+    "theta_body": {"max": True},
+    "theta_neck": {"max": True},
+    "xbody": {},
+    "ybody": {},
+}
 # Multiplier of standard deviation to define the initiation of reaction to determine the
 # delay from stimulation onset
 NSTD = 3
 # Number of points to fit after signal is above NSTD times the pre-stim std
 NPOINTS = 3
 # Maximum allowed delay, above which it is not considered as a response
-MAXDELAY = 0.5  # in seconds
+MAXDELAY = 0.5  # in same units as CLIP_DURATION
 
 # --- Data cleaning parameters
 # Likelihood threshold, below which values will be interpolated.
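To make the `FEATURES_METRICS_RANGE` semantics concrete, here is a small stand-alone sketch (not the package's implementation): each metric is computed only on samples whose time stamps fall inside the configured `[start, stop]` window, here the in-stim `mean` of a speed trace. The data values and the `metric_in_range` helper are made up for the demonstration:

```python
# Illustrative stand-alone example, not the package's code.
FEATURES_METRICS_RANGE = {"speed": {"mean": [0.5, 1]}}

def metric_in_range(time, values, rng):
    """Mean of `values` at time points within [rng[0], rng[1]] (inclusive)."""
    selected = [v for t, v in zip(time, values) if rng[0] <= t <= rng[1]]
    return sum(selected) / len(selected)

time = [0.0, 0.25, 0.5, 0.75, 1.0, 1.25]  # same units as `stim_time`
speed = [10, 12, 20, 24, 28, 14]
rng = FEATURES_METRICS_RANGE["speed"]["mean"]
in_stim_mean = metric_in_range(time, speed, rng)  # mean of 20, 24, 28 -> 24.0
```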
@@ -74,9 +94,14 @@
     "ybody": "centroid y (mm)",
 }
 # Preset y axes limits
-FEATURES_YLIM = {}  # must be [ymin, ymax], empty {} for automatic
-# Features to NOT plot (must be a list [])
-FEATURES_OFF = ["xbody", "ybody"]
+FEATURES_YLIM = {
+    "speed": [0, 45],
+    "theta_body": [-60, 220],
+    "theta_neck": [-40, 120],
+    # must be [ymin, ymax]
+}  # must be [ymin, ymax], empty {} for automatic
+# Features to NOT plot
+FEATURES_OFF = ("xbody", "ybody")
 
 
 # --- Configuration class
@@ -177,6 +202,8 @@ def read_setting(self, setting, fallback):
         """
         Read key from settings, with a fallback if not there.
 
+        Required.
+
         Parameters
         ----------
         setting : str

@@ -204,6 +231,8 @@ def setup_time(self):
         Create the common time vector and shift all time variable so that stimulation
         onset is time 0. get_features() should be run before.
 
+        Required.
+
         """
         # common time vector for all time series
         self.time_common = np.linspace(

@@ -242,6 +271,8 @@ def get_pixel_size(self):
         3. PIXEL_SIZE global variable at the top of this file.
         This value is stored in `pixel_size` attribute.
 
+        Required.
+
         Parameters
         ----------
         settings : dict

@@ -284,6 +315,8 @@ def get_features(self) -> tuple[dict, dict, dict, dict]:
         eg. the computed metric is in the same units as the feature itself.
         - features_labels : maps a feature to its displayed name on the y-axis of graph.
 
+        Required.
+
         """
         # How to compute features. It must be mapping between a feature name and a
         # lambda function that takes a DataFrame as a sole argument and returns a Serie
@@ -317,32 +350,20 @@ def get_features(self) -> tuple[dict, dict, dict, dict]:
                 "mean": lambda val, _: np.mean(val),
                 "deceleration": lambda s, t: -self.get_accel_coef(s, t),
             },
-            "theta_body": {"max": lambda val, _: np.max(val)},
-            "theta_neck": {"max": lambda val, _: np.max(val)},
+            "theta_body": {"mean": lambda val, _: np.mean(val)},
+            "theta_neck": {"mean": lambda val, _: np.mean(val)},
             "xbody": {},
             "ybody": {},
         }
 
         # Select the time range in which the metric is computed, in the same units as
         # `stim_time`, before time-shifting is performed.
-        features_metrics_range = {
-            "speed": {"mean": [0.5, 1], "deceleration": [0.5, 0.75]},
-            "theta_body": {"max": [0.5, 1]},
-            "theta_neck": {"max": [0.5, 1]},
-            "xbody": {},
-            "ybody": {},
-        }
+        features_metrics_range = FEATURES_METRICS_RANGE
 
         # Choose metrics that will have their y axis shared with the time series, eg.
         # when the metric is in the same units as the feature plotted. This is a similar
         # dict, with True and False.
-        features_metrics_share = {
-            "speed": {"mean": True, "deceleration": False},
-            "theta_body": {"max": True},
-            "theta_neck": {"max": True},
-            "xbody": {},
-            "ybody": {},
-        }
+        features_metrics_share = FEATURES_METRICS_SHARE
 
         # Labels for each features, appears on the y axis of time series
         features_labels = FEATURES_LABELS
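The `features_metrics` mapping above pairs each metric name with a two-argument lambda taking the feature values and the time vector. A hypothetical sketch of how such a mapping can be evaluated; the `np.mean` lambdas follow the diff, while the data, the evaluation loop and the omission of `deceleration` (which needs the package's `get_accel_coef()` helper) are illustrative:

```python
import numpy as np

# Same shape as the features_metrics dict built in get_features();
# "deceleration" is omitted here, and the data values are made up.
features_metrics = {
    "speed": {"mean": lambda val, _: np.mean(val)},
    "theta_body": {"mean": lambda val, _: np.mean(val)},
}
features = {
    "speed": np.array([10.0, 20.0, 30.0]),
    "theta_body": np.array([5.0, 15.0]),
}
time = np.array([0.0, 0.5, 1.0])

# evaluate every metric of every feature: {feature: {metric: value}}
results = {
    feat: {name: float(fn(features[feat], time)) for name, fn in metrics.items()}
    for feat, metrics in features_metrics.items()
}
# results == {"speed": {"mean": 20.0}, "theta_body": {"mean": 10.0}}
```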
@@ -361,6 +382,8 @@ def write_parameters_file(
         """
         Saves (hardcoded) parameters used to analyze data and generate figures.
 
+        Required.
+
         Parameters
         ----------
         outdir : str

@@ -394,6 +417,8 @@ def preprocess_df(self, df: pd.DataFrame) -> pd.DataFrame:
         highly confident on a point that is badly placed, and there might be another way
         to find "bad" values.
 
+        Required.
+
         Parameters
         ----------
         df : pd.DataFrame
Lines changed: 4 additions & 5 deletions

@@ -10,8 +10,6 @@
 
 import os
 
-import pandas as pd
-
 import features_from_dlc as ffd
 
 # --- Parameters ---

@@ -22,7 +20,7 @@
 
 # - Animals
 # Only files beginning by those will be processed. If only one, write as ("xxx",)
-animals = ("animal0", "animal1")
+animals = ("animal0", "animal1", "animal2")
 
 # - Groups
 # This must be a dictionnary {key: values}.

@@ -33,8 +31,8 @@
 # it's assigned to the corresponding condition, whether there's something else eleswhere
 # in the file name. See get_condition() function for examples.
 conditions = {
-    "condition1": ["mouse0"],
-    "condition2": ["identifier"],
+    "condition1": ["animal0"],
+    "condition2": ["something"],
     "condition3": ["something_else"],
 }
 # Choose whether the conditions are paired, eg. if the same subject appears in several

@@ -80,6 +78,7 @@
 )
 
 # # Alternatively, use already generated features.csv file
+# import pandas as pd
 # cfg = ffd.get_config(modality, configs_path, None)  # get config
 # features = pd.read_csv(os.path.join(outdir, "features.csv"))  # load features

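The comments in the hunk above state that a file is assigned to a condition as soon as any of that condition's identifier strings appears in the file name. `match_condition` below is a hypothetical stand-in for the script's `get_condition()` that sketches this rule with the `conditions` dict from the diff:

```python
conditions = {
    "condition1": ["animal0"],
    "condition2": ["something"],
    "condition3": ["something_else"],
}

def match_condition(filename, conditions):
    """Return the first condition whose identifiers appear in the file name."""
    for condition, identifiers in conditions.items():
        if any(ident in filename for ident in identifiers):
            return condition
    return None  # file belongs to no condition

first = match_condition("animal0_trial3.h5", conditions)  # "condition1"
# note: "something_else" also contains "something", so condition2 matches first
second = match_condition("mouse9_something_else.csv", conditions)  # "condition2"
```

Because matching is first-wins over substrings, identifiers that are prefixes of one another (like "something" and "something_else") should be ordered or renamed carefully.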