
Commit 79fbf35

committed: updated Readme.md, convert_classifier.py, settings.ini to match new release
1 parent 3170340 commit 79fbf35

File tree: 3 files changed (+41 −13 lines)

Readme.md

Lines changed: 31 additions & 3 deletions
@@ -13,7 +13,7 @@ DeepLabStream is a python based multi-purpose tool that enables the realtime tra
Our toolbox was originally adapted from the previously published [DeepLabCut](https://github.com/AlexEMG/DeepLabCut) ([Mathis et al., 2018](https://www.nature.com/articles/s41593-018-0209-y)) and expanded on its core capabilities, but is now able to utilize a variety of different network architectures for online pose estimation
([SLEAP](https://github.com/murthylab/sleap), [DLC-Live](https://github.com/DeepLabCut/DeepLabCut-live), [DeepPosekit's](https://github.com/jgraving/DeepPoseKit) StackedDenseNet, StackedHourGlass and [LEAP](https://github.com/murthylab/sleap)).

-DeepLabStream's core feature is the utilization of real-time tracking to orchestrate closed-loop experiments. This can be achieved using any type of camera-based video stream (incl. multiple streams). It enables running experimental protocols that depend on a constant stream of bodypart positions and feedback activation of several input/output devices. Its capabilities range from simple region of interest (ROI) based triggers to head-direction or behavior-dependent stimulation.
+DeepLabStream's core feature is the utilization of real-time tracking to orchestrate closed-loop experiments. This can be achieved using any type of camera-based video stream (incl. multiple streams). It enables running experimental protocols that depend on a constant stream of bodypart positions and feedback activation of several input/output devices. Its capabilities range from simple region of interest (ROI) based triggers to head-direction or behavior-dependent stimulation, including online classification ([SiMBA](https://www.biorxiv.org/content/10.1101/2020.04.19.049452v2), [B-SOID](https://www.biorxiv.org/content/10.1101/770271v2)).

![DLS_Stim](docs/DLSSTim_example.gif)
@@ -25,6 +25,14 @@ DeepLabStreams core feature is the utilization of real-time tracking to orchestr
## New features:

+#### 03/2021: Online Behavior Classification using SiMBA and B-SOID:
+
+- Full integration of online classification of user-defined behavior using [SiMBA](https://github.com/sgoldenlab/simba) and [B-SOID](https://github.com/YttriLab/B-SOID).
+- SOCIAL CLASSIFICATION with SiMBA's 14-bodypart, two-animal classification (more to come!)
+- Unsupervised classification with B-SOID
+- New wiki guide and example experiment to get started with online classification: [Advanced Behavior Classification](https://github.com/SchwarzNeuroconLab/DeepLabStream/wiki/Advanced-Behavior-Classification)
+- This version has new requirements (numba, pure, scikit-learn), so be sure to install them (e.g. `pip install -r requirements.txt`).

#### 02/2021: Multiple Animal Experiments (Pre-release): Full [SLEAP](https://github.com/murthylab/sleap) integration (Full release coming soon!)

- Updated [Installation](https://github.com/SchwarzNeuroconLab/DeepLabStream/wiki/Installation-&-Testing) (for SLEAP support)
@@ -33,7 +41,8 @@ DeepLabStreams core feature is the utilization of real-time tracking to orchestr
#### 01/2021: DLStream was published in [Communications Biology](https://www.nature.com/articles/s42003-021-01654-9)

-#### 12/2020: New pose estimation model integration ([DLC-Live](https://github.com/DeepLabCut/DeepLabCut-live)) and pre-release of further integration ([DeepPosekit's](https://github.com/jgraving/DeepPoseKit) StackedDenseNet, StackedHourGlass and [LEAP](https://github.com/murthylab/sleap))
+#### 12/2020: New pose estimation model integration
+- [DLC-Live](https://github.com/DeepLabCut/DeepLabCut-live) integration and pre-release of further integrations ([DeepPosekit's](https://github.com/jgraving/DeepPoseKit) StackedDenseNet, StackedHourGlass and [LEAP](https://github.com/murthylab/sleap))

## Quick Reference:
@@ -131,7 +140,6 @@ If you encounter any issues or errors, you can check out the wiki article ([Help
If you use this code or data please cite:

-
Schweihoff, J.F., Loshakov, M., Pavlova, I. et al. DeepLabStream enables closed-loop behavioral experiments using deep learning-based markerless, real-time posture detection.

Commun Biol 4, 130 (2021). https://doi.org/10.1038/s42003-021-01654-9
@@ -147,3 +155,23 @@ Developed by:
- Matvey Loshakov, [email protected]

Corresponding Author: Martin Schwarz, [email protected]
+
+## Other References
+
+If you are using any of the following open-source code please cite them accordingly:
+
+> Simple Behavioral Analysis (SimBA) – an open source toolkit for computer classification of complex social behaviors in experimental animals;
+> Simon R.O. Nilsson, Nastacia L. Goodwin, Jia Jie Choong, Sophia Hwang, Hayden R. Wright, Zane C. Norville, Xiaoyu Tong, Dayu Lin, Brandon S. Bentzley, Neir Eshel, Ryan J. McLaughlin, Sam A. Golden;
+> bioRxiv 2020.04.19.049452; doi: https://doi.org/10.1101/2020.04.19.049452
+
+> B-SOiD: An Open Source Unsupervised Algorithm for Discovery of Spontaneous Behaviors;
+> Alexander I. Hsu, Eric A. Yttri;
+> bioRxiv 770271; doi: https://doi.org/10.1101/770271
+
+> SLEAP: Multi-animal pose tracking;
+> Talmo D. Pereira, Nathaniel Tabris, Junyu Li, Shruthi Ravindranath, Eleni S. Papadoyannis, Z. Yan Wang, David M. Turner, Grace McKenzie-Smith, Sarah D. Kocher, Annegret L. Falkner, Joshua W. Shaevitz, Mala Murthy;
+> bioRxiv 2020.08.31.276246; doi: https://doi.org/10.1101/2020.08.31.276246
+
+> Real-time, low-latency closed-loop feedback using markerless posture tracking;
+> Gary A. Kane, Gonçalo Lopes, Jonny L. Saunders, Alexander Mathis, Mackenzie W. Mathis;
+> eLife 2020;9:e61909; doi: https://doi.org/10.7554/eLife.61909

convert_classifier.py

Lines changed: 1 addition & 1 deletion
@@ -25,5 +25,5 @@ def convert_classifier(path):

if __name__ == "__main__":
-    path_to_classifier = r"D:\SimBa\Jens_models\pursuit_prediction_11.sav"
+    path_to_classifier = "PATH_TO_CLASSIFIER"
    convert_classifier(path_to_classifier)
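For context on what a `.sav` classifier file contains: SimBA-style classifiers are typically pickled scikit-learn models, so a minimal, hypothetical sketch of loading one and applying the `THRESHOLD` from `settings.ini` could look like the following (the file, feature shape, and dummy model are illustrative assumptions, not DLStream's actual conversion logic):

```python
import pickle
from tempfile import NamedTemporaryFile

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-in for a SimBA ".sav" file: a pickled scikit-learn model
# trained on dummy data (100 frames x 8 illustrative features).
X = np.random.default_rng(0).random((100, 8))
y = np.array([0] * 50 + [1] * 50)  # dummy "behavior absent/present" labels
clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

with NamedTemporaryFile(suffix=".sav", delete=False) as f:
    pickle.dump(clf, f)
    path_to_classifier = f.name  # stand-in for PATH_TO_CLASSIFIER

# Load the classifier back and threshold its per-frame probabilities,
# mirroring the THRESHOLD setting in the [Classification] section.
with open(path_to_classifier, "rb") as f:
    loaded = pickle.load(f)

THRESHOLD = 0.9
probs = loaded.predict_proba(X)[:, 1]   # probability of the behavior class
detections = probs >= THRESHOLD         # boolean per-frame detection
```

Only detections at or above the threshold would then fire a trigger in an experiment; a lower threshold trades precision for sensitivity.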

settings.ini

Lines changed: 9 additions & 9 deletions
@@ -1,20 +1,20 @@
[Streaming]
-RESOLUTION = 1920, 1080
+RESOLUTION = 960, 540
FRAMERATE = 30
OUTPUT_DIRECTORY = /Output
#if you have connected multiple cameras (USB), you will need to select the number OpenCV has given them.
#Default is "0", which takes the first available camera.
CAMERA_SOURCE = 0
#you can use "camera", "ipwebcam" or "video" to select your input source
-STREAMING_SOURCE = video
+STREAMING_SOURCE = camera

[Pose Estimation]
#possible origins are: SLEAP, DLC, DLC-LIVE, MADLC, DEEPPOSEKIT
-MODEL_ORIGIN = ORIGIN
+MODEL_ORIGIN = MODEL_ORIGIN
#takes path to model or models (in case of SLEAP topdown, bottom up) in style "string" or "string , string", without ""
# E.g.: MODEL_PATH = D:\SLEAP\models\baseline_model.centroids , D:\SLEAP\models\baseline_model.topdown
MODEL_PATH = PATH_TO_MODEL
-MODEL_NAME = MODEL_NAME
+MODEL_NAME = NAME_OF_MODEL
; only used in DLC-LIVE and DeepPoseKit for now; if left empty or too short, auto-naming will be enabled in style bp1, bp2 ...
ALL_BODYPARTS = bp1, bp2, bp3, bp4

@@ -27,17 +27,17 @@ EXP_NAME = ExampleExperiment
RECORD_EXP = True

[Classification]
-PATH_TO_CLASSIFIER = CLASSIFIER_PATH
+PATH_TO_CLASSIFIER = PATH_TO_CLASSIFIER
#time window used for feature extraction (currently only works with 15)
TIME_WINDOW = 15
#number of parallel classifiers to run; this depends on your performance. You need at least 1 more classifier than your average classification time.
-POOL_SIZE = 4
+POOL_SIZE = 1
#threshold to accept a classification probability as a positive detection (SIMBA + )
-THRESHOLD = 0.5
+THRESHOLD = 0.9
# class/category of identified behavior to use as trigger (only used for B-SOID)
-TRIGGER = 5
+TRIGGER = NUMBER_OF_CLUSTER
#feature extraction currently works with millimeters, not px, so be sure to enter the conversion factor (as in SimBA).
-PIXPERMM = 6.132
+PIXPERMM = 1

[Video]
#Full path to video that you want to use as input. Needs "STREAMING_SOURCE" set to "video"!