
Commit 4c07e04

working on README
1 parent f527d38 commit 4c07e04

File tree: 2 files changed (+165, -59 lines)

README.md

Lines changed: 161 additions & 53 deletions
@@ -11,46 +11,92 @@
[![Gitter](https://badges.gitter.im/DeepLabCut/community.svg)](https://gitter.im/DeepLabCut/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
[![Twitter Follow](https://img.shields.io/twitter/follow/DeepLabCut.svg?label=DeepLabCut&style=social)](https://twitter.com/DeepLabCut)

This package contains a [DeepLabCut](http://www.mousemotorlab.org/deeplabcut) inference pipeline for real-time applications that has minimal (software) dependencies. Thus, it is as easy to install as possible (in particular, on atypical systems like [NVIDIA Jetson boards](https://developer.nvidia.com/buy-jetson)).

If you've used DeepLabCut-Live with TensorFlow models and want to try the PyTorch version, take a look at [_Switching from TensorFlow to PyTorch_](#switching-from-tensorflow-to-pytorch).

**Performance of TensorFlow models:** If you would like to see estimates of how your model should perform given different video sizes, neural network types, and hardware, please see [deeplabcut.github.io/DLC-inferencespeed-benchmark](https://deeplabcut.github.io/DLC-inferencespeed-benchmark/). **We're working on getting these benchmarks for PyTorch architectures as well.**

If you have different hardware, please consider [submitting your results too](https://github.com/DeepLabCut/DLC-inferencespeed-benchmark)!

**What this SDK provides:** This package provides a `DLCLive` class which enables online pose estimation to provide feedback. This object loads and prepares a DeepLabCut network for inference, and returns the predicted pose for single images.

To perform processing on poses (such as predicting the future pose of an animal given its current pose, or triggering external hardware, like sending TTL pulses to a laser for optogenetic stimulation), this object takes in a `Processor` object. Processor objects must contain two methods: `process` and `save`.

- The `process` method takes in a pose, performs some processing, and returns the processed pose.
- The `save` method saves any valuable data created by or used by the processor.

For more details and examples, see the documentation [here](dlclive/processor/README.md).
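
As a concrete illustration of the two-method contract above, here is a minimal sketch of a processor that smooths incoming poses (the class name, the smoothing logic, and the saved data are all illustrative, not part of dlclive; see the processor documentation linked above for real examples):

```python
import time

import numpy as np


class SmoothingProcessor:
    """Minimal sketch of the two-method Processor contract."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha    # exponential-smoothing weight
        self.last_pose = None
        self.timestamps = []  # example of "valuable data" worth saving

    def process(self, pose, **kwargs):
        # pose: keypoint array, e.g. shape (num_keypoints, 3) with
        # one (x, y, likelihood) row per keypoint
        self.timestamps.append(time.time())
        if self.last_pose is not None:
            # smooth x/y coordinates, leave likelihoods untouched
            pose = pose.copy()
            pose[:, :2] = (self.alpha * pose[:, :2]
                           + (1 - self.alpha) * self.last_pose[:, :2])
        self.last_pose = pose
        return pose

    def save(self, filename):
        # persist whatever the processor accumulated
        np.save(filename, np.array(self.timestamps))
```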

**🔥🔥🔥🔥🔥 Note: alone, this object does not record video or capture images from a camera. This must be done separately, e.g. with our [DeepLabCut-live GUI](https://github.com/DeepLabCut/DeepLabCut-live-GUI). 🔥🔥🔥🔥🔥**

### News!

- **WIP 2025**: DeepLabCut-Live is implemented for models trained with the PyTorch engine!
- March 2022: DeepLabCut-Live! 1.0.2 supports poetry installation (`poetry install deeplabcut-live`), thanks to PR #60.
- March 2021: DeepLabCut-Live! [**version 1.0** is released](https://pypi.org/project/deeplabcut-live/), with support for TensorFlow 1 and TensorFlow 2!
- Feb 2021: DeepLabCut-Live! was featured in **Nature Methods**: ["Real-time behavioral analysis"](https://www.nature.com/articles/s41592-021-01072-z)
- Jan 2021: the full **eLife** paper is published: ["Real-time, low-latency closed-loop feedback using markerless posture tracking"](https://elifesciences.org/articles/61909)
- Dec 2020: we talked to **RTS Suisse Radio** about DLC-Live!: ["Capture animal movements in real time"](https://www.rts.ch/play/radio/cqfd/audio/capturer-les-mouvements-des-animaux-en-temps-reel?id=11782529)

### Installation

Please see our instruction manual to install on a [Windows or Linux machine](docs/install_desktop.md) or on an [NVIDIA Jetson Development Board](docs/install_jetson.md). Note, this code works with PyTorch, TensorFlow 1, or TensorFlow 2 models, but whichever engine and version you exported your model with, you must run inference with the same one (i.e., export a PyTorch model, then install PyTorch; export with TF 1.13, then use TF 1.13 with DLC-Live; export with TF 2.3, then use TF 2.3 with DLC-Live).

- Available on PyPI as: `pip install deeplabcut-live`

Note, you can then test your installation by running:

`dlc-live-test`

If installed properly, this script will (i) create a temporary folder, (ii) download the full_dog model from the [DeepLabCut Model Zoo](http://www.mousemotorlab.org/dlc-modelzoo), (iii) download a short video clip of a dog, (iv) run inference while displaying keypoints, and (v) remove the temporary folder.

<img src="https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1606081086014-TG9GWH63ZGGOO7K779G3/ke17ZwdGBToddI8pDm48kHiSoSToKfKUI9t99vKErWoUqsxRUqqbr1mOJYKfIPR7LoDQ9mXPOjoJoqy81S2I8N_N4V1vUb5AoIIIbLZhVYxCRW4BPu10St3TBAUQYVKcOoIGycwr1shdgJWzLuxyzjLbSRGBFFxjYMBr42yCvRK5HHsLZWtMlAHzDU294nCd/dlclivetest.png?format=1000w" width="650" title="DLC-live-test" alt="DLC LIVE TEST" align="center" vspace="50">

PyTorch and TensorFlow can be installed as extras with `deeplabcut-live` - though be careful with the versions you install!

```bash
# Install deeplabcut-live and PyTorch
pip install "deeplabcut-live[pytorch]"

# Install deeplabcut-live and TensorFlow
pip install "deeplabcut-live[tf]"
```

### Quick Start: instructions for use

1. Initialize `Processor` (if desired)
2. Initialize the `DLCLive` object
@@ -85,62 +131,125 @@ dlc_live.get_pose(<your image>)
- `<path to exported model directory>` = the path to the folder containing the `.pb` files acquired after running `deeplabcut.export_model`
- `<your image>` = a numpy array of each frame
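
The remaining Quick Start steps are elided from this diff hunk, but judging from the `dlc_live.get_pose(<your image>)` context line above, the flow presumably resembles the sketch below (`init_inference` and the `Processor` import follow the dlclive examples; treat exact names and signatures as assumptions and check the package documentation):

```python
from dlclive import DLCLive, Processor

dlc_proc = Processor()                                  # step 1 (optional)
dlc_live = DLCLive(<path to exported model directory>,  # step 2
                   processor=dlc_proc)

dlc_live.init_inference(<your image>)   # prepare the network on a first frame
pose = dlc_live.get_pose(<your image>)  # predicted pose for that frame
```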

### Switching from TensorFlow to PyTorch

This section is for users who **have already used DeepLabCut-Live** with TensorFlow models (through DeepLabCut 1.X or 2.X) and want to switch to the PyTorch engine. Some quick notes:

- You may need to adapt your code slightly when creating the `DLCLive` instance.
- Processors that were created for TensorFlow models will function the same way with PyTorch models. However, as multi-animal models can be used with PyTorch, the shape of the `pose` array given to the processor may be `(num_individuals, num_keypoints, 3)` rather than `(num_keypoints, 3)`. To keep the single-animal shape, just call `DLCLive(..., single_animal=True)` and it will work.
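
The shape difference can be sketched in NumPy: a processor written for the TensorFlow-era `(num_keypoints, 3)` layout can either rely on `single_animal=True` or collapse the extra leading axis itself (a hypothetical helper, not part of dlclive):

```python
import numpy as np

def to_single_animal(pose):
    """Collapse a (num_individuals, num_keypoints, 3) pose to the
    TensorFlow-style (num_keypoints, 3) layout when exactly one
    individual is present. Illustrative helper, not part of dlclive."""
    if pose.ndim == 3 and pose.shape[0] == 1:
        return pose[0]
    return pose

# PyTorch-engine pose for 1 individual with 4 keypoints:
pose = np.random.rand(1, 4, 3)
print(to_single_animal(pose).shape)  # (4, 3)
```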

### Benchmarking/Analyzing your exported DeepLabCut models

DeepLabCut-live offers some analysis tools that allow users to perform the following operations on videos, from Python or from the command line:

#### Test inference speed across a range of image sizes

Downsizing images can be done by specifying the `resize` or `pixels` parameter. Using the `pixels` parameter will resize images to the desired number of pixels, without changing the aspect ratio. Results will be saved (along with system info) to a pickle file if you specify an output directory.

Inside a **Python** shell or script, you can run:

```python
dlclive.benchmark_videos(
    "/path/to/exported/model",
    ["/path/to/video1", "/path/to/video2"],
    output="/path/to/output",
    resize=[1.0, 0.75, 0.5],
)
```

From the **command line**, you can run:

```bash
dlc-live-benchmark /path/to/exported/model /path/to/video1 /path/to/video2 -o /path/to/output -r 1.0 0.75 0.5
```
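
Why `pixels` preserves the aspect ratio: scaling both sides of a frame by the same factor `s` yields `s**2 * (width * height)` pixels, so a target pixel count implies `s = sqrt(pixels / (width * height))`. A sketch of that arithmetic (illustrative only, not dlclive's actual implementation):

```python
import math

def resize_for_pixels(width, height, pixels):
    """Return (new_width, new_height) whose product is roughly `pixels`,
    keeping the width/height ratio fixed (illustrative helper)."""
    scale = math.sqrt(pixels / (width * height))
    return round(width * scale), round(height * scale)

# A 640x480 frame (307,200 px) downsized to ~76,800 px halves each side:
print(resize_for_pixels(640, 480, 76800))  # (320, 240)
```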

#### Display keypoints to visually inspect the accuracy of exported models on different image sizes

Note, this is slow and only for testing purposes.

Inside a **Python** shell or script, you can run:

```python
dlclive.benchmark_videos(
    "/path/to/exported/model",
    "/path/to/video",
    resize=0.5,
    display=True,
    pcutoff=0.5,
    display_radius=4,
    cmap="bmy",
)
```

From the **command line**, you can run:

```bash
dlc-live-benchmark /path/to/exported/model /path/to/video -r 0.5 --display --pcutoff 0.5 --display-radius 4 --cmap bmy
```
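
What `pcutoff` does during display can be seen with a small NumPy sketch: keypoints whose likelihood (the third column of the pose array) falls below the cutoff are simply not drawn (illustrative of the idea, not dlclive's drawing code):

```python
import numpy as np

pcutoff = 0.5
# (num_keypoints, 3) pose: one (x, y, likelihood) row per keypoint
pose = np.array([
    [10.0, 20.0, 0.9],   # confident -> drawn
    [30.0, 40.0, 0.2],   # below cutoff -> hidden
    [50.0, 60.0, 0.7],   # confident -> drawn
])

visible = pose[pose[:, 2] >= pcutoff]  # keypoints that would be displayed
print(visible.shape[0])  # 2
```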

#### Analyze and create a labeled video using the exported model and desired resize parameters

This option functions similarly to `deeplabcut.benchmark_videos` and `deeplabcut.create_labeled_video` (note, this is slow and only for testing purposes).

Inside a **Python** shell or script, you can run:

```python
dlclive.benchmark_videos(
    "/path/to/exported/model",
    "/path/to/video",
    resize=[1.0, 0.75, 0.5],
    pcutoff=0.5,
    display_radius=4,
    cmap="bmy",
    save_poses=True,
    save_video=True,
)
```

From the **command line**, you can run:

```bash
dlc-live-benchmark /path/to/exported/model /path/to/video -r 0.5 --pcutoff 0.5 --display-radius 4 --cmap bmy --save-poses --save-video
```

## License:

This project is licensed under the GNU AGPLv3. Note that the software is provided "as is", without warranty of any kind, express or implied. If you use the code or data, we ask that you please cite us! This software is available for licensing via the EPFL Technology Transfer Office (https://tto.epfl.ch/, [email protected]).

## Community Support, Developers, & Help:

This is an actively developed package, and we welcome community development and involvement.

- If you want to contribute to the code, please read our guide [here](https://github.com/DeepLabCut/DeepLabCut/blob/master/CONTRIBUTING.md), which is provided at the main repository of DeepLabCut.
- We are a community partner on the [![Image.sc forum](https://img.shields.io/badge/dynamic/json.svg?label=forum&amp;url=https%3A%2F%2Fforum.image.sc%2Ftags%2Fdeeplabcut.json&amp;query=%24.topic_list.tags.0.topic_count&amp;colorB=brightgreen&amp;&amp;suffix=%20topics&amp;logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAYAAAAfSC3RAAABPklEQVR42m3SyyqFURTA8Y2BER0TDyExZ+aSPIKUlPIITFzKeQWXwhBlQrmFgUzMMFLKZeguBu5y+//17dP3nc5vuPdee6299gohUYYaDGOyyACq4JmQVoFujOMR77hNfOAGM+hBOQqB9TjHD36xhAa04RCuuXeKOvwHVWIKL9jCK2bRiV284QgL8MwEjAneeo9VNOEaBhzALGtoRy02cIcWhE34jj5YxgW+E5Z4iTPkMYpPLCNY3hdOYEfNbKYdmNngZ1jyEzw7h7AIb3fRTQ95OAZ6yQpGYHMMtOTgouktYwxuXsHgWLLl+4x++Kx1FJrjLTagA77bTPvYgw1rRqY56e+w7GNYsqX6JfPwi7aR+Y5SA+BXtKIRfkfJAYgj14tpOF6+I46c4/cAM3UhM3JxyKsxiOIhH0IO6SH/A1Kb1WBeUjbkAAAAAElFTkSuQmCC)](https://forum.image.sc/tags/deeplabcut). Please post help and support questions on the forum with the tag DeepLabCut. Check out their mission statement: [Scientific Community Image Forum: A discussion forum for scientific image software](https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000340).
- If you encounter a previously unreported bug/code issue, please post here (we encourage you to search issues first): [github.com/DeepLabCut/DeepLabCut-live/issues](https://github.com/DeepLabCut/DeepLabCut-live/issues)
- For quick discussions, join us on Gitter: [![Gitter](https://badges.gitter.im/DeepLabCut/community.svg)](https://gitter.im/DeepLabCut/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)

### Reference:

If you utilize our tool, please [cite Kane et al, eLife 2020](https://elifesciences.org/articles/61909). The preprint is available here: https://www.biorxiv.org/content/10.1101/2020.08.04.236422v2

```
@Article{Kane2020dlclive,
@@ -150,4 +259,3 @@
  year = {2020},
}
```

pyproject.toml

Lines changed: 4 additions & 6 deletions
@@ -43,16 +43,14 @@ torch = { version = ">=2.0.0", optional = true }
 torchvision = { version = ">=0.15", optional = true }
 # TensorFlow models
 tensorflow = [
-    { version = ">=2.0,<=2.10", optional = true, platform = "win32" },
-    { version = ">=2.0,<=2.12", optional = true, platform = "linux" },
+    { version = "^2.7.0,<=2.10", optional = true, platform = "win32" },
+    { version = "^2.7.0,<=2.12", optional = true, platform = "linux" },
 ]
-tensorflow-macos = { version = ">=2.0,<=2.12", optional = true, markers = "sys_platform == 'darwin'" }
+tensorflow-macos = { version = "^2.7.0,<=2.12", optional = true, markers = "sys_platform == 'darwin'" }
 tensorflow-metal = { version = "<1.3.0", optional = true, markers = "sys_platform == 'darwin'" }
-tensorpack = {version = ">=0.11", optional = true }
-tf_slim = {version = ">=1.1.0", optional = true }
 
 [tool.poetry.extras]
-tf = [ "tensorflow", "tensorflow-macos", "tensorflow-metal", "tensorpack", "tf_slim"]
+tf = [ "tensorflow", "tensorflow-macos", "tensorflow-metal"]
 pytorch = ["scipy", "timm", "torch", "torchvision"]
 
 [tool.poetry.dev-dependencies]