Installation & Testing
Your hardware must meet DeepLabCut's requirements to run DeepLabStream, but DeepLabStream also requires more processing power than DeepLabCut to be easy and convenient to use. We strongly recommend against using this software without a GPU, even though DeepLabCut supports it.
In general, you need:
- CUDA-compatible Nvidia GPU (Nvidia GTX 1080 or better is recommended);
- CPU with at least 4 cores to properly utilize parallelization;
- A decent amount of RAM, at least 16 GB.
We tested DeepLabStream with different setups and would say that a minimum reasonable configuration is:
CPU: Intel Core i7-7700K CPU @ 4.20GHz
RAM: 32GB DDR4
GPU: Nvidia GeForce GTX 1050 (3GB)
However, our recommended setup, with which we achieved a constant 30 FPS with a two-camera setup at 848x480 resolution, is:
CPU: Intel Core i7-9700K @ 3.60GHz
RAM: 64 GB DDR4
GPU: Nvidia GeForce RTX 2080 (12GB)
In short, you need to be able to run DeepLabCut and/or DeepPoseKit on your system before installing and running DeepLabStream.
DeepLabStream was originally designed with DeepLabCut v1.11 in mind, but for ease of installation and future-proofing we recommend: the current DeepLabCut 2.x (Nath, Mathis et al., 2019) if you are interested in the newest versions of DLC; DLC-Live if you want a simple setup using DLC-based networks; and DeepPoseKit if you are interested in any DeepPoseKit networks (StackedDenseNet, StackedHourglass) or LEAP. All versions, and the networks trained with them, worked fine in our tests.
Using DeepLabCut-Live:
If you want to use DLC-Live based networks (meaning networks exported to work with DLC-Live) please install the dlclive package along with tensorflow:
pip install deeplabcut-live
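Once installed, a quick way to confirm that the package works is to load an exported model and run inference on a single frame. This is only a minimal sketch using the public dlclive API; the model path is a placeholder and the dummy frame stands in for a real camera image:
# Minimal DLC-Live smoke test (model path is a placeholder)
import numpy as np
from dlclive import DLCLive

dlc_live = DLCLive("/path/to/exported/model")    # folder containing your exported DLC model
frame = np.zeros((480, 848, 3), dtype=np.uint8)  # dummy frame; use a real image in practice
dlc_live.init_inference(frame)                   # first call builds the inference graph
pose = dlc_live.get_pose(frame)                  # array of (x, y, confidence) per bodypart
print(pose)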
Using DeepPoseKit:
If you want to use DeepPoseKit derived networks (meaning networks trained and exported by DeepPoseKit: StackedDenseNet, StackedHourglass or LEAP) please install the deepposekit package along with tensorflow:
pip install deepposekit
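To check that DeepPoseKit can load your trained model, something like the following should work. This is a hedged sketch: the .h5 path is a placeholder, and the dummy input shape must be adapted to whatever your model was trained on:
# Load a trained DeepPoseKit model and run it on a dummy batch (path is a placeholder)
import numpy as np
from deepposekit.models import load_model

model = load_model("/path/to/my_model.h5")
frames = np.zeros((1, 192, 192, 1), dtype=np.uint8)  # one grayscale frame; match your training input shape
predictions = model.predict(frames)                  # (batch, keypoints, 3): x, y, confidence
print(predictions.shape)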
Using the original DeepLabCut:
DeepLabCut provides full installation instructions, but we provide a short version/checklist below.
- Make sure that you have the proper Nvidia drivers installed;
- Install CUDA by Nvidia. Please refer to this table to ensure you have the correct driver/CUDA combination;
- Verify your CUDA installation on Windows or Linux;
- Create an environment. We strongly recommend using environments provided by DeepLabCut;
- If you are not using DeepLabCut-provided environments for step 4, install cuDNN. Otherwise, skip this step;
- Make sure that TensorFlow is installed in your environment. Manual installation goes as follows (though many different problems can arise, depending on your software and hardware setup):
pip install tensorflow-gpu==1.12
- Verify that your TensorFlow is working correctly by using this (Linux) or this (Windows) manual. The latter also provides a great overview of the whole process covered by the previous six steps.
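As an additional quick sanity check (not from the manuals above, just a common TensorFlow 1.x idiom), you can ask TensorFlow directly whether it sees your GPU:
# Quick TensorFlow 1.x GPU sanity check
import tensorflow as tf

print(tf.__version__)              # e.g. 1.12.0, matching the version installed above
print(tf.test.is_gpu_available())  # True if CUDA/cuDNN are set up correctly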
The easiest way to install DeepLabStream is the following (make sure that you are working in the same environment in which you installed DeepLabCut!):
git clone https://github.com/SchwarzNeuroconLab/DeepLabStream.git
cd DeepLabStream
pip install -r requirements.txt
Note that you still need to install DeepPoseKit, DLC-Live or DLC including tensorflow on top of this!
After installation, you need to modify the DeepLabStream config in settings.ini to specify which model it will work with.
Change the variables in the [Streaming] section of the config to whatever suits your setup best (an illustrative example follows this list):
- RESOLUTION - choose a resolution supported by your camera and network
- FRAMERATE - choose a framerate supported by your camera
- OUTPUT_DIRECTORY - folder for data and video output
- CAMERA_SOURCE - if you are not using RealSense or Basler cameras, you need to manually choose the correct source for your camera. It should be recognized by OpenCV.
- STREAMING_SOURCE - you can use "camera", "ipwebcam" or "video" to select your input source
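For illustration, a [Streaming] section might look like the following. The values here are placeholders, and the exact value format should be checked against the settings.ini shipped with DLStream:
[Streaming]
RESOLUTION = 848, 480
FRAMERATE = 30
OUTPUT_DIRECTORY = /home/user/DLStream_output
CAMERA_SOURCE = 0
STREAMING_SOURCE = camera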
Change the variables in the [Pose Estimation] section of the config to select your network (an illustrative example follows this list):
- MODEL_ORIGIN = possible origins are DLC, DLC-LIVE, MADLC, DEEPPOSEKIT
- MODEL_PATH = the full path to the exported model (DLC-LIVE, DEEPPOSEKIT) or to the folder of your DLC installation (see below)
- MODEL_NAME = the name of the model you want to use. Only necessary for original DLC and for benchmarking.
- ALL_BODYPARTS = used in DLC-LIVE and DeepPoseKit for now to create the posture (has to be in the right order!); if left empty or too short, auto-naming will be enabled in the style bp1, bp2, ...
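As an illustration only (placeholder paths and bodypart names; verify the value format against your settings.ini), a DLC-LIVE setup could look like:
[Pose Estimation]
MODEL_ORIGIN = DLC-LIVE
MODEL_PATH = /home/user/exported-models/MyModel
MODEL_NAME = MyModel
ALL_BODYPARTS = nose, leftear, rightear, tailbase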
For original DLC and early DLStream versions:
- Change the MODEL_PATH (early: DLC_PATH) variable to wherever your DeepLabCut installation is. If you installed it as a package with DeepLabCut's provided environment files, it will be approximately here in your Anaconda environment: ../anaconda3/envs/dlc-ubuntu-GPU/lib/python3.6/site-packages/deeplabcut. Of course, the specific folder may vary.
- Change the MODEL_NAME (early: MODEL) variable to the name of your model, found in the ../deeplabcut/pose_estimation_tensorflow/models folder (../deeplabcut/pose_estimation/models for DLC v1.11). If you are using DeepLabCut 2.+, you first have to copy the model folder from the corresponding DLC project directory into the aforementioned pose estimation models folder.
To correctly enable multiple camera support, you need not only to set the variable MULTIPLE_DEVICES to True in the config, but also to edit one of the DeepLabCut files.
Locate the file predict.py in your DeepLabCut folder (for DLC v2.x it is in the ../deeplabcut/pose_estimation_tensorflow/nnet folder) and, in the function setup_pose_prediction, change the line
sess = TF.Session()
to the following lines, maintaining the correct indentation (and matching the capitalization of the tensorflow import name used in your version of the file):
config = TF.ConfigProto()
config.gpu_options.allow_growth = True
sess = TF.Session(config=config)
This makes TensorFlow allocate GPU memory on demand instead of claiming all of it at once, so several sessions can share the GPU.
DeepLabStream was written with Intel RealSense camera support in mind, to be able to get depth data and use infrared cameras for experiments in low-light conditions.
To enable these features, you need to install an additional Python library, PyRealSense2:
pip install pyrealsense2
In an ideal scenario, that will install it fully, but in some specific cases, for example if you are using Python 3.5 on a Windows machine, the corresponding wheel file may not be available. In that case, you need to build it manually from source from this official GitHub repository.
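To quickly confirm that pyrealsense2 can talk to a connected camera, a short device listing helps. This is generic pyrealsense2 usage, not part of DLStream itself:
# List connected Intel RealSense devices
import pyrealsense2 as rs

ctx = rs.context()
for dev in ctx.query_devices():
    print(dev.get_info(rs.camera_info.name), dev.get_info(rs.camera_info.serial_number))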
DeepLabStream also supports the usage of Basler cameras through their Python wrapper pypylon.
To enable this, you need to install an additional Python library, PyPylon:
pip install pypylon
or use the officially provided instructions:
git clone https://github.com/basler/pypylon.git
cd pypylon
pip install .
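Again, a quick device listing (generic pypylon usage, not DLStream-specific) confirms the installation:
# List Basler cameras visible to pypylon
from pypylon import pylon

for dev in pylon.TlFactory.GetInstance().EnumerateDevices():
    print(dev.GetFriendlyName())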
If you do not wish to use either Intel RealSense or Basler cameras, DeepLabStream can work with any camera supported by OpenCV.
By default, DeepLabStream will try to open a camera from source 0 (as in cv2.VideoCapture(0)), but you can modify this and use a camera from any source.
The resolution and framerate set in the config will also apply, but beware that OpenCV does not always support every native camera resolution and/or framerate, so some experimenting might be required.
Very important note: in this generic camera mode you will not be able to use multiple cameras!
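To find the right CAMERA_SOURCE and see what resolution/framerate OpenCV will actually deliver, a small probe script (illustrative, not part of DLStream) can save some trial and error:
# Probe a camera source with the settings from your config
import cv2

cap = cv2.VideoCapture(0)                 # CAMERA_SOURCE
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 848)    # RESOLUTION width
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)   # RESOLUTION height
cap.set(cv2.CAP_PROP_FPS, 30)             # FRAMERATE
ok, frame = cap.read()
print("opened:", ok)
if ok:
    print("actual size:", frame.shape[1], "x", frame.shape[0])
    print("actual fps:", cap.get(cv2.CAP_PROP_FPS))
cap.release()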
If you wish to use a generic webcam connected to another computer on your network (rather than directly to your DLStream computer), you can use the [IPWEBCAM] section to configure this. We use the SmoothStream code as a basis, so you will need to set up your webcam on the sending computer using their repo. Note that this will most likely result in a framerate drop due to network traffic and is not recommended. IPWEBCAM = True will override any other camera input, but not video.
If you wish to use a prerecorded video as input for DLStream, you can use the parameters in the [Video] section. Note that VIDEO = True will override any camera input.
Currently, DLStream supports NI, Raspberry Pi and Arduino boards for GPIO output to trigger stimulation from external devices.
Check out the OUT-OF-THE-BOX section to see how to set up those devices.
To properly test your DeepLabStream installation, we included a testing script that you can run in three different modes. DeepLabStream.py allows you to test your cameras and your DeepLabCut installation, and to benchmark your DeepLabStream performance.
- Run the following command to test your cameras:
python DeepLabStream.py
- Next, you can test how your DeepLabCut installation behaves and whether you set the DeepLabCut path correctly in the config:
python DeepLabStream.py --dlc-enabled
- And finally, you can benchmark your system automatically:
python DeepLabStream.py --dlc-enabled --benchmark-enabled
The stream will run until it has analyzed 3000 frames (you can always stop it manually at any point; just press 'Q' while the stream window is in focus). It will then show you detailed statistics of the overall performance timings, the analysis timings, the percentage of frames where it lost tracking, and your average FPS.
Additionally, you can test and see the results of the built-in video recorder. Run the following command to test it:
python DeepLabStream.py --recording-enabled
This will record the video feed from the camera to your OUTPUT_DIRECTORY. You can also add this flag to any of the previously mentioned tests to check performance with recording enabled.
Important note: recording will always save only the "raw" video, without analysis, at a framerate as close to the specified one as possible.