Commit 595fe33

Update Readme.md
1 parent 33e4ec8 commit 595fe33

File tree: 1 file changed, +11 -52 lines changed


Readme.md

Lines changed: 11 additions & 52 deletions

````diff
@@ -16,21 +16,25 @@ DeepLabStreams core feature is the real-time analysis using any type of camera-b
 
 ![DLS_Stim](docs/DLSSTim_example.gif)
 
-### Quick Reference:
+## Quick Reference:
+
+### Check out our wiki: [DLStream Wiki](https://github.com/SchwarzNeuroconLab/DeepLabStream/wiki)
 
 ### Read the preprint: [Schweihoff et al, 2019](https://doi.org/10.1101/2019.12.20.884478).
 
 ### 1. [Installation & Testing](https://github.com/SchwarzNeuroconLab/DeepLabStream/wiki/Installation-&-Testing)
 
-### 2. [Introduction to experiments](https://github.com/SchwarzNeuroconLab/DeepLabStream/wiki/Introduction)
+### 2. [How to use DLStream GUI](https://github.com/SchwarzNeuroconLab/DeepLabStream/wiki/How-to-use-DLStream)
+
+### 3. [Introduction to experiments](https://github.com/SchwarzNeuroconLab/DeepLabStream/wiki/Introduction)
 
-### 3. [Design your first experiment](https://github.com/SchwarzNeuroconLab/DeepLabStream/wiki/My-first-experiment)
+### 4. [Design your first experiment](https://github.com/SchwarzNeuroconLab/DeepLabStream/wiki/My-first-experiment)
 
-### 4. [Adapting an existing experiment to your own needs](https://github.com/SchwarzNeuroconLab/DeepLabStream/wiki/Adapting-an-existing-experiment-to-your-own-needs)
+### 5. [Adapting an existing experiment to your own needs](https://github.com/SchwarzNeuroconLab/DeepLabStream/wiki/Adapting-an-existing-experiment-to-your-own-needs)
 
 #### Check out our [Out-of-the-Box](https://github.com/SchwarzNeuroconLab/DeepLabStream/wiki/Out-Of-The-Box:-What-Triggers-are-available%3F) section to get a good idea of what DLStream has in stock for your own experiments!
 
-### How does this work
+## How does this work
 
 DeepLabStream uses the camera's video stream to simultaneously record a raw (i.e., unmodified) video of the ongoing experiment,
 send frames one-by-one to the neural network for analysis, and use the returned analysed data to plot and show a video stream for the experimenter to observe and control the experiment.
````
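
The capture, record, analyze, display loop that the "How does this work" paragraph describes can be illustrated with a short sketch. This is a hedged illustration only, not DeepLabStream's actual implementation: `get_pose` is a hypothetical stand-in for the DeepLabCut inference call, and the frame size and FPS are assumptions.

```python
# Hedged sketch of the capture -> record -> analyze -> display loop described
# in the README. `get_pose` is a hypothetical stand-in for the DeepLabCut
# inference call; the real pipeline lives in DeepLabStream's app.py.
import cv2

def get_pose(frame):
    """Placeholder: return [(x, y), ...] body-part coordinates for one frame."""
    return [(320.0, 240.0)]  # dummy point; replace with real pose estimation

cap = cv2.VideoCapture(0)  # camera-based video stream
# Assumes 640x480 frames at 30 FPS; adjust to your camera.
raw = cv2.VideoWriter("raw.avi", cv2.VideoWriter_fourcc(*"XVID"), 30, (640, 480))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    raw.write(frame)                # record the raw (unmodified) video
    pose = get_pose(frame)          # send the frame to the network, one by one
    for x, y in pose:               # plot the returned analysed data
        cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
    cv2.imshow("DLStream view", frame)  # stream the experimenter observes
    if cv2.waitKey(1) == 27:        # Esc stops the loop
        break

cap.release()
raw.release()
cv2.destroyAllWindows()
```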

````diff
@@ -39,53 +43,6 @@ and to end, prolong or modify parts of experimental protocol.
 
 ![Flowchart](docs/flowchart2.png)
 
-## Usage
-
-### How to use DeepLabStream
-
-Just run
-```
-cd DeepLabStream
-python app.py
-```
-
-You will see the main control panel of the GUI app.
-
-![Main](docs/screen_gui.png)
-
-To start working with DeepLabStream, press the `Start Stream` button. It will activate the camera manager and show you the current view from the connected cameras.
-
-![Stream](docs/screen_stream.png)
-
-After that you can `Start Analysis` to start DeepLabCut and receive pose estimations for each frame, or, additionally, you can `Start Recording` to record a
-video of the current feed (visible in the stream window). You will see your current video timestamp (counted in frames) and FPS after you press the `Start Analysis` button.
-
-![Analysis](docs/screen_analysis.png)
-
-As you can see, we track three points that represent three body parts of the mouse: nose, neck and tail root.
-Every single frame where the animal was tracked is written to a dataframe, which is exported as a .csv file after the analysis is finished.
-
-After you finish tracking and/or recording the video, you can stop either function by pressing the corresponding stop button
-(`Stop Analysis` or `Stop Recording`), or you can stop the app and reset all timers at once by pressing the `Stop Streaming` button.
-
-#### Experiments
-
-DeepLabStream was built specifically for closed-loop experiments, so with a properly implemented experiment protocol, running experiments on this system is as easy as
-pressing the `Start Experiment` button. Depending on your protocol and experimental goals, experiments can run and finish without any further engagement from the user.
-
-![Start](docs/screen_exp_start.png)
-
-In the provided `ExampleExperiment`, two regions of interest (ROIs) are created inside an arena. The experiment is designed to count the number of times the mouse enters a ROI and to trigger a corresponding visual stimulus on a screen.
-The high-contrast stimuli (image files) are located in the `experiments/src` folder and specified in the `ExampleExperiment` class in `experiments.py`.
-
-![Experiment](docs/screen_exp.png)
-
-As a visual representation of this event, the border of the ROI will turn green.
-
-All experimental output is stored in a .csv file for easy postprocessing.
-
-Look at the [Introduction to experiments](https://github.com/SchwarzNeuroconLab/DeepLabStream/wiki/Introduction) to get an idea of how to design your own experiment in DeepLabStream, or learn how to adapt one of the already published experiments at [Adapting an existing experiment](https://github.com/SchwarzNeuroconLab/DeepLabStream/wiki/Adapting-an-existing-experiment-to-your-own-needs).
-
 ### Known issues
 
 If you encounter any issues or errors, you can check out the wiki article ([Help there is an error!](https://github.com/SchwarzNeuroconLab/DeepLabStream/wiki/Help-there-is-an-error!)). If your issue is not listed yet, please refer to the issues page and either submit a new issue or find an already reported (and possibly solved) issue there. Thank you!
````
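
Since this commit removes the `ExampleExperiment` walkthrough from the README, a compact sketch of the ROI-entry counting and .csv export that walkthrough described may help readers of the diff. Everything here is illustrative (the ROI geometry, the sample coordinates, the column names); it is not DLStream's actual experiment API.

```python
# Hedged sketch of the ROI-entry logic the removed ExampleExperiment text
# implies: detect entries into a region, log them, export a .csv for
# postprocessing. All names and values are illustrative only.
import pandas as pd

ROI = {"x": 100, "y": 100, "w": 200, "h": 200}  # one region of interest, pixels

def inside(point, roi):
    """True if an (x, y) body-part coordinate falls inside the ROI."""
    x, y = point
    return roi["x"] <= x <= roi["x"] + roi["w"] and roi["y"] <= y <= roi["y"] + roi["h"]

# Dummy per-frame nose positions; in DLStream these come from pose estimation.
tracked_nose = [(50, 50), (150, 150), (160, 170), (400, 400), (120, 130)]

events, was_inside = [], False
for frame_idx, point in enumerate(tracked_nose):
    now_inside = inside(point, ROI)
    if now_inside and not was_inside:           # entry detected on this frame
        events.append({"frame": frame_idx, "event": "roi_entry"})
        # The real experiment would show the visual stimulus here and
        # turn the ROI border green in the stream window.
    was_inside = now_inside

pd.DataFrame(events).to_csv("experiment_output.csv", index=False)
print(f"{len(events)} ROI entries logged")  # -> 2 for the dummy data above
```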

````diff
@@ -96,7 +53,9 @@ If you use this code or data please cite [Schweihoff et al, 2019](https://doi.or
 
 ## License
 This project is licensed under the GNU General Public License v3.0. Note that the software is provided "as is", without warranty of any kind, expressed or implied.
+
 ## Authors
+
 Lead Researcher: Jens Schweihoff, [email protected]
 
 Lead Developer: Matvey Loshakov, [email protected]
````
