Commit 6ee8322

Author: SzabolcsGergely
Merge remote-tracking branch 'origin/main' into HEAD
2 parents 14f57ce + b747e5c commit 6ee8322

File tree

4 files changed (+221 / -44 lines)


depthai-core

docs/source/components/nodes/color_camera.rst

Lines changed: 29 additions & 16 deletions
@@ -24,27 +24,40 @@ Inputs and Outputs

 .. code-block::

-                ┌───────────────────┐       still
-                │                   ├───────────►
-  inputConfig   │                   │
- ──────────────►│                   │     preview
-                │    ColorCamera    ├───────────►
-  inputControl  │                   │
- ──────────────►│                   │       video
-                │                   ├───────────►
-                └───────────────────┘
+                         ColorCamera node
+                 ┌────────────────────────────────┐
+                 │   ┌─────────────┐              │
+                 │   │    Image    │  raw         │       raw
+                 │   │    Sensor   │---┬----------├────────►
+                 │   └────▲────────┘   |          │
+                 │        │   ┌--------┘          │
+                 │      ┌─┴───▼─┐                 │       isp
+   inputControl  │      │       │--------┬--------├────────►
+  ──────────────►│------│  ISP  │  ┌─────▼────┐   │     video
+                 │      │       │  |          |---├────────►
+                 │      └───────┘  │   Image  │   │     still
+   inputConfig   │                 │   Post-  │---├────────►
+  ──────────────►│-----------------|Processing│   │   preview
+                 │                 │          │---├────────►
+                 │                 └──────────┘   │
+                 └────────────────────────────────┘

 **Message types**

 - :code:`inputConfig` - :ref:`ImageManipConfig`
 - :code:`inputControl` - :ref:`CameraControl`
-- :code:`still` - :ref:`ImgFrame`
-- :code:`preview` - :ref:`ImgFrame`
-- :code:`video` - :ref:`ImgFrame`
-
-:code:`Preview` is RGB (or BGR planar/interleaved if configured) and is mostly suited for small size previews and to feed the image
-into :ref:`NeuralNetwork`. :code:`video` and :code:`still` are both NV12, so are suitable for bigger sizes. :code:`still` image gets created when
-a capture event is sent to the ColorCamera, so it's like taking a photo.
+- :code:`raw` - :ref:`ImgFrame` - RAW10 Bayer data. Demo code for unpacking it is available `here <https://github.com/luxonis/depthai-experiments/blob/3f1b2b2/gen2-color-isp-raw/main.py#L13-L32>`__
+- :code:`isp` - :ref:`ImgFrame` - YUV420 planar (same as YU12/IYUV/I420)
+- :code:`still` - :ref:`ImgFrame` - NV12, suitable for bigger size frames. The image gets created when a capture event is sent to the ColorCamera, so it's like taking a photo
+- :code:`preview` - :ref:`ImgFrame` - RGB (or BGR planar/interleaved if configured), mostly suited for small size previews and for feeding the image into a :ref:`NeuralNetwork`
+- :code:`video` - :ref:`ImgFrame` - NV12, suitable for bigger size frames
+
+The **ISP** (image signal processor) is used for Bayer transformation, demosaicing, noise reduction, and other image enhancements.
+It interacts with the 3A algorithms: **auto-focus**, **auto-exposure**, and **auto-white-balance**, which handle image sensor
+adjustments such as exposure time, sensitivity (ISO), and lens position (if the camera module has a motorized lens) at runtime.
+See `here <https://en.wikipedia.org/wiki/Image_processor>`__ for more information.
+
+**Image Post-Processing** converts YUV420 planar frames from the **ISP** into :code:`video`/:code:`preview`/:code:`still` frames.

 Usage
 #####
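
For context, a minimal Python sketch of consuming the :code:`preview` output described above; the 300x300 preview size, the stream name, and the queue settings are illustrative choices (using standard depthai Gen2 API calls), not values taken from this diff:

.. code-block:: python

   import cv2
   import depthai as dai

   pipeline = dai.Pipeline()

   # ColorCamera node; planar (non-interleaved) output is what NeuralNetwork nodes usually expect
   cam = pipeline.createColorCamera()
   cam.setPreviewSize(300, 300)
   cam.setInterleaved(False)

   # Expose the 'preview' output to the host over XLink
   xout = pipeline.createXLinkOut()
   xout.setStreamName("preview")
   cam.preview.link(xout.input)

   with dai.Device(pipeline) as device:
       queue = device.getOutputQueue(name="preview", maxSize=4, blocking=False)
       while True:
           frame = queue.get().getCvFrame()  # ImgFrame -> numpy array (BGR for preview)
           cv2.imshow("preview", frame)
           if cv2.waitKey(1) == ord("q"):
               break
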

docs/source/components/nodes/stereo_depth.rst

Lines changed: 46 additions & 10 deletions
@@ -53,26 +53,62 @@ Inputs and Outputs
 Disparity
 #########

-When calculating the disparity, each pixel in the disparity map gets assigned a confidence value 0..255 by the stereo matching algorithm, as:
-- 0 - maximum confidence that it holds a valid value
-- 255 - minimum confidence, so there are chances the value is incorrect
+When calculating the disparity, each pixel in the disparity map gets assigned a confidence value :code:`0..255` by the stereo matching algorithm,
+as:
+
+- :code:`0` - maximum confidence that it holds a valid value
+- :code:`255` - minimum confidence, so the value is more likely to be incorrect
+
 (this confidence score is effectively inverted compared to, say, NN confidence scores)

 For the final disparity map, a filtering is applied based on the confidence threshold value: the pixels that have their confidence score larger than
-the threshold get invalidated, i.e. their disparity value is set to zero.
+the threshold get invalidated, i.e. their disparity value is set to zero. You can set the confidence threshold with :code:`stereo.setConfidenceThreshold()`.
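
A minimal sketch of setting this threshold from the Python API; the node-creation calls are standard depthai usage, and the value 200 is only an illustrative middle ground, not a recommendation from this page:

.. code-block:: python

   import depthai as dai

   pipeline = dai.Pipeline()
   stereo = pipeline.createStereoDepth()

   # Pixels whose confidence score exceeds this value get their disparity invalidated (set to 0).
   # 255 keeps every pixel; lower values filter more aggressively.
   stereo.setConfidenceThreshold(200)

   # The left/right mono camera outputs still need to be linked to stereo.left / stereo.right.
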

 Current limitations
 ###################

-If one or more of the additional depth modes (lrcheck, extended, subpixel) are enabled, then:
+If one or more of the additional depth modes (:code:`lrcheck`, :code:`extended`, :code:`subpixel`) are enabled, then:
+
+- median filtering is disabled on device
+- with subpixel, if both :code:`depth` and :code:`disparity` are used, only :code:`depth` will have valid output
+
+Otherwise, the :code:`depth` output is **U16** (in millimeters) and median filtering is functional.
+
+Depth Modes
+###########
+
+Left-Right Check
+****************
+
+Left-Right Check, or LR-Check, is used to remove incorrectly calculated disparity pixels due to occlusions at object borders (the Left and Right camera views
+are slightly different). A conceptual sketch of the check follows the list below.
+
+#. Computes disparity by matching in the R->L direction
+#. Computes disparity by matching in the L->R direction
+#. Combines the results from 1 and 2, running on SHAVE cores: each pixel :code:`d = disparity_LR(x,y)` is compared with :code:`disparity_RL(x-d,y)`. If the difference is above a threshold, the pixel at :code:`(x,y)` in the final disparity map is invalidated.
+
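
The sketch below is a host-side NumPy illustration of step 3 only; on the device this runs on the SHAVE cores, and the threshold of 2 pixels is an assumed value for the example:

.. code-block:: python

   import numpy as np

   def lr_check(disparity_lr, disparity_rl, threshold=2):
       """Invalidate pixels whose L->R and R->L disparities disagree by more than `threshold`."""
       height, width = disparity_lr.shape
       result = disparity_lr.copy()
       for y in range(height):
           for x in range(width):
               d = int(disparity_lr[y, x])
               x_matched = x - d
               # Out-of-bounds match or inconsistent disparity -> invalidate (set to 0)
               if x_matched < 0 or abs(d - int(disparity_rl[y, x_matched])) > threshold:
                   result[y, x] = 0
       return result
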
+Extended Disparity
+******************
+
+The extended disparity mode allows detecting objects at a closer distance for the given baseline, by increasing the maximum disparity search from 96 to 191.
+This roughly cuts the minimum perceivable distance in half, since the minimum distance becomes :code:`focal_length * base_line_dist / 190` instead
+of :code:`focal_length * base_line_dist / 95` (a worked example follows the list below).
+
+#. Computes disparity on the original size images (e.g. 1280x720)
+#. Computes disparity on 2x downscaled images (e.g. 640x360)
+#. Combines the two disparity levels on SHAVE cores, effectively covering a total disparity range of 191 pixels (in relation to the original resolution)
+
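
As the worked example referenced above, with an assumed focal length (in pixels) and baseline; these are illustrative, OAK-D-like numbers rather than values from this page:

.. code-block:: python

   focal_length_px = 882.5  # assumed focal length in pixels (illustrative)
   baseline_cm = 7.5        # assumed stereo baseline in cm (illustrative)

   min_distance_default = focal_length_px * baseline_cm / 95    # ~69.7 cm
   min_distance_extended = focal_length_px * baseline_cm / 190  # ~34.8 cm
   print(min_distance_default, min_distance_extended)
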
+Subpixel Disparity
+******************

-- :code:`depth` output is FP16.
-- median filtering is disabled on device.
-- with subpixel, either depth or disparity has valid data.
+Subpixel mode improves the precision and is especially useful for long range measurements. It also helps with estimating surface normals more accurately.

-Otherwise, depth output is U16 (in milimeters) and median is functional.
+Besides the integer disparity output, the Stereo engine is programmed to dump the cost volume to memory, that is, 96 levels (disparities) per pixel.
+Software interpolation is then done on SHAVE cores, producing a final disparity with 5 fractional bits. This results in significantly more granular depth
+steps (32 additional steps between the integer-pixel depth steps) and, theoretically, longer-distance depth perception, as the maximum depth
+is no longer limited by a feature being a full integer pixel-step apart, but rather 1/32 of a pixel.

-Like on Gen1, either :code:`depth` or :code:`disparity` has valid data.
+For a comparison of normal disparity vs. subpixel disparity images, click `here <https://github.com/luxonis/depthai/issues/184>`__.
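
To illustrate what the 5 fractional bits mean when converting a disparity value to depth, here is a small sketch; the focal length, baseline, and raw disparity value are assumed for the example, and the division by 32 simply follows from the 5-fractional-bit description above:

.. code-block:: python

   FRACTIONAL_BITS = 5
   SUBPIXEL_SCALE = 1 << FRACTIONAL_BITS  # 32 sub-steps per integer disparity step

   focal_length_px = 882.5  # assumed, illustrative
   baseline_cm = 7.5        # assumed, illustrative

   raw_disparity = 1234                                       # example raw subpixel disparity value
   disparity_px = raw_disparity / SUBPIXEL_SCALE              # 38.5625 pixels
   depth_cm = focal_length_px * baseline_cm / disparity_px    # ~171.6 cm
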

 Usage
 #####

docs/source/install.rst

Lines changed: 145 additions & 17 deletions
@@ -19,10 +19,11 @@ We keep up-to-date, pre-compiled, libraries for the following platforms. Note t
 ======================== ============================================== ================================================================================
 Platform                 Instructions                                   Support
 ======================== ============================================== ================================================================================
-Windows 10               :ref:`Platform dependencies <Windows>`         `Discord <https://discord.com/channels/790680891252932659/798284448323731456>`__
+Windows 10               :ref:`Platform dependencies <Windows 10>`      `Discord <https://discord.com/channels/790680891252932659/798284448323731456>`__
 macOS                    :ref:`Platform dependencies <macOS>`           `Discord <https://discord.com/channels/790680891252932659/798283911989690368>`__
 Ubuntu & Jetson/Xavier   :ref:`Platform dependencies <Ubuntu>`          `Discord <https://discord.com/channels/790680891252932659/798302162160451594>`__
 Raspberry Pi OS          :ref:`Platform dependencies <Raspberry Pi OS>` `Discord <https://discord.com/channels/790680891252932659/798302708070350859>`__
+Jetson Nano              :ref:`Platform dependencies <Jetson Nano>`     `Discord <https://discord.com/channels/790680891252932659/795742008119132250>`__
 ======================== ============================================== ================================================================================

 And the following platforms are also supported by a combination of the community and Luxonis.
@@ -34,8 +35,9 @@ Fedora `Di
 Robot Operating System                                                       `Discord <https://discord.com/channels/790680891252932659/795749142793420861>`__
 Windows 7              :ref:`WinUSB driver <Windows 7>`                      `Discord <https://discord.com/channels/790680891252932659/798284448323731456>`__
 Docker                 :ref:`Pull and run official images <Docker>`          `Discord <https://discord.com/channels/790680891252932659/796794747275837520>`__
-Kernel Virtual Machine :ref:`Run on KVM <KVM>`                               `Discord <https://discord.com/channels/790680891252932659/819663531003346994>`__
+Kernel Virtual Machine :ref:`Run on KVM <Kernel Virtual Machine>`            `Discord <https://discord.com/channels/790680891252932659/819663531003346994>`__
 VMware                 :ref:`Run on VMware <vmware>`                         `Discord <https://discord.com/channels/790680891252932659/819663531003346994>`__
+Virtual Box            :ref:`Run on Virtual Box <Virtual Box>`               `Discord <https://discord.com/channels/790680891252932659/819663531003346994>`__
 ====================== ===================================================== ================================================================================

 macOS
@@ -58,13 +60,6 @@ following:

 See the `Video preview window fails to appear on macOS <https://discuss.luxonis.com/d/95-video-preview-window-fails-to-appear-on-macos>`_ thread on our forum for more information.

-Raspberry Pi OS
-***************
-
-.. code-block:: bash
-
-   sudo curl -fL http://docs.luxonis.com/_static/install_dependencies.sh | bash
-
 Ubuntu
 ******

@@ -83,13 +78,111 @@ Note! If opencv fails with illegal instruction after installing from PyPi, add:
    source ~/.bashrc


+Raspberry Pi OS
+***************
+
+.. code-block:: bash
+
+   sudo curl -fL http://docs.luxonis.com/_static/install_dependencies.sh | bash
+
+
+Jetson Nano
+***********
+
+To install DepthAI on the Jetson Nano, perform the following steps after completing a fresh install and setup. On the first log-in,
+**do not** immediately run updates.
+
+This first step is optional: go to the *Software* (App Store) and delete the apps or software that you probably will not use.
+
+Open a terminal window and run the following commands:
+
+.. code-block:: bash
+
+   sudo apt update && sudo apt upgrade
+   sudo reboot now
+
+Change the size of your SWAP. These instructions come from the `Getting Started with AI on Jetson Nano <https://developer.nvidia.com/embedded/learn/jetson-ai-certification-programs>`__ course from NVIDIA:
+
+.. code-block:: bash
+
+   # Disable ZRAM:
+   sudo systemctl disable nvzramconfig
+   # Create a 4GB swap file
+   sudo fallocate -l 4G /mnt/4GB.swap
+   sudo chmod 600 /mnt/4GB.swap
+   sudo mkswap /mnt/4GB.swap
+
+If you have an issue with the final command, you can try the following:
+
+.. code-block:: bash

+   sudo vi /etc/fstab
+
+   # Add this line at the bottom of the file
+   /mnt/4GB.swap swap swap defaults 0 0
+
+   # Reboot
+   sudo reboot now
+
+The next step is to install :code:`pip` and :code:`python3`:
+
+.. code-block:: bash
+
+   sudo -H apt install -y python3-pip
+
+After that, install and set up a virtual environment:
+
+.. code-block:: bash
+
+   sudo -H pip3 install virtualenv virtualenvwrapper
+
+Add the following lines to your :code:`~/.bashrc`:
+
+.. code-block:: bash
+
+   sudo vi ~/.bashrc
+
+   # Virtual Env Wrapper Configuration
+   export WORKON_HOME=$HOME/.virtualenvs
+   export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
+   source /usr/local/bin/virtualenvwrapper.sh
+
+Save and reload the script by running the command :code:`source ~/.bashrc`. Then create a virtual environment (in this example it's called :code:`depthAI`).
+
+.. code-block:: bash
+
+   mkvirtualenv depthAI -p python3
+
+**Note!** Before installing :code:`depthai`, make sure you're in the virtual environment.
+
+.. code-block:: bash
+
+   # Download and install the dependency package
+   sudo wget -qO- http://docs.luxonis.com/_static/install_dependencies.sh | bash
+
+   # Clone the GitHub repository
+   git clone https://github.com/luxonis/depthai-python.git
+   cd depthai-python
+
+The last step is to append the following line to :code:`~/.bashrc`:
+
+.. code-block:: bash
+
+   echo "export OPENBLAS_CORETYPE=ARMV8" >> ~/.bashrc
+
+Navigate to the examples folder inside :code:`depthai-python`, run :code:`python install_requirements.py`, and then run :code:`python 01_rgb_preview.py`.
+
+Solution provided by `iacisme <https://github.com/iacisme>`__ via our `Discord <https://discord.com/channels/790680891252932659/795742008119132250>`__ channel.
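
A quick, generic way to confirm the setup from inside the :code:`depthAI` virtual environment (not part of the original instructions) is to import the library and print its version:

.. code-block:: python

   # Run inside the 'depthAI' virtual environment created above
   import cv2
   import depthai as dai

   print("depthai version:", dai.__version__)
   print("opencv version:", cv2.__version__)
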
 openSUSE
 ********

 For openSUSE, there is `an official article <https://en.opensuse.org/SDB:Install_OAK_AI_Kit>`__ describing how to install the OAK device on the openSUSE platform.

-Windows
-*******
+Windows 10
+**********

 We recommend using the Chocolatey package manager to install DepthAI's
 dependencies on Windows. Chocolatey is very similar to Homebrew for macOS.
@@ -112,7 +205,7 @@ use it to install DepthAI's dependencies do the following:
    choco install cmake git python pycharm-community -y

 Windows 7
----------
+*********

 Although we do not officially support Windows 7, members of the community `have
 had success <https://discuss.luxonis.com/d/105-run-on-win7-sp1-x64-manual-instal-usb-driver>`__ manually installing WinUSB using `Zadig
@@ -143,13 +236,12 @@ Run the :code:`01_rgb_preview.py` example inside a Docker container on a Linux h
       luxonis/depthai-library:latest \
       python3 /depthai-python/examples/01_rgb_preview.py

-To allow the container to update X11 you may need to run :code:`xhost local:root` on
-the host.
+To allow the container to update X11 you may need to run :code:`xhost local:root` on the host.

-KVM
-***
+Kernel Virtual Machine
+**********************

-To access the OAK-D camera in the `Kernel Virtual Machine <https://www.linux-kvm.org/page/Main_Page>`__, there is a need to attach and detach USB
+To access the OAK-D camera in the `Kernel Virtual Machine <https://www.linux-kvm.org/page/Main_Page>`__, there is a need to attach and detach USB
 devices on the fly when the host machine detects changes in the USB bus.

 The OAK-D camera changes its USB device type when it is used by the DepthAI API. This happens in the background when the camera is used natively.
@@ -233,6 +325,42 @@ the DepthAI example again inside the VM. Choose to route to VM and select to *no
 watchdog could get triggered if the host doesn't start communication in a few seconds). You may need to repeat running the script a few times, until all gets
 set properly for VMware.

+Virtual Box
+***********
+
+If you want to use VirtualBox to run the DepthAI source code, please make sure that you allow the VM to access the USB devices. Also, be aware that
+by default it supports only USB 1.1 devices, while DepthAI operates in two stages:
+
+#. Showing up when plugged in. We use this endpoint to load the firmware onto the device, which is a usb-boot technique. This device is USB2.
+#. Running the actual code. This device shows up after USB booting and is USB3.
+
+In order to support both DepthAI modes, you need to download and install the `Oracle VM VirtualBox Extension Pack <https://www.virtualbox.org/wiki/Downloads>`__. Once this is installed, enable the USB3 (xHCI) Controller in the USB settings.
+
+Once this is done, you'll need to route the Myriad as a USB device from the host to the VBox. This is the filter for depthai before it has booted, which is
+at that point a USB2 device:
+
+.. image:: https://user-images.githubusercontent.com/32992551/105070455-8d4d6b00-5a40-11eb-9bc6-19b164a55b4c.png
+  :alt: Routing the not-yet-booted depthai to the VirtualBox.
+
+The last step is to add the USB Intel Loopback device. The depthai device boots its firmware over USB, and after it has booted, it shows up as a new device.
+
+This device only shows up when the depthai/OAK is trying to reconnect (during runtime, so right after running a pipeline on depthai, such as :code:`python3 depthai_demo.py`).
+
+It might take a few tries to get this loopback device to show up and be added, as you need to do this while depthai is trying to connect after a pipeline has been built (at which point it has booted its internal firmware over USB2).
+
+To enable it only once, you can select the loopback device here (after the pipeline has been started):
+
+.. image:: https://user-images.githubusercontent.com/32992551/105112208-c527d300-5a7f-11eb-96b4-d14bcf974313.png
+  :alt: Find the loopback device right after you tell depthai to start the pipeline, and select it.
+
+To permanently enable this pass-through to VirtualBox, enable it in the setting below:
+
+.. image:: https://user-images.githubusercontent.com/32992551/105070474-93dbe280-5a40-11eb-94b3-6557cd83fe1f.png
+  :alt: Making the USB Loopback Device for depthai/OAK, to allow the booted device to communicate in VirtualBox.
+
+For each additional depthai/OAK device you would like to pass through, repeat just this last loopback settings step (as each unit will have its own unique ID).
+
 Install from PyPI
 #################
