docs/source/components/device.rst (+49, -6)

Device
======

Device represents an `OAK camera <https://docs.luxonis.com/projects/hardware/en/latest/>`__. All of our devices contain a powerful vision processing unit
(**VPU**) called `Myriad X <https://www.intel.com/content/www/us/en/products/details/processors/movidius-vpu.html>`__.
The VPU is optimized for running AI inference algorithms and for processing sensory inputs (e.g. calculating stereo disparity from two cameras).

Device API
##########

The :code:`Device` object represents an OAK device. When starting the device, you have to upload a :ref:`Pipeline` to it, which will then be executed on the VPU.
When you create the device in code, the firmware is uploaded together with the pipeline and other assets (such as NN blobs).

.. code-block:: python

    # ... (pipeline definition not shown in this diff)

    # Upload the pipeline to the device
    with depthai.Device(pipeline) as device:
        # Print Myriad X Id (MxID), USB speed, and available cameras on the device
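        # Hedged completion of the truncated example, using standard DepthAI
        # Device methods (assumed, not verbatim from the original page):
        print('MxID:', device.getMxId())
        print('USB speed:', device.getUsbSpeed())
        print('Connected cameras:', device.getConnectedCameras())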

When obtaining the output queue (example code below), the :code:`maxSize` and :code:`blocking` arguments should be set depending on how
the messages are intended to be used, where :code:`name` is the name of the outputting stream.
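
A minimal sketch of such a call; the stream name :code:`"rgb"` and the surrounding :code:`device` object are assumed placeholders:

.. code-block:: python

    # Non-blocking queue that keeps only the 4 newest messages; when it is full,
    # the oldest message is discarded instead of stalling the pipeline
    q = device.getOutputQueue(name="rgb", maxSize=4, blocking=False)
    frame = q.get()  # Blocks until a message arrives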

Since queues are on the host computer, memory (RAM) usually isn't that scarce. But if you are using a small SBC like the RPi Zero, which has only 0.5 GB of RAM,

docs/source/components/messages.rst (+46, -4)

Messages
========

Messages are sent between linked :ref:`Nodes`. The only way nodes communicate with each other is by sending messages from one to another. In the
table of contents (left side of the page), **all DepthAI messages are listed** under the :code:`Messages` entry. You can click on them to find out more.

.. rubric:: Creating a message in Script node

A DepthAI message can be created either on the device, automatically by a node or manually inside the :ref:`Script` node. In the example below,
the code is taken from the :ref:`Script camera control` example, where a :ref:`CameraControl` message is created inside the Script node every second
and sent to the :ref:`ColorCamera`'s input (:code:`cam.inputControl`).

.. code-block:: python

    script = pipeline.create(dai.node.Script)
    script.setScript("""
        # Create a message
        ctrl = CameraControl()
        # Configure the message
        ctrl.setCaptureStill(True)
        # Send the message from the Script node
        node.io['out'].send(ctrl)
    """)

.. rubric:: Creating a message on a Host

A message can also be created on a host computer and sent to the device via the :ref:`XLinkIn` node. The :ref:`RGB Camera Control`, :ref:`Video & MobilenetSSD`
and :ref:`Stereo Depth from host` code examples demonstrate this functionality. In the example below, we have removed all the code
that isn't relevant, to showcase how a message can be created on the host and sent to the device via XLink.

.. code-block:: python

    # Create XLinkIn node and configure it
    xin = pipeline.create(dai.node.XLinkIn)
    xin.setStreamName("frameIn")
    xin.out.link(nn.input) # Connect it to NeuralNetwork's input

    with dai.Device(pipeline) as device:
        # Create an input queue, which allows you to send messages to the device
        qIn = device.getInputQueue("frameIn")
        # Create an ImgFrame message
        img = dai.ImgFrame()
        img.setData(frame)
        img.setWidth(300)
        img.setHeight(300)
        qIn.send(img) # Send the message to the device

.. rubric:: Creating a message on an external MCU

A message can also be created on an external MCU and sent to the device via the :ref:`SPIIn` node. A demo of such functionality is the

docs/source/components/nodes/script.rst (+2, -2)

Script node allows users to run **custom Python scripts on the device**. Due to computational resource constraints,
the Script node shouldn't be used for heavy computing (e.g. image manipulation/CV), but for managing the flow
of the pipeline (business logic). Example use cases would be controlling nodes like :ref:`ImageManip`, :ref:`ColorCamera`, :ref:`SpatialLocationCalculator`,
decoding :ref:`NeuralNetwork` results, or interfacing with GPIOs. For **debugging scripts**, we suggest :ref:`Script node logging <script_logging>`.
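
A minimal sketch of such flow-control logic, assuming hypothetical I/O names :code:`'in'` and :code:`'out'` that would be linked to other nodes:

.. code-block:: python

    script = pipeline.create(dai.node.Script)
    script.setScript("""
        while True:
            frame = node.io['in'].get()  # Wait for a message from a linked node
            node.warn(f'Frame seq: {frame.getSequenceNum()}')  # Script node logging
            node.io['out'].send(frame)  # Forward the message downstream
    """)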

**Left-Right Check** or LR-Check is used to remove incorrectly calculated disparity pixels due to occlusions at object borders (Left and Right camera views
are slightly different).

#. Computes disparity by matching in the R->L direction
#. Computes disparity by matching in the L->R direction
#. Combines the results from 1 and 2, running on a Shave core: each pixel :code:`d = disparity_LR(x,y)` is compared with :code:`disparity_RL(x-d,y)`. If the difference is above a threshold, the pixel at :code:`(x,y)` in the final disparity map is invalidated.

You can use the :code:`debugDispLrCheckIt1` and :code:`debugDispLrCheckIt2` debug outputs for debugging/fine-tuning purposes.
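
A minimal sketch of enabling LR-Check on the :code:`StereoDepth` node (the :code:`stereo` and :code:`pipeline` variables are assumed context):

.. code-block:: python

    stereo = pipeline.create(dai.node.StereoDepth)
    stereo.setLeftRightCheck(True)  # Invalidate disparity pixels that fail the LR consistency check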

.. tab:: Extended Disparity

    **Extended disparity mode** allows detecting closer-distance objects for the given baseline. It increases the maximum disparity search from 96 to 191, meaning the range is now **[0..190]**.
    This cuts the minimum perceivable distance in half, given that the minimum distance is now :code:`focal_length * base_line_dist / 190` instead
    of :code:`focal_length * base_line_dist / 95`.

    #. Computes disparity on the original size images (e.g. 1280x720)
    #. Computes disparity on 2x downscaled images (e.g. 640x360)
    #. Combines the two level disparities on a Shave core, effectively covering a total disparity range of 191 pixels (in relation to the original resolution)

    You can use the :code:`debugExtDispLrCheckIt1` and :code:`debugExtDispLrCheckIt2` debug outputs for debugging/fine-tuning purposes.
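
    A worked sketch of the min-distance arithmetic above, using assumed OAK-D-like values (7.5 cm baseline, ~882.5 px focal length at 1280 px image width):

    .. code-block:: python

        focal_length_px = 882.5  # Assumed: 1280 * 0.5 / tan(71.9 deg / 2)
        baseline_m = 0.075       # Assumed: 7.5 cm stereo baseline
        print(focal_length_px * baseline_m / 95)   # ~0.70 m minimum distance, normal mode
        print(focal_length_px * baseline_m / 190)  # ~0.35 m minimum distance, extended mode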

.. tab:: Subpixel Disparity

    **Subpixel mode** improves the precision and is especially useful for long-range measurements. It also helps with better estimating surface normals.

    Besides the integer disparity output, the Stereo engine is programmed to dump the cost volume to memory, that is, 96 levels (disparities) per pixel.
    Software interpolation is then done on a Shave core, producing a final disparity with 3 fractional bits. This results in significantly more granular depth
    steps (8 additional steps between the integer-pixel depth steps) and, theoretically, longer-distance depth viewing, as the maximum depth
    is no longer limited by a feature being a full integer pixel-step apart, but rather 1/8 of a pixel. In this mode, stereo cameras perform :code:`94 depth steps * 8 subpixel depth steps + 2 (min/max values) = 754 depth steps`.
    Note that Subpixel and Extended Disparity are not yet supported simultaneously.

For comparison of normal disparity vs. subpixel disparity images, click `here <https://github.com/luxonis/depthai/issues/184>`__.
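
A hedged sketch (assumed standard :code:`StereoDepth` API, not taken verbatim from this page) of enabling these modes and decoding the fractional disparity:

.. code-block:: python

    stereo = pipeline.create(dai.node.StereoDepth)
    stereo.setLeftRightCheck(True)     # Remove incorrectly matched (occluded) pixels
    stereo.setExtendedDisparity(False) # Extended disparity is not yet combinable with subpixel
    stereo.setSubpixel(True)           # 3 fractional bits -> 8 sub-steps per integer disparity

    # With subpixel enabled, raw disparity values carry 3 fractional bits,
    # so divide by 2**3 = 8 to recover the fractional disparity:
    raw_disparity = 760                   # Hypothetical raw value from a disparity frame
    real_disparity = raw_disparity / 8.0  # = 95.0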

Current limitations
###################

- Median filtering is disabled when subpixel mode is set to 4 or 5 bits.