Commit e511f96

Merge pull request #430 from luxonis/remove_gen1_from_docs
Docs refactor
2 parents 3cbc455 + 0593d4a commit e511f96

9 files changed: +139 -176 lines changed
(binary file changed, 40.4 KB; preview not shown)

docs/source/components/device.rst

Lines changed: 49 additions & 6 deletions
@@ -3,8 +3,15 @@
 Device
 ======
 
-Device is a DepthAI `module <https://docs.luxonis.com/en/latest/pages/products/>`__. After the :ref:`Pipeline` is defined, it can be uploaded to the device.
-When you create the device in the code, firmware is uploaded together with the pipeline.
+Device represents an `OAK camera <https://docs.luxonis.com/projects/hardware/en/latest/>`__. On all of our devices there's a powerful vision processing unit
+(**VPU**), called `Myriad X <https://www.intel.com/content/www/us/en/products/details/processors/movidius-vpu.html>`__.
+The VPU is optimized for performing AI inference algorithms and for processing sensory inputs (eg. calculating stereo disparity from two cameras).
+
+Device API
+##########
+
+The :code:`Device` object represents an OAK device. When starting the device, you have to upload a :ref:`Pipeline` to it, which will get executed on the VPU.
+When you create the device in the code, firmware is uploaded together with the pipeline and other assets (such as NN blobs).
 
 .. code-block:: python
 
@@ -14,8 +21,10 @@ When you create the device in the code, firmware is uploaded together with the p
 
     # Upload the pipeline to the device
     with depthai.Device(pipeline) as device:
-        # Start the pipeline that is now on the device
-        device.startPipeline()
+        # Print Myriad X Id (MxID), USB speed, and available cameras on the device
+        print('MxId:',device.getDeviceInfo().getMxId())
+        print('USB speed:',device.getUsbSpeed())
+        print('Connected cameras:',device.getConnectedCameras())
 
         # Input queue, to send a message from the host to the device (you can receive the message on the device with XLinkIn)
         input_q = device.getInputQueue("input_name", maxSize=4, blocking=False)
@@ -24,7 +33,7 @@ When you create the device in the code, firmware is uploaded together with the p
 
         output_q = device.getOutputQueue("output_name", maxSize=4, blocking=False)
 
         while True:
-            # Get the message from the queue
+            # Get a message that came from the queue
             output_q.get() # Or output_q.tryGet() for non-blocking
 
             # Send a message to the device
@@ -40,7 +49,7 @@ If you want to use multiple devices on a host, check :ref:`Multiple DepthAI per
 Device queues
 #############
 
-After initializing the device, one has to initialize the input/output queues as well.
+After initializing the device, one has to initialize the input/output queues as well. These queues will be located on the host computer (in RAM).
 
 .. code-block:: python
 
@@ -62,6 +71,40 @@ flags determine the behavior of the queue in this case. You can set these flags
     queue.setMaxSize(10)
     queue.setBlocking(True)
 
+Specifying arguments for :code:`getOutputQueue` method
+######################################################
+
+When obtaining the output queue (example code below), the :code:`maxSize` and :code:`blocking` arguments should be set depending on how
+the messages are intended to be used, where :code:`name` is the name of the outputting stream.
+
+Since queues are on the host computer, memory (RAM) usually isn't that scarce. But if you are using a small SBC like the RPi Zero, where there's only 0.5GB of RAM,
+you might need to specify the max queue size as well.
+
+.. code-block:: python
+
+    with dai.Device(pipeline) as device:
+        queueLeft = device.getOutputQueue(name="manip_left", maxSize=8, blocking=False)
+
+If only the latest results are relevant and previous ones do not matter, one can set :code:`maxSize = 1` and :code:`blocking = False`.
+That way only the latest message will be kept (:code:`maxSize = 1`) and it might also be overwritten in order to avoid waiting for
+the host to process every frame, thus providing only the latest data (:code:`blocking = False`).
+However, if there are a lot of dropped/overwritten frames because the host isn't able to process them fast enough
+(eg. a single-threaded environment which does some heavy computing), the :code:`maxSize` could be set to a higher
+number, which would increase the queue size and reduce the number of dropped frames.
+Specifically, at 30 FPS, a new frame is received every ~33ms, so if your host is able to process a frame in that time, the :code:`maxSize`
+could be set to :code:`1`, otherwise to :code:`2` for processing times up to 66ms, and so on.
+
+If, however, there is a need to wait for some interval between retrieving messages, one could specify that differently.
+An example would be checking the results of :code:`DetectionNetwork` for the last 1 second based on some other event,
+in which case one could set :code:`maxSize = 30` and :code:`blocking = False`
+(assuming :code:`DetectionNetwork` produces messages at ~30 FPS).
+
+The :code:`blocking = True` option is mostly used when the correct order of messages is needed.
+Two examples would be:
+
+- matching passthrough frames and their original frames (eg. full 4K frames and the smaller preview frames that went into the NN),
+- encoding (most prominently H264/H265, as frame drops can lead to artifacts).
+
 Blocking behaviour
 ******************

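To make the queue-configuration guidance above concrete, below is a minimal host-side sketch. The stream names (:code:`preview` and :code:`h265`) and the pipeline contents are assumptions for illustration; they are not defined in the section above.

.. code-block:: python

    import depthai as dai

    pipeline = dai.Pipeline()
    # ... create nodes here and link them to XLinkOut nodes whose stream
    # names are "preview" and "h265" (assumed names, for illustration only)

    with dai.Device(pipeline) as device:
        # Only the latest frame matters: keep one message and let it be overwritten
        q_preview = device.getOutputQueue(name="preview", maxSize=1, blocking=False)

        # Order matters (eg. H264/H265 encoding): block the pipeline instead of dropping messages
        q_encoded = device.getOutputQueue(name="h265", maxSize=30, blocking=True)

        while True:
            frame = q_preview.get()      # blocks until a message arrives
            packet = q_encoded.tryGet()  # returns None when the queue is empty
            if packet is not None:
                pass  # eg. append packet.getData() to a video file
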
docs/source/components/messages.rst

Lines changed: 46 additions & 4 deletions
@@ -3,12 +3,54 @@
 Messages
 ========
 
-Messages are sent between linked :ref:`Nodes`. The only way nodes communicate with each other is by sending messages from one to another.
+Messages are sent between linked :ref:`Nodes`. The only way nodes communicate with each other is by sending messages from one to another. On the
+table of contents (left side of the page) **all DepthAI messages are listed** under the :code:`Messages` entry. You can click on them to find out more.
 
-If we have :code:`Node1` whose output is linked with :code:`Node2`'s input, a **message** is created in the :code:`Node1`,
-sent out of the :code:`Node1`'s output and to the :code:`Node2`'s input.
+.. rubric:: Creating a message in Script node
 
-On the table of contents (left side of the page) all messages are listed under the :code:`Messages` entry. You can click on them to find out more.
+A DepthAI message can be created either on the device, by a node automatically or manually inside the :ref:`Script` node. In the example below,
+the code is taken from the :ref:`Script camera control` example, where a :ref:`CameraControl` message is created inside the Script node every second
+and sent to the :ref:`ColorCamera`'s input (:code:`cam.inputControl`).
+
+.. code-block:: python
+
+    script = pipeline.create(dai.node.Script)
+    script.setScript("""
+        # Create a message
+        ctrl = CameraControl()
+        # Configure the message
+        ctrl.setCaptureStill(True)
+        # Send the message from the Script node
+        node.io['out'].send(ctrl)
+    """)
+
+.. rubric:: Creating a message on a Host
+
+A message can also be created on a host computer and sent to the device via the :ref:`XLinkIn` node. The :ref:`RGB Camera Control`, :ref:`Video & MobilenetSSD`
+and :ref:`Stereo Depth from host` code examples demonstrate this functionality. In the example below, we have removed all the code
+that isn't relevant, to showcase how a message can be created on the host and sent to the device via XLink.
+
+.. code-block:: python
+
+    # Create XLinkIn node and configure it
+    xin = pipeline.create(dai.node.XLinkIn)
+    xin.setStreamName("frameIn")
+    xin.out.link(nn.input) # Connect it to NeuralNetwork's input
+
+    with dai.Device(pipeline) as device:
+        # Create input queue, which allows you to send messages to the device
+        qIn = device.getInputQueue("frameIn")
+        # Create ImgFrame message
+        img = dai.ImgFrame()
+        img.setData(frame)
+        img.setWidth(300)
+        img.setHeight(300)
+        qIn.send(img) # Send the message to the device
+
+.. rubric:: Creating a message on an external MCU
+
+A message can also be created on an external MCU and sent to the device via the :ref:`SPIIn` node. A demo of such functionality is the
+`spi_in_landmark <https://github.com/luxonis/esp32-spi-message-demo/tree/main/spi_in_landmark>`__ example.
 
 .. toctree::
    :maxdepth: 0

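As a complement to the Script-node snippet above, the sketch below shows how the Script output can be linked to the camera's control input on the host side. It is a condensed sketch loosely following the referenced :ref:`Script camera control` example; the variable names are assumptions, not part of the diff above.

.. code-block:: python

    import depthai as dai

    pipeline = dai.Pipeline()

    # ColorCamera whose control input will receive the CameraControl messages
    cam = pipeline.create(dai.node.ColorCamera)

    # Script node that emits a CameraControl message every second
    script = pipeline.create(dai.node.Script)
    script.setScript("""
        import time
        ctrl = CameraControl()
        ctrl.setCaptureStill(True)
        while True:
            time.sleep(1)
            node.io['out'].send(ctrl)
    """)

    # Link the Script node's output to the camera's control input
    script.outputs['out'].link(cam.inputControl)
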
docs/source/components/pipeline.rst

Lines changed: 11 additions & 43 deletions
@@ -3,36 +3,28 @@
 Pipeline
 ========
 
-Pipeline is a collection of :ref:`nodes <Nodes>` and links between them. This flow provides extensive flexibility that users get for their
-DepthAI device.
-
+Pipeline is a collection of :ref:`nodes <Nodes>` and links between them. This flow provides extensive flexibility that users get for their
+OAK device. When the pipeline object is passed to the :ref:`Device` object, the pipeline gets serialized to JSON and sent to the OAK device via XLink.
 
 Pipeline first steps
 ####################
 
-To get DepthAI up and running, one has to define a pipeline, populate it with nodes, configure the nodes and link them together. After that, the pipeline
+To get DepthAI up and running, you have to create a pipeline, populate it with nodes, configure the nodes and link them together. After that, the pipeline
 can be loaded onto the :ref:`Device` and be started.
 
 .. code-block:: python
 
     pipeline = depthai.Pipeline()
 
+    # If required, specify OpenVINO version
+    pipeline.setOpenVINOVersion(depthai.OpenVINO.Version.VERSION_2021_4)
+
     # Create nodes, configure them and link them together
 
     # Upload the pipeline to the device
     with depthai.Device(pipeline) as device:
-        # Start the pipeline that is now on the device
-        device.startPipeline()
-
         # Set input/output queues to configure device/host communication through the XLink...
 
-Using multiple devices
-######################
-
-If user has multiple DepthAI devices, each device can run a separate pipeline or the same pipeline
-(`demo here <https://github.com/luxonis/depthai-experiments/tree/master/gen2-multiple-devices>`__). To use different pipeline for each device,
-you can create multiple pipelines and pass the desired pipeline to the desired device on initialization.
-
 Specifying OpenVINO version
 ###########################
 
@@ -45,36 +37,12 @@ The reason behind this is that OpenVINO doesn't provide version inside the blob.
     # Set the correct version:
    pipeline.setOpenVINOVersion(depthai.OpenVINO.Version.VERSION_2021_4)
 
-Specifying arguments for :code:`getOutputQueue` method
-######################################################
-
-When obtaining the output queue (example code below), the :code:`maxSize` and :code:`blocking` arguments should be set depending on how
-the messages are intended to be used, where :code:`name` is the name of the outputting stream.
-
-.. code-block:: python
-
-    with dai.Device(pipeline) as device:
-        queueLeft = device.getOutputQueue(name="manip_left", maxSize=8, blocking=False)
-
-If only the latest results are relevant and previous do not matter, one can set :code:`maxSize = 1` and :code:`blocking = False`.
-That way only latest message will be kept (:code:`maxSize = 1`) and it might also be overwritten in order to avoid waiting for
-the host to process every frame, thus providing only the latest data (:code:`blocking = False`).
-However, if there are a lot of dropped/overwritten frames, because the host isn't able to process them fast enough
-(eg. one-threaded environment which does some heavy computing), the :code:`maxSize` could be set to a higher
-number, which would increase the queue size and reduce the number of dropped frames.
-Specifically, at 30 FPS, a new frame is recieved every ~33ms, so if your host is able to process a frame in that time, the :code:`maxSize`
-should be set to :code:`1`, otherwise to :code:`2` for processing times up to 66ms and so on.
-
-If, however, there is a need to have some intervals of wait between retrieving messages, one could specify that differently.
-An example would be checking the results of :code:`DetectionNetwork` for the last 1 second based on some other event,
-in which case one could set :code:`maxSize = 30` and :code:`blocking = False`
-(assuming :code:`DetectionNetwork` produces messages at ~30FPS).
-
-The :code:`blocking = True` option is mostly used when correct order of messages is needed.
-Two examples would be:
+Using multiple devices
+######################
 
-- matching passthrough frames and their original frames (eg. full 4K frames and smaller preview frames that went into NN),
-- encoding (most prominently H264/H265 as frame drops can lead to artifacts).
+If a user has multiple DepthAI devices, each device can run a different pipeline or the same pipeline
+(`demo here <https://github.com/luxonis/depthai-experiments/tree/master/gen2-multiple-devices>`__). To use a different pipeline for each device,
+you can create multiple pipelines and pass the desired pipeline to the desired device on initialization.
 
 How to place it
 ###############

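A short sketch of the multiple-devices flow described above, assuming every device should run the same pipeline object; the :code:`devices` list is only illustrative bookkeeping.

.. code-block:: python

    import depthai as dai

    pipeline = dai.Pipeline()
    # ... populate the pipeline with nodes and links

    devices = []
    # Enumerate all OAK devices visible to the host and start the same pipeline on each
    for device_info in dai.Device.getAllAvailableDevices():
        device = dai.Device(pipeline, device_info)
        print('Connected to', device_info.getMxId())
        devices.append(device)  # keep the handle alive and poll its queues later
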
docs/source/index.rst

Lines changed: 15 additions & 36 deletions
@@ -3,52 +3,32 @@
 You can adapt this file completely to your liking, but it should at least
 contain the root `toctree` directive.
 
-Welcome to DepthAI Gen2 API Documentation
-=========================================
+DepthAI API Documentation
+=========================
 
 .. image:: https://github.com/luxonis/depthai-python/workflows/Python%20Wheel%20CI/badge.svg?branch=gen2_develop
    :target: https://github.com/luxonis/depthai-python/actions?query=workflow%3A%22Python+Wheel+CI%22+branch%3A%22gen2_develop%22
 
-On this page you can find the details regarding the Gen2 DepthAI API that will allow you to interact with the DepthAI device.
-We support both :ref:`Python API <Python API Reference>` and :ref:`C++ API <C++ API Reference>`
+The DepthAI API allows users to connect to, configure and communicate with their OAK devices.
+We support both the :ref:`Python API <Python API Reference>` and the :ref:`C++ API <C++ API Reference>`.
 
-What is Gen2?
--------------
+.. image:: /_static/images/api_diagram.png
 
-Gen2 is a step forward in DepthAI integration, allowing users to define their own flow of data using pipelines, nodes
-and connections. Gen2 was created based on user's feedback from Gen1 and from raising capabilities of both DepthAI and
-supporting software like OpenVINO.
-
-Basic glossary
---------------
-
-- **Host side** is the device, like PC or RPi, to which the DepthAI is connected to. If something is happening on the host side, it means that this device is involved in it, not DepthAI itself
-
-- **Device side** is the DepthAI itself. If something is happening on the device side, it means that the DepthAI is responsible for it
-
-- **Pipeline** is a complete workflow on the device side, consisting of nodes and connections between them - these cannot exist outside of pipeline.
-
-- **Node** is a single functionality of the DepthAI. It have either inputs or outputs or both, together with properties to be defined (like resolution on the camera node or blob path in neural network node)
-
-- **Connection** is a link between one node's output and another one's input. In order to define the pipeline dataflow, the connections define where to send data in order to achieve an expected result
-
-- **XLink** is a middleware that is capable to exchange data between device and host. XLinkIn node allows to send the data from host to device, XLinkOut does the opposite.
+- **Host side** is a computer, like a PC or RPi, to which an OAK device is connected.
+- **Device side** is the OAK device itself. If something is happening on the device side, it means that it's running on the `Myriad X VPU <https://www.intel.com/content/www/us/en/products/details/processors/movidius-vpu/movidius-myriad-x.html>`__. More :ref:`information here <components_device>`.
+- **Pipeline** is a complete workflow on the device side, consisting of :ref:`nodes <Nodes>` and connections between them. More :ref:`information here <components_device>`.
+- **Node** is a single functionality of DepthAI. :ref:`Nodes` have inputs or outputs, and have configurable properties (like resolution on the camera node).
+- **Connection** is a link between one node's output and another one's input. In order to define the pipeline dataflow, the connections define where to send :ref:`messages <Messages>` in order to achieve an expected result.
+- **XLink** is a middleware that is capable of exchanging data between the device and the host. The :ref:`XLinkIn` node allows sending data from the host to a device, while :ref:`XLinkOut` does the opposite.
+- **Messages** are transferred between nodes, as defined by a connection. More :ref:`information here <components_messages>`.
 
 Getting started
 ---------------
 
-To help you get started with Gen2 API, we have prepared multiple examples of it's usage, with more yet to come, together
-with some insightful tutorials.
-
-Before running the example, install the DepthAI Python library using the command below
-
-.. code-block:: python
-   :substitutions:
-
-      python3 -m pip install -U --force-reinstall depthai
-
+First, you need to :ref:`install the DepthAI <Installation>` library and its dependencies.
 
-Now, pick a tutorial or code sample and start utilizing Gen2 capabilities
+After installation, you can continue with an insightful :ref:`Hello World tutorial <Hello World>`, or with :ref:`code examples <Code Samples>`, where different
+node functionalities are presented with code.
 
 .. toctree::
    :maxdepth: 0
@@ -57,7 +37,6 @@ Now, pick a tutorial or code sample and start utilizing Gen2 capabilities
 
    Home <self>
   install.rst
-   tutorials/overview.rst
 
 .. toctree::
    :maxdepth: 1

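A minimal sketch tying the glossary terms above together (pipeline, node, connection, XLink, message); the ColorCamera preview stream is just an assumed example.

.. code-block:: python

    import depthai as dai

    pipeline = dai.Pipeline()                    # Pipeline: the device-side workflow

    cam = pipeline.create(dai.node.ColorCamera)  # Node: a single piece of functionality
    xout = pipeline.create(dai.node.XLinkOut)    # XLink: XLinkOut streams data back to the host
    xout.setStreamName("preview")

    cam.preview.link(xout.input)                 # Connection: node output -> node input

    with dai.Device(pipeline) as device:         # Device side: the pipeline runs on the VPU
        frame = device.getOutputQueue("preview").get()  # Message (ImgFrame) received on the host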