Device
======

Device is an `OAK camera <https://docs.luxonis.com/projects/hardware/en/latest/>`__ or a RAE robot. On all of our devices there's a powerful Robotics Vision Core
(`RVC <https://docs.luxonis.com/projects/hardware/en/latest/pages/rvc/rvc2.html#rvc2>`__). The RVC is optimized for performing AI inference, CV operations, and
for processing sensory inputs (eg. stereo depth, video encoders, etc.).

Device API
##########

You can specify the device (either with MxID, IP, or USB port name) you want to connect to:

.. code-block:: python

    device_info = depthai.DeviceInfo("...")  # MxID, IP address, or USB port name of the device
    with depthai.Device(pipeline, device_info) as device:
        # ...

Host clock syncing
==================

When the depthai library connects to a device, it automatically syncs the device's timestamp to the host's timestamp. Timestamp syncing happens continuously, at roughly 5 second intervals,
and can be configured via the API (example script below).

.. image:: /_static/images/components/device_timesync.jpg

Device clocks are synced to the host clock at below 2.5 ms accuracy for PoE cameras, and below 1 ms accuracy for USB cameras, at 1σ (standard deviation).

.. image:: /_static/images/components/clock-syncing.png

A graph representing the accuracy of the device clock with respect to the host clock. We had 3 devices connected (OAK PoE cameras), all hardware-synchronized using the `FSYNC Y-adapter <https://docs.luxonis.com/projects/hardware/en/latest/pages/FSYNC_Yadapter/>`__.
A Raspberry Pi (the host) had an interrupt pin connected to the FSYNC line, so at the start of each frame the interrupt fired and the host clock was recorded. We then compared the frame (synced) timestamps with the
host timestamps and computed the standard deviation. For the histogram above we ran this test for about 7 hours.

.. code-block:: python

    # Configure host clock syncing example

    import depthai as dai
    from datetime import timedelta

    pipeline = dai.Pipeline()  # Configure the pipeline (add nodes, link them, etc.)

    with dai.Device(pipeline) as device:
        # 1st value: Interval between timesync runs
        # 2nd value: Number of timesync samples per run which are used to compute a better value
        # 3rd value: If true, partial timesync requests will be performed at random intervals, otherwise at fixed intervals
        device.setTimesync(timedelta(seconds=5), 10, True)  # (These are the default values)

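Once clocks are synced, the message timestamps you read on the host are already translated into the host clock domain. A minimal sketch (assuming :code:`q` is an output queue delivering :code:`ImgFrame` messages, as set up in the Device queues section below) that compares the two timestamps:

.. code-block:: python

    frame = q.get()  # e.g. an ImgFrame read from an output queue
    print("Timestamp, synced to host clock:", frame.getTimestamp())
    print("Timestamp, device clock:", frame.getTimestampDevice())
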
Multiple devices
################

If you want to use multiple devices on a host, check :ref:`Multiple DepthAI per Host`.
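
As a starting point, a small sketch (assuming depthai 2.x) that lists all devices the library can currently discover (USB and PoE):

.. code-block:: python

    import depthai as dai

    # Print every device that is currently visible to the host and not yet booted by another process
    for device_info in dai.Device.getAllAvailableDevices():
        print(f"Found device: {device_info.getMxId()}, state: {device_info.state}")
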

Device queues
#############

After initializing the device, you can create input/output queues that match :ref:`XLinkIn`/:ref:`XLinkOut` nodes in the pipeline. These queues will be located on the host computer (in RAM).

.. code-block:: python

    import depthai as dai

    pipeline = dai.Pipeline()

    xout = pipeline.createXLinkOut()
    xout.setStreamName("output_name")
    # ...
    xin = pipeline.createXLinkIn()
    xin.setStreamName("input_name")
    # ...
    with dai.Device(pipeline) as device:

        outputQueue = device.getOutputQueue("output_name", maxSize=5, blocking=False)
        inputQueue = device.getInputQueue("input_name")

        outputQueue.get()  # Read from the queue; blocks until a message arrives
        outputQueue.tryGet()  # Read from the queue; returns None if there's no msg (doesn't block)
        if outputQueue.has():  # Check if there are any messages in the queue
            msg = outputQueue.get()

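The input queue works in the opposite direction: the host sends messages that the :ref:`XLinkIn` node forwards into the pipeline. A minimal sketch, reusing the :code:`inputQueue` from the snippet above (frame size and type are arbitrary here):

.. code-block:: python

    import numpy as np

    # Create an ImgFrame on the host and push it to the device through the XLinkIn node
    frame = dai.ImgFrame()
    frame.setType(dai.ImgFrame.Type.BGR888p)
    frame.setWidth(300)
    frame.setHeight(300)
    frame.setData(np.zeros(300 * 300 * 3, dtype=np.uint8))
    inputQueue.send(frame)
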

When you define an output queue, the device can push new messages to it at any time, and the host can read from it at any time.

Output queue - `maxSize` and `blocking`
#######################################

When the host is reading very fast from the queue (inside a `while True` loop), the queue, regardless of its size, will stay empty most of
the time. But as we add work on the host side (additional processing, analysis, etc.), the device may start pushing messages to
the queue faster than the host can read them. The messages in the queue will then start to pile up, and both the `maxSize` and `blocking`
flags determine the behavior of the queue in this case. Two common configurations are:

.. code-block:: python

    with dai.Device(pipeline) as device:
        # If you want only the latest message and don't care about previous ones;
        # when a new msg arrives to the host, it will overwrite the previous (oldest) one if it's still in the queue
        q1 = device.getOutputQueue(name="name1", maxSize=1, blocking=False)

        # If you care about every single message (eg. H264/5 encoded video; if you miss a frame, you will get artifacts);
        # if the queue is full, the device will wait until the host reads a message from the queue
        q2 = device.getOutputQueue(name="name2", maxSize=30, blocking=True)  # Also the default values (maxSize=30 / blocking=True)

We used `maxSize=30` just as an example; it can be any `int16` number. Since device queues live on the host computer, memory (RAM) usually isn't that scarce, so `maxSize` doesn't matter that much.
But if you are using a small SBC like the RPi Zero (512 MB RAM) and are streaming large frames (eg. 4K unencoded), you could quickly run out of memory if you set `maxSize` to a high
value (and don't read from the queue fast enough).

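Both parameters can also be changed after the queue has been created. A short sketch (the stream name is illustrative):

.. code-block:: python

    queue = device.getOutputQueue(name="output_name")  # created with the defaults (maxSize=30, blocking=True)

    # Reconfigure the queue afterwards
    queue.setMaxSize(10)
    queue.setBlocking(False)
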
Some additional information
---------------------------

- Queues are thread-safe - they can be accessed from any thread (see the sketch below).
- Each queue runs its own thread, which takes care of receiving, serializing/deserializing, and sending the messages forward (the same holds for input and output queues).
- The :code:`Device` object isn't fully thread-safe. Some RPC calls (eg. :code:`getLogLevel`, :code:`setLogLevel`, :code:`getDdrMemoryUsage`) will become thread-safe once a mutex is put in place (right now there could be races).

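As an illustration of the first point, a minimal sketch (queue/stream names and the pipeline contents are assumptions) that consumes an output queue from a worker thread while the main thread does other work:

.. code-block:: python

    import threading
    import depthai as dai

    def consumer(queue):
        while True:
            msg = queue.get()  # Blocks until a message arrives
            print("Got message:", msg)

    # Assumes 'pipeline' contains an XLinkOut node with stream name "output_name"
    with dai.Device(pipeline) as device:
        q = device.getOutputQueue("output_name", maxSize=4, blocking=False)
        t = threading.Thread(target=consumer, args=(q,), daemon=True)
        t.start()
        # ... the main thread is free to do other work here
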
Watchdog
########

The watchdog is a crucial component in the operation of POE (Power over Ethernet) devices with DepthAI. When DepthAI disconnects from a POE device, the watchdog mechanism is the first to respond,
initiating a reset of the camera. This reset is followed by a complete system reboot, which includes the loading of the DepthAI bootloader and the initialization of the entire networking stack.

The watchdog process is necessary to make the camera available for reconnection and **typically takes about 10 seconds**, which means the fastest possible reconnection time is 10 seconds.

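Because of this ~10 second window, host applications that need to survive PoE disconnects are usually written around a reconnection loop. A minimal sketch (the retry interval and error handling are our assumptions, not part of the DepthAI API):

.. code-block:: python

    import time
    import depthai as dai

    def connect_with_retry(pipeline, retry_interval_s=5):
        # Keep trying until the camera has finished rebooting and is discoverable again
        while True:
            try:
                return dai.Device(pipeline)
            except RuntimeError as e:
                print(f"Device not available yet ({e}), retrying in {retry_interval_s}s")
                time.sleep(retry_interval_s)
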
Customizing the Watchdog Timeout
--------------------------------

The watchdog and boot-up timeouts can be overridden through environment variables when launching your script, for example:

.. code-block:: bash

    set DEPTHAI_BOOTUP_TIMEOUT=<my_value>
    python3 script.py


Alternatively, you can set the timeout directly in your code:

.. code-block:: python
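
    # A sketch only: it assumes depthai 2.x exposes the watchdog settings through
    # dai.BoardConfig (watchdogTimeoutMs / watchdogInitialDelayMs); check the API reference for your version.
    import depthai as dai

    pipeline = dai.Pipeline()

    config = dai.BoardConfig()
    config.watchdogTimeoutMs = 40000       # Watchdog timeout, in milliseconds
    config.watchdogInitialDelayMs = 8000   # Delay before the watchdog starts counting
    pipeline.setBoardConfig(config)
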
The following table lists various environment variables used in the system, along with their descriptions:

.. list-table::
   :header-rows: 1

   * - Environment variable
     - Description
   * - `DEPTHAI_BOOTLOADER_BINARY_ETH`
     - Overrides device Network Bootloader binary. Mostly for internal debugging purposes.

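Most of these variables are read by the library at import time or when the device is created, so the simplest approach is to set them in the shell before launching your script. From Python, a common pattern (the variable name below is only an illustration) is to set them at the very top of the script, before :code:`depthai` is imported:

.. code-block:: python

    import os

    # Must happen before 'import depthai' so the library picks the value up
    os.environ["DEPTHAI_LEVEL"] = "debug"

    import depthai as dai
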
Reference
#########