
Commit 535364d

Collision avoidance example (#1020)

* Adding collision avoidance example
* Updating debugging docs

1 parent 67ff436 commit 535364d

File tree

6 files changed: +196 −2 lines changed

docs/source/_static/images/examples/collision_avoidance.gif (binary file, 9.84 MB)

docs/source/samples/SpatialDetection/spatial_location_calculator.rst

Lines changed: 1 addition & 0 deletions
@@ -12,6 +12,7 @@ You can also calculate spatial coordinates on host side, `demo here <https://git
  - :ref:`RGB & MobilenetSSD with spatial data`
  - :ref:`Mono & MobilenetSSD with spatial data`
  - :ref:`RGB & TinyYolo with spatial data`
+ - :ref:`Collision avoidance`

  Demo
  ####
Lines changed: 43 additions & 0 deletions
@@ -0,0 +1,43 @@
Collision avoidance
===================

This example demonstrates how to use DepthAI to implement a collision avoidance system with an OAK-D camera. The script measures the distance of objects from the camera in real time, displaying warnings based on predefined distance thresholds.

The script uses the stereo cameras to calculate the distance of objects from the camera. The depth map is then aligned to the center (color) camera in order to overlay the distance information on the color frame.

The user-defined constants ``WARNING`` and ``CRITICAL`` define the distance thresholds for the orange and red alerts, respectively.
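In the example script (shown below under Source code) both thresholds are plain millimeter values:

.. code-block:: python

   WARNING = 500   # objects closer than 50 cm are highlighted orange
   CRITICAL = 300  # objects closer than 30 cm are highlighted red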
Similar examples
################

- :ref:`Spatial Location Calculator`
- :ref:`RGB Depth Alignment`

Demo
####

.. image:: ../../_static/images/examples/collision_avoidance.gif
   :width: 100%
   :alt: Collision Avoidance

Setup
#####

.. include:: /includes/install_from_pypi.rst

.. include:: /includes/install_req.rst

Source code
###########

.. tabs::

   .. tab:: Python

      Also `available on GitHub <https://github.com/luxonis/depthai-python/blob/main/examples/mixed/collision_avoidance.py>`__

      .. literalinclude:: ../../../../examples/mixed/collision_avoidance.py
         :language: python
         :linenos:

.. include:: /includes/footer-short.rst

docs/source/tutorials/code_samples.rst

Lines changed: 1 addition & 0 deletions
@@ -103,6 +103,7 @@ are presented with code.
  - :ref:`RGB Encoding & Mono & MobilenetSSD` - Runs MobileNetSSD on mono frames and displays detections on the frame + encodes RGB to :code:`.h265`
  - :ref:`RGB Encoding & Mono with MobilenetSSD & Depth` - A combination of **RGB Encoding** and **Mono & MobilenetSSD & Depth** code samples
  - :ref:`Spatial detections on rotated OAK` - Spatial detections on an upside-down OAK camera
+ - :ref:`Collision avoidance` - Collision avoidance system using depth and RGB

  .. rubric:: MobileNet
docs/source/tutorials/debugging.rst

Lines changed: 37 additions & 2 deletions
@@ -29,7 +29,10 @@ Level Logging
  :code:`trace` Trace will print out a :ref:`Message <components_messages>` whenever one is received from the device.
  ================ =======

- Debugging can be enabled either **in code**:
+ Debugging can be enabled either:
+
+ In code
+ *******

  .. code-block:: python
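The Python block itself is collapsed in this diff. Based on the :code:`setLogLevel` / :code:`setLogOutputLevel` calls described just below, a minimal sketch of in-code configuration looks like this:

.. code-block:: python

   import depthai as dai

   with dai.Device() as device:
       # Filters which log messages the device sends to the host
       device.setLogLevel(dai.LogLevel.DEBUG)
       # Filters which of the received messages get printed to stdout on the host
       device.setLogOutputLevel(dai.LogLevel.DEBUG)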
@@ -42,7 +45,35 @@ Where :code:`setLogLevel` sets verbosity which filters messages that get sent fr
  verbosity which filters messages that get printed on the host (stdout). This difference allows to capture the log messages internally and
  not print them to stdout, and use those to eg. display them somewhere else or analyze them.

- You can also enable debugging using an **environmental variable DEPTHAI_LEVEL**:
+
+ Using an environment variable `DEPTHAI_LEVEL`
+ **********************************************
+
+ Using an environment variable to set the debugging level, rather than configuring it directly in code, provides additional detailed information.
+ This includes metrics such as CMX and SHAVE usage, and the time taken by each node in the pipeline to process a single frame.
+
+ Example of a log message for :ref:`RGB Preview` in **INFO** mode:
+
+ .. code-block:: bash
+
+    [184430102189660F00] [2.1] [0.675] [system] [info] SIPP (Signal Image Processing Pipeline) internal buffer size '18432'B, DMA buffer size: '16384'B
+    [184430102189660F00] [2.1] [0.711] [system] [info] ImageManip internal buffer size '285440'B, shave buffer size '34816'B
+    [184430102189660F00] [2.1] [0.711] [system] [info] ColorCamera allocated resources: no shaves; cmx slices: [13-15]
+    ImageManip allocated resources: shaves: [15-15] no cmx slices.
+
+ Example of a log message for :ref:`Depth Preview` in **TRACE** mode:
+
+ .. code-block:: bash
+
+    [19443010513F4D1300] [0.1.2] [2.014] [MonoCamera(0)] [trace] Mono ISP took '0.866377' ms.
+    [19443010513F4D1300] [0.1.2] [2.016] [MonoCamera(1)] [trace] Mono ISP took '1.272838' ms.
+    [19443010513F4D1300] [0.1.2] [2.019] [StereoDepth(2)] [trace] Stereo rectification took '2.661958' ms.
+    [19443010513F4D1300] [0.1.2] [2.027] [StereoDepth(2)] [trace] Stereo took '7.144515' ms.
+    [19443010513F4D1300] [0.1.2] [2.028] [StereoDepth(2)] [trace] 'Median' pipeline took '0.772257' ms.
+    [19443010513F4D1300] [0.1.2] [2.028] [StereoDepth(2)] [trace] Stereo post processing (total) took '0.810216' ms.
+    [2024-05-16 14:27:51.294] [depthai] [trace] Received message from device (disparity) - parsing time: 11µs, data size: 256000

  .. tabs::
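A typical invocation on Linux/macOS sets the variable inline for a single run (the script name here is only illustrative; the tabs below cover each platform's exact syntax):

.. code-block:: bash

   DEPTHAI_LEVEL=trace python3 depth_preview.py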

@@ -107,6 +138,10 @@ Code above will print the following values to the user:
  Resource Debugging
  ==================

+ .. warning::
+
+    Resource debugging is only available when setting the debug level using the environment variable `DEPTHAI_LEVEL`. It's **not** available when setting the debug level in code.
+
  By enabling ``info`` log level (or lower), depthai will print usage of `hardware resources <https://docs.luxonis.com/projects/hardware/en/latest/pages/rvc/rvc2.html#hardware-blocks-and-accelerators>`__,
  specifically SHAVE core and CMX memory usage:
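The allocation lines in the **INFO** example earlier are exactly this resource report, e.g.:

.. code-block::

   [184430102189660F00] [2.1] [0.711] [system] [info] ColorCamera allocated resources: no shaves; cmx slices: [13-15]
   ImageManip allocated resources: shaves: [15-15] no cmx slices.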
examples/mixed/collision_avoidance.py (new file)

Lines changed: 114 additions & 0 deletions
@@ -0,0 +1,114 @@
import depthai as dai
import cv2
import numpy as np
import math

# User-defined constants (millimeters)
WARNING = 500   # 50cm, orange
CRITICAL = 300  # 30cm, red

# Create pipeline
pipeline = dai.Pipeline()

# Color camera
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setPreviewSize(300, 300)
camRgb.setInterleaved(False)

# Define source - stereo depth cameras
left = pipeline.create(dai.node.MonoCamera)
left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_720_P)
left.setBoardSocket(dai.CameraBoardSocket.LEFT)

right = pipeline.create(dai.node.MonoCamera)
right.setResolution(dai.MonoCameraProperties.SensorResolution.THE_720_P)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

# Create stereo depth node
stereo = pipeline.create(dai.node.StereoDepth)
stereo.setConfidenceThreshold(50)
stereo.setLeftRightCheck(True)
stereo.setExtendedDisparity(True)

# Linking
left.out.link(stereo.left)
right.out.link(stereo.right)

# Spatial location calculator configuration: a 15x9 grid of ROIs tiling the
# frame (each cell is 1/16 of the width and 1/10 of the height, offset half
# a cell from the border so the grid is centered)
slc = pipeline.create(dai.node.SpatialLocationCalculator)
for x in range(15):
    for y in range(9):
        config = dai.SpatialLocationCalculatorConfigData()
        config.depthThresholds.lowerThreshold = 200
        config.depthThresholds.upperThreshold = 10000
        config.roi = dai.Rect(dai.Point2f((x+0.5)*0.0625, (y+0.5)*0.1), dai.Point2f((x+1.5)*0.0625, (y+1.5)*0.1))
        config.calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MEDIAN
        slc.initialConfig.addROI(config)

stereo.depth.link(slc.inputDepth)
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)

# Create outputs
slcOut = pipeline.create(dai.node.XLinkOut)
slcOut.setStreamName('slc')
slc.out.link(slcOut.input)

colorOut = pipeline.create(dai.node.XLinkOut)
colorOut.setStreamName('color')
camRgb.video.link(colorOut.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as device:
    # Output queues will be used to get the color frames and spatial location data
    qColor = device.getOutputQueue(name="color", maxSize=4, blocking=False)
    qSlc = device.getOutputQueue(name="slc", maxSize=4, blocking=False)

    fontType = cv2.FONT_HERSHEY_TRIPLEX

    while True:
        inColor = qColor.get()  # Try to get a frame from the color camera
        inSlc = qSlc.get()      # Try to get spatial location data

        if inColor is None:
            print("No color camera data")
        if inSlc is None:
            print("No spatial location data")

        colorFrame = None
        if inColor is not None:
            colorFrame = inColor.getCvFrame()  # Fetch the frame from the color camera

        if inSlc is not None and colorFrame is not None:
            slc_data = inSlc.getSpatialLocations()
            for depthData in slc_data:
                roi = depthData.config.roi
                roi = roi.denormalize(width=colorFrame.shape[1], height=colorFrame.shape[0])

                xmin = int(roi.topLeft().x)
                ymin = int(roi.topLeft().y)
                xmax = int(roi.bottomRight().x)
                ymax = int(roi.bottomRight().y)

                # Euclidean distance of the ROI's spatial coordinates from the camera
                coords = depthData.spatialCoordinates
                distance = math.sqrt(coords.x ** 2 + coords.y ** 2 + coords.z ** 2)

                if distance == 0:  # Invalid measurement
                    continue

                # Determine color based on distance
                if distance < CRITICAL:
                    color = (0, 0, 255)    # Red
                elif distance < WARNING:
                    color = (0, 140, 255)  # Orange
                else:
                    continue  # Skip drawing for non-critical/non-warning distances

                # Draw rectangle and distance text on the color frame
                cv2.rectangle(colorFrame, (xmin, ymin), (xmax, ymax), color, thickness=2)
                cv2.putText(colorFrame, "{:.1f}m".format(distance / 1000), (xmin + 10, ymin + 20), fontType, 0.5, color)

        # Display the color frame
        cv2.imshow('Collision Avoidance', colorFrame)
        if cv2.waitKey(1) == ord('q'):
            break
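To try the example, attach an OAK device with a stereo camera pair, install the requirements from the Setup section above, and run (file name taken from the GitHub link in the docs):

    python3 collision_avoidance.py

Press q in the preview window to exit.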
