
Commit fa01e36

Merge pull request #1012 from luxonis/develop
DepthAI SDK 1.10.0

2 parents: 3c51927 + 153de22


68 files changed: +2351 −598 lines

depthai_sdk/MANIFEST.in

Lines changed: 1 addition & 0 deletions
@@ -1,6 +1,7 @@
 include src/depthai_sdk/classes/*.py
 recursive-include src/depthai_sdk/components *.py
 recursive-include src/depthai_sdk/integrations *.py
+recursive-include src/depthai_sdk/trigger_action *.py
 include src/depthai_sdk/managers/*.py

 recursive-include src/depthai_sdk/nn_models handler.py
Lines changed: 75 additions & 0 deletions (new file)

Conditional actions
===================

The DepthAI SDK provides a way to perform actions when certain conditions are met.
For example, you can perform an action when a certain number of objects is detected in the frame.
This functionality is provided by the Trigger-Action API.

Overview
--------
The Trigger-Action API lets you define a set of conditions and the actions that should be performed when those conditions are met.
The DepthAI SDK provides a set of predefined triggers and actions, but you can also define your own.

Basic concepts:

- **Trigger** - a condition that must be met for an action to be performed.
- **Action** - an action that is performed when its trigger fires.

.. note:: The Trigger-Action API is implemented in the :mod:`depthai_sdk.trigger_action` module.

Triggers
--------

The base class for all triggers is :class:`Trigger <depthai_sdk.trigger_action.Trigger>`.
To create a trigger, instantiate the :class:`Trigger <depthai_sdk.trigger_action.Trigger>` class with the following parameters:

- ``input`` - the component that serves as the trigger source.
- ``condition`` - a function that returns ``True`` or ``False`` based on the trigger source's output.
- ``cooldown`` - how often the trigger can be activated (in seconds).

Predefined triggers:

- :class:`DetectionTrigger <depthai_sdk.trigger_action.DetectionTrigger>` - activated when a certain number of objects is detected in the frame.
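A custom condition is just a callable that receives packets from the trigger source and returns a boolean. The sketch below illustrates the idea; the packet attribute path ``img_detections.detections`` and the ``Trigger(...)`` call shown in the comment are assumptions based on the parameter list above, not verified SDK signatures:

```python
from types import SimpleNamespace

# A custom condition inspects a packet and returns a bool.
# The attribute path below (img_detections.detections) is an assumption
# for illustration; check the packet type your trigger source actually emits.
def at_least_two_detections(packet) -> bool:
    return len(packet.img_detections.detections) >= 2

# Stand-in packet for a quick sanity check (no camera needed):
fake = SimpleNamespace(img_detections=SimpleNamespace(detections=[object(), object()]))
print(at_least_two_detections(fake))  # True

# In a real pipeline this would be wired up roughly as (hypothetical names):
# trigger = Trigger(input=nn.out.main, condition=at_least_two_detections, cooldown=30)
```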
Actions
-------

An action can be either a function or a class derived from the :class:`Action <depthai_sdk.trigger_action.Action>` class.
A custom action should implement the :meth:`activate() <depthai_sdk.trigger_action.Action.activate>` method and, optionally, the :meth:`on_new_packets() <depthai_sdk.trigger_action.Action.on_new_packets>` method.

Predefined actions:

- :class:`RecordAction <depthai_sdk.trigger_action.RecordAction>` - records a video of a given duration when its trigger is activated.
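A custom action can be sketched as follows. This uses a stand-in base class so the example runs without a device; in real code you would subclass ``depthai_sdk.trigger_action.Action`` instead:

```python
# Stand-in for depthai_sdk.trigger_action.Action, so this sketch runs anywhere.
class Action:
    def activate(self):
        raise NotImplementedError

class LogAction(Action):
    """Records a message each time its trigger fires."""
    def __init__(self):
        self.events = []

    def activate(self):
        # Called once per trigger activation.
        self.events.append('trigger fired')

    def on_new_packets(self, packets):
        # Optional hook: inspect every packet batch, e.g. to buffer frames.
        pass

action = LogAction()
action.activate()
print(action.events)  # ['trigger fired']
```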
Usage
-----

The following example shows how to create a trigger that activates when at least one person is detected in the frame.
When the trigger activates, it records a 15-second video (5 seconds before the activation and 10 seconds after).

.. literalinclude:: ../../../examples/trigger_action/person_record.py
   :language: python
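The cooldown behaviour described above can be illustrated with a small stand-alone sketch. This is a plain-Python re-implementation of the idea, not the SDK's actual code:

```python
import time

# Illustrative cooldown gate: the condition may be True on every frame,
# but the action fires at most once per `cooldown` seconds.
class CooldownGate:
    def __init__(self, cooldown: float):
        self.cooldown = cooldown
        self._last_fired = float('-inf')

    def try_fire(self, condition_met: bool, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        if condition_met and now - self._last_fired >= self.cooldown:
            self._last_fired = now
            return True
        return False

gate = CooldownGate(cooldown=30)
print(gate.try_fire(True, now=0.0))   # True  (first activation)
print(gate.try_fire(True, now=10.0))  # False (still cooling down)
print(gate.try_fire(True, now=31.0))  # True  (cooldown elapsed)
```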
Reference
---------

.. autoclass:: depthai_sdk.trigger_action.TriggerAction
   :members:
   :undoc-members:

.. autoclass:: depthai_sdk.trigger_action.Trigger
   :members:
   :undoc-members:

.. autoclass:: depthai_sdk.trigger_action.Action
   :members:
   :undoc-members:

.. autoclass:: depthai_sdk.trigger_action.DetectionTrigger
   :members:
   :undoc-members:

.. autoclass:: depthai_sdk.trigger_action.RecordAction
   :members:
   :undoc-members:
Lines changed: 2 additions & 25 deletions
@@ -1,28 +1,5 @@
 .. raw:: html

-   <h2>Got questions?</h2>
+   <h2>Got questions?</h2>

-We're always happy to help with development or other questions you might have.
-
-.. raw:: html
-
-   <div class="cta-row cta-row-short">
-      <div class="cta-box">
-         <a href="https://luxonis.com/discord">
-            <img src="/_images/discord.png" alt="Discord"/>
-            <h5 class="cta-title">Community Discord</h5>
-         </a>
-      </div>
-      <div class="cta-box">
-         <a href="https://discuss.luxonis.com/">
-            <img src="/_images/forum.png" alt="forum"/>
-            <h5 class="cta-title">Discussion Forum</h5>
-         </a>
-      </div>
-      <div class="cta-box">
-         <a href="mailto:support@luxonis.com">
-            <img src="/_images/email.png" alt="forum"/>
-            <h5 class="cta-title">Email</h5>
-         </a>
-      </div>
-   </div>
+Head over to <a href="https://discuss.luxonis.com/"><strong>Discussion Forum</strong></a> for technical support or any other questions you might have.
Lines changed: 21 additions & 0 deletions (new file)

from depthai_sdk import OakCamera

with OakCamera() as oak:
    color = oak.create_camera('color')
    oak.visualize(color, fps=True, scale=2/3)
    oak.start()

    while oak.running():
        key = oak.poll()
        if key == ord('i'):
            color.control.exposure_time_down()
        elif key == ord('o'):
            color.control.exposure_time_up()
        elif key == ord('k'):
            color.control.sensitivity_down()
        elif key == ord('l'):
            color.control.sensitivity_up()

        elif key == ord('e'):  # Switch to auto exposure
            color.control.send_controls({'exposure': {'auto': True}})

depthai_sdk/examples/NNComponent/object_tracking.py

Lines changed: 1 addition & 0 deletions
@@ -7,6 +7,7 @@
     # https://docs.luxonis.com/projects/sdk/en/latest/features/ai_models/#sdk-supported-models
     nn = oak.create_nn('yolov6nr3_coco_640x352', color, tracker=True)

+    nn.config_nn(resize_mode='stretch')
     nn.config_tracker(
         tracker_type=dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM,
         track_labels=[0],  # Track only 1st object from the object map. If unspecified, track all object types
Lines changed: 27 additions & 0 deletions (new file)

from depthai_sdk import OakCamera

with OakCamera() as oak:
    left = oak.create_camera('left')
    right = oak.create_camera('right')
    stereo = oak.create_stereo(left=left, right=right)
    stereo.config_stereo(lr_check=True)

    oak.visualize([right, stereo.out.disparity], fps=True)
    oak.start()

    while oak.running():
        key = oak.poll()

        if key == ord('i'):
            stereo.control.confidence_threshold_down()
        if key == ord('o'):
            stereo.control.confidence_threshold_up()
        if key == ord('k'):
            stereo.control.switch_median_filter()

        if key == ord('1'):
            stereo.control.send_controls({'postprocessing': {'decimation': {'factor': 1}}})
        if key == ord('2'):
            stereo.control.send_controls({'postprocessing': {'decimation': {'factor': 2}}})
        if key == ord('3'):
            stereo.control.send_controls({'postprocessing': {'decimation': {'factor': 3}}})

depthai_sdk/examples/mixed/car_tracking.py

Lines changed: 2 additions & 1 deletion
@@ -4,9 +4,10 @@
 with OakCamera(replay='cars-tracking-above-01') as oak:
     # Create color camera, add video encoder
     color = oak.create_camera('color')
+
     # Download & run pretrained vehicle detection model and track detections
     nn = oak.create_nn('vehicle-detection-0202', color, tracker=True)
-    nn.config_nn(resize_mode=ResizeMode.STRETCH)
+
     # Visualize tracklets, show FPS
     visualizer = oak.visualize(nn.out.tracker, fps=True, record_path='./car_tracking.avi')
     visualizer.tracking(line_thickness=5).text(auto_scale=True)

depthai_sdk/examples/mixed/speed_calculation.py

Lines changed: 0 additions & 1 deletion
@@ -17,7 +17,6 @@ def callback(packet):
     stereo.config_stereo(subpixel=False, lr_check=True)

     nn = oak.create_nn('face-detection-retail-0004', color, spatial=stereo, tracker=True)
-    nn.config_nn(resize_mode='stretch')

     visualizer = oak.visualize(nn.out.tracker, callback=callback, fps=True)
     visualizer.tracking(speed=True).text(auto_scale=True)

depthai_sdk/examples/mixed/switch_between_models.py

Lines changed: 2 additions & 1 deletion
@@ -26,7 +26,7 @@ def cb(packet: DetectionPacket):
             if len(outputs) <= i:
                 i = 0

-           node.io[outputs[i]].send(frame)
+            node.io[outputs[i]].send(frame)
     """)
     color.stream.link(script.inputs['frames'])

@@ -39,6 +39,7 @@ def cb(packet: DetectionPacket):
     xin.out.link(script.inputs['switch'])

     oak.visualize([nn1, nn2], fps=True, callback=cb)
+    oak.visualize([nn1.out.passthrough, nn2.out.passthrough], fps=True)

     # oak.show_graph()
depthai_sdk/examples/recording/stereo_record.py

Lines changed: 1 addition & 1 deletion
@@ -21,6 +21,6 @@
     # oak.record([color.out.main, stereo.out.disparity], 'records')

     # Record depth only
-    oak.visualize(stereo.out.disparity, record_path='disparity.avi')
+    oak.visualize(stereo.out.disparity, record_path='disparity.mp4')

    oak.start(blocking=True)
