Commit bee20fc

Merge pull request #335 from luxonis/imagemanip-rotating-mono
ImageManip tiling and rotating example
2 parents c90cebb + c94f5e3 commit bee20fc

File tree

7 files changed: +202 −0 lines changed


docs/source/components/nodes/image_manip.rst

Lines changed: 2 additions & 0 deletions

@@ -87,6 +87,8 @@ Examples of functionality

  - :ref:`Mono & MobilenetSSD`
  - :ref:`RGB Encoding & Mono & MobilenetSSD`
  - :ref:`RGB Camera Control`
+ - :ref:`ImageManip tiling` - Using ImageManip for frame tiling
+ - :ref:`ImageManip rotate` - Using ImageManip to rotate color/mono frames

Reference
#########
docs/source/samples/image_manip_rotate.rst

Lines changed: 44 additions & 0 deletions

@@ -0,0 +1,44 @@
ImageManip Rotate
=================

This example showcases how to rotate color and mono frames with the help of the :ref:`ImageManip` node. In this example, we rotate by 90°.

.. note::
   Due to a HW warp constraint, the input image (to be rotated) must have a **width that is a multiple of 16**.

Demos
#####

.. image:: https://user-images.githubusercontent.com/18037362/128074634-d2baa78e-8f35-40fc-8661-321f3a3c3850.png
  :alt: Rotated mono and color frames

Here I have the DepthAI device positioned vertically on my desk.

Setup
#####

.. include:: /includes/install_from_pypi.rst

Source code
###########

.. tabs::

    .. tab:: Python

        Also `available on GitHub <https://github.com/luxonis/depthai-python/blob/main/examples/image_manip_rotate.py>`__

        .. literalinclude:: ../../../examples/image_manip_rotate.py
           :language: python
           :linenos:

    .. tab:: C++

        Also `available on GitHub <https://github.com/luxonis/depthai-core/blob/main/examples/src/image_manip_rotate.cpp>`__

        .. literalinclude:: ../../../depthai-core/examples/src/image_manip_rotate.cpp
           :language: cpp
           :linenos:

.. include:: /includes/footer-short.rst
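The warp constraint noted above (input width must be a multiple of 16) can be checked, or satisfied, on the host before the pipeline is built. A minimal sketch; the helper name is my own and not part of the DepthAI API:

```python
def align_width(width: int, multiple: int = 16) -> int:
    """Round a frame width up to the nearest multiple (HW warp constraint)."""
    return ((width + multiple - 1) // multiple) * multiple

# 640 is already aligned; 634 would have to be padded up to 640
print(align_width(640))   # -> 640
print(align_width(634))   # -> 640
```

A width that fails the check (e.g. 1000, which aligns up to 1008) would need to be cropped or padded before being fed to the rotating ImageManip.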
docs/source/samples/image_manip_tiling.rst

Lines changed: 41 additions & 0 deletions

@@ -0,0 +1,41 @@
ImageManip Tiling
=================

Frame tiling can be useful when, for example, feeding a large frame into a :ref:`NeuralNetwork` whose input size is smaller. In such a case,
you can tile the large frame into multiple smaller ones and feed each smaller frame to the :ref:`NeuralNetwork`.

In this example we use two :ref:`ImageManip` nodes to split the original :code:`1000x500` preview frame into two :code:`500x500` frames.

Demo
####

.. image:: https://user-images.githubusercontent.com/18037362/128074673-045ed4b6-ac8c-4a76-83bb-0f3dc996f7a5.png
  :alt: Tiling preview into 2 frames/tiles

Setup
#####

.. include:: /includes/install_from_pypi.rst

Source code
###########

.. tabs::

    .. tab:: Python

        Also `available on GitHub <https://github.com/luxonis/depthai-python/blob/main/examples/image_manip_tiling.py>`__

        .. literalinclude:: ../../../examples/image_manip_tiling.py
           :language: python
           :linenos:

    .. tab:: C++

        Also `available on GitHub <https://github.com/luxonis/depthai-core/blob/main/examples/src/image_manip_tiling.cpp>`__

        .. literalinclude:: ../../../depthai-core/examples/src/image_manip_tiling.cpp
           :language: cpp
           :linenos:

.. include:: /includes/footer-short.rst
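The split into two 500x500 tiles is configured with crop rectangles given in normalized (0..1) coordinates. The same per-tile rectangles can be computed for any number of side-by-side tiles; a hypothetical helper (the function name is my own, not part of the DepthAI API):

```python
def horizontal_tiles(n: int):
    """Normalized (xmin, ymin, xmax, ymax) crop rects for n side-by-side tiles."""
    return [(i / n, 0.0, (i + 1) / n, 1.0) for i in range(n)]

# Two tiles: the left half and the right half of the frame
print(horizontal_tiles(2))  # -> [(0.0, 0.0, 0.5, 1.0), (0.5, 0.0, 1.0, 1.0)]
```

Each tuple could then be passed to one ImageManip's crop configuration, one node per tile.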

docs/source/tutorials/code_samples.rst

Lines changed: 2 additions & 0 deletions

@@ -32,6 +32,8 @@ Code samples are used for automated testing. They are also a great starting poin

  - :ref:`Edge detector` - Edge detection on input frame
  - :ref:`Script camera control` - Controlling the camera with the Script node
  - :ref:`Bootloader version` - Retrieves Version of Bootloader on the device
+ - :ref:`ImageManip tiling` - Using ImageManip for frame tiling
+ - :ref:`ImageManip rotate` - Using ImageManip to rotate color/mono frames

.. rubric:: Complex

docs/source/tutorials/simple_samples.rst

Lines changed: 5 additions & 0 deletions

@@ -24,6 +24,8 @@ Simple

     ../samples/edge_detector.rst
     ../samples/script_camera_control.rst
     ../samples/bootloader_version.rst
+    ../samples/image_manip_tiling.rst
+    ../samples/image_manip_rotate.rst

 These samples are great starting point for the gen2 API.

@@ -41,4 +43,7 @@ These samples are great starting point for the gen2 API.

  - :ref:`Mono & MobilenetSSD` - Runs MobileNetSSD on mono frames and displays detections on the frame
  - :ref:`Video & MobilenetSSD` - Runs MobileNetSSD on the video from the host
  - :ref:`Edge detector` - Edge detection on input frame
+ - :ref:`Script camera control` - Controlling the camera with the Script node
  - :ref:`Bootloader Version` - Retrieves Version of Bootloader on the device
+ - :ref:`ImageManip Tiling` - Using ImageManip for frame tiling
+ - :ref:`ImageManip Rotate` - Using ImageManip to rotate color/mono frames

examples/image_manip_rotate.py

Lines changed: 58 additions & 0 deletions

@@ -0,0 +1,58 @@
#!/usr/bin/env python3

import cv2
import depthai as dai

# Create pipeline
pipeline = dai.Pipeline()

# Rotate color frames
camRgb = pipeline.createColorCamera()
camRgb.setPreviewSize(640, 400)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setInterleaved(False)

manipRgb = pipeline.createImageManip()
rgbRr = dai.RotatedRect()
rgbRr.center.x, rgbRr.center.y = camRgb.getPreviewWidth() // 2, camRgb.getPreviewHeight() // 2
rgbRr.size.width, rgbRr.size.height = camRgb.getPreviewHeight(), camRgb.getPreviewWidth()
rgbRr.angle = 90
manipRgb.initialConfig.setCropRotatedRect(rgbRr, False)
camRgb.preview.link(manipRgb.inputImage)

manipRgbOut = pipeline.createXLinkOut()
manipRgbOut.setStreamName("manip_rgb")
manipRgb.out.link(manipRgbOut.input)

# Rotate mono frames
monoLeft = pipeline.createMonoCamera()
monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)

manipLeft = pipeline.createImageManip()
rr = dai.RotatedRect()
rr.center.x, rr.center.y = monoLeft.getResolutionWidth() // 2, monoLeft.getResolutionHeight() // 2
rr.size.width, rr.size.height = monoLeft.getResolutionHeight(), monoLeft.getResolutionWidth()
rr.angle = 90
manipLeft.initialConfig.setCropRotatedRect(rr, False)
monoLeft.out.link(manipLeft.inputImage)

manipLeftOut = pipeline.createXLinkOut()
manipLeftOut.setStreamName("manip_left")
manipLeft.out.link(manipLeftOut.input)

with dai.Device(pipeline) as device:
    qLeft = device.getOutputQueue(name="manip_left", maxSize=8, blocking=False)
    qRgb = device.getOutputQueue(name="manip_rgb", maxSize=8, blocking=False)

    while True:
        inLeft = qLeft.tryGet()
        if inLeft is not None:
            cv2.imshow('Left rotated', inLeft.getCvFrame())

        inRgb = qRgb.tryGet()
        if inRgb is not None:
            cv2.imshow('Color rotated', inRgb.getCvFrame())

        if cv2.waitKey(1) == ord('q'):
            break
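Note that the RotatedRect configuration above swaps width and height, because a 90° rotation transposes the frame's dimensions. That dimension swap can be sanity-checked on the host without a device; a sketch using NumPy's `rot90` in place of the on-device warp:

```python
import numpy as np

# A dummy 400x640 (HxW) mono frame, matching the THE_400_P output shape
frame = np.zeros((400, 640), dtype=np.uint8)

# A 90-degree rotation transposes the dimensions: WxH becomes HxW
rotated = np.rot90(frame)
print(rotated.shape)  # -> (640, 400)
```

This is only a host-side illustration of the geometry; the actual rotation in the example runs on the device inside ImageManip.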

examples/image_manip_tiling.py

Lines changed: 50 additions & 0 deletions

@@ -0,0 +1,50 @@
#!/usr/bin/env python3

import cv2
import depthai as dai

# Create pipeline
pipeline = dai.Pipeline()

camRgb = pipeline.createColorCamera()
camRgb.setPreviewSize(1000, 500)
camRgb.setInterleaved(False)
# Each output tile is 500x500 (half the preview width, which equals the
# preview height here), so height * height * 3 bytes covers one BGR tile
maxFrameSize = camRgb.getPreviewHeight() * camRgb.getPreviewHeight() * 3

# In this example we use 2 imageManips for splitting the original 1000x500
# preview frame into 2 500x500 frames
manip1 = pipeline.createImageManip()
manip1.initialConfig.setCropRect(0, 0, 0.5, 1)
manip1.setMaxOutputFrameSize(maxFrameSize)
camRgb.preview.link(manip1.inputImage)

manip2 = pipeline.createImageManip()
manip2.initialConfig.setCropRect(0.5, 0, 1, 1)
manip2.setMaxOutputFrameSize(maxFrameSize)
camRgb.preview.link(manip2.inputImage)

xout1 = pipeline.createXLinkOut()
xout1.setStreamName('out1')
manip1.out.link(xout1.input)

xout2 = pipeline.createXLinkOut()
xout2.setStreamName('out2')
manip2.out.link(xout2.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as device:
    # Output queues will be used to get the rgb frames from the outputs defined above
    q1 = device.getOutputQueue(name="out1", maxSize=4, blocking=False)
    q2 = device.getOutputQueue(name="out2", maxSize=4, blocking=False)

    while True:
        in1 = q1.tryGet()
        if in1 is not None:
            cv2.imshow("Tile 1", in1.getCvFrame())

        in2 = q2.tryGet()
        if in2 is not None:
            cv2.imshow("Tile 2", in2.getCvFrame())

        if cv2.waitKey(1) == ord('q'):
            break
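The two normalized crop rectangles, (0, 0, 0.5, 1) and (0.5, 0, 1, 1), can be mirrored on the host to confirm that the tiles cover the whole preview frame with no overlap. A NumPy sketch (no device required; this only reproduces the crop math, not the on-device pipeline):

```python
import numpy as np

# Dummy 500x1000 (HxW) BGR preview frame with distinct pixel values
h, w = 500, 1000
frame = np.arange(h * w * 3, dtype=np.uint32).reshape(h, w, 3)

# Same splits as setCropRect(0, 0, 0.5, 1) and setCropRect(0.5, 0, 1, 1)
tile1 = frame[:, : w // 2]
tile2 = frame[:, w // 2 :]

# Stitching the tiles back together reproduces the original frame exactly
assert np.array_equal(np.hstack([tile1, tile2]), frame)
print(tile1.shape, tile2.shape)  # -> (500, 500, 3) (500, 500, 3)
```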
