Camera
======

Camera node is a source of :ref:`image frames <ImgFrame>`. You can control it at runtime with the :code:`inputControl` and :code:`inputConfig` inputs.
It aims to unify the :ref:`ColorCamera` and :ref:`MonoCamera` into one node.

Compared to the :ref:`ColorCamera` node, the Camera node:

- Supports **cam.setSize()**, which replaces both ``cam.setResolution()`` and ``cam.setIspScale()``. The Camera node automatically picks the sensor resolution that fits best and applies the correct scaling to achieve the user-selected size
- Supports **cam.setCalibrationAlpha()**; see the example at :ref:`Undistort camera stream`
- Supports **cam.loadMeshData()** and **cam.setMeshStep()**, which can be used for custom image warping (undistortion, perspective correction, etc.)

Besides the points above, compared to the :ref:`MonoCamera` node, the Camera node:

- Doesn't have an ``out`` output, as it has the same outputs as :ref:`ColorCamera` (``raw``, ``isp``, ``still``, ``preview``, ``video``). This means that ``preview`` outputs 3 planes of the same grayscale frame (3x overhead), while ``isp`` / ``video`` / ``still`` output luma (the useful grayscale information) plus chroma (all values 128), which results in 1.5x bandwidth overhead

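The 1.5x figure can be checked by laying out an NV12 buffer by hand. The sketch below is plain NumPy, not the depthai API, and assumes a hypothetical 1280x800 mono frame:

```python
import numpy as np

# Hypothetical 1280x800 grayscale frame from a mono sensor
h, w = 800, 1280
luma = np.random.randint(0, 256, (h, w), dtype=np.uint8)

# NV12 layout: full-resolution luma plane, then a half-height interleaved
# chroma plane. For a mono sensor all chroma values are 128 ("no color").
chroma = np.full((h // 2, w), 128, dtype=np.uint8)
nv12 = np.concatenate([luma, chroma], axis=0)

print(nv12.nbytes / luma.nbytes)  # 1.5 -> the 1.5x bandwidth overhead

# Recovering the grayscale image is just slicing off the luma plane
gray = nv12[:h, :]
assert np.array_equal(gray, luma)
```
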
How to place it
###############

.. tabs::

  .. code-tab:: py

    pipeline = dai.Pipeline()
    cam = pipeline.create(dai.node.Camera)

  .. code-tab:: c++

    dai::Pipeline pipeline;
    auto cam = pipeline.create<dai::node::Camera>();


Inputs and Outputs
##################

.. code-block::

                              Camera node
                  ┌──────────────────────────────┐
                  │   ┌─────────────┐            │
                  │   │    Image    │  raw       │        raw
                  │   │    Sensor   │---┬--------├────────►
                  │   └────▲────────┘   |        │
                  │        │   ┌--------┘        │
                  │      ┌─┴───▼─┐               │        isp
     inputControl │      │       │-------┬-------├────────►
   ──────────────►│------│  ISP  │ ┌─────▼────┐  │        video
                  │      │       │ │          │--├────────►
                  │      └───────┘ │  Image   │  │        still
     inputConfig  │                │  Post-   │--├────────►
   ──────────────►│----------------│Processing│  │        preview
                  │                │          │--├────────►
                  │                └──────────┘  │
                  └──────────────────────────────┘

**Message types**

- :code:`inputConfig` - :ref:`ImageManipConfig`
- :code:`inputControl` - :ref:`CameraControl`
- :code:`raw` - :ref:`ImgFrame` - RAW10 bayer data. Demo code for unpacking `here <https://github.com/luxonis/depthai-experiments/blob/3f1b2b2/gen2-color-isp-raw/main.py#L13-L32>`__
- :code:`isp` - :ref:`ImgFrame` - YUV420 planar (same as YU12/IYUV/I420)
- :code:`still` - :ref:`ImgFrame` - NV12, suitable for bigger size frames. The image gets created when a capture event is sent to the Camera, so it's like taking a photo
- :code:`preview` - :ref:`ImgFrame` - RGB (or BGR planar/interleaved if configured), mostly suited for small previews and for feeding the image into a :ref:`NeuralNetwork`
- :code:`video` - :ref:`ImgFrame` - NV12, suitable for bigger size frames

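For reference, MIPI RAW10 packing (as produced on the ``raw`` output) stores 4 pixels in 5 bytes: four bytes of 8 MSBs, plus a fifth byte carrying the 2 LSBs of each pixel. The linked demo does the full job; this is just a minimal NumPy sketch of the format itself:

```python
import numpy as np

def unpack_raw10(packed: np.ndarray) -> np.ndarray:
    """Unpack MIPI RAW10: every 5 bytes hold 4 pixels (8 MSBs each,
    plus a shared 5th byte carrying the 2 LSBs of all four)."""
    groups = packed.reshape(-1, 5).astype(np.uint16)
    msb = groups[:, :4]                       # 8 most significant bits
    lsb = groups[:, 4:5]                      # shared low-bits byte
    shifts = np.array([0, 2, 4, 6], dtype=np.uint16)
    pixels = (msb << 2) | ((lsb >> shifts) & 0x3)
    return pixels.reshape(-1)

# Pack four known 10-bit values by hand and check the round trip
vals = np.array([0, 1023, 512, 3], dtype=np.uint16)
packed = np.empty(5, dtype=np.uint8)
packed[:4] = (vals >> 2).astype(np.uint8)
packed[4] = ((vals & 0x3) << np.array([0, 2, 4, 6])).sum()
assert np.array_equal(unpack_raw10(packed), vals)
```
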
**ISP** (image signal processor) is used for bayer transformation, demosaicing, noise reduction, and other image enhancements.
It interacts with the 3A algorithms: **auto-focus**, **auto-exposure**, and **auto-white-balance**, which handle image sensor
adjustments such as exposure time, sensitivity (ISO), and lens position (if the camera module has a motorized lens) at runtime.
Click `here <https://en.wikipedia.org/wiki/Image_processor>`__ for more information.

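As a rough illustration of what an auto-exposure loop does (this is not the firmware's implementation, just the proportional-feedback idea, with made-up target and limit values):

```python
def auto_exposure_step(mean_luma: float, exposure_us: int,
                       target: float = 128.0, gain: float = 0.25,
                       lo: int = 20, hi: int = 33000) -> int:
    """One iteration of a proportional auto-exposure loop: nudge the
    exposure time toward the luma target, clamped to sensor limits."""
    error = (target - mean_luma) / target
    new_exposure = int(exposure_us * (1.0 + gain * error))
    return max(lo, min(hi, new_exposure))

# An underexposed frame (mean luma 64) gets a longer exposure:
print(auto_exposure_step(64.0, 10000))   # 11250
# A badly overexposed frame is pulled down, clamped to the sensor max:
print(auto_exposure_step(255.0, 50000))  # 33000
```
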
**Image Post-Processing** converts YUV420 planar frames from the **ISP** into :code:`video`/:code:`preview`/:code:`still` frames.

``still`` (when a capture is triggered) and ``isp`` work at the maximum camera resolution, while ``video`` and ``preview`` are
limited to a maximum of 4K (3840 x 2160) resolution, which is cropped from ``isp``.
For IMX378 (12MP), the **post-processing** works like this:

.. code-block::

   ┌─────┐    Cropping to     ┌─────────┐    Downscaling     ┌──────────┐
   │ ISP ├───────────────────►│  video  ├───────────────────►│ preview  │
   └─────┘   max 3840x2160    └─────────┘   and cropping     └──────────┘

.. image:: /_static/images/tutorials/isp.jpg

The image above is the ``isp`` output from the Camera (12MP resolution from IMX378). If you aren't downscaling ISP,
the ``video`` output is cropped to 4K (max 3840x2160, due to the limitation of the ``video`` output), as represented by
the blue rectangle. The yellow rectangle represents a cropped ``preview`` output when the preview size is set to a 1:1 aspect
ratio (e.g. when using a 300x300 preview size for the MobileNet-SSD NN model), because the ``preview`` output is derived from
the ``video`` output.

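The crop in the image above can be derived with a little arithmetic. A sketch, assuming the full 3840x2160 ``video`` frame and a 300x300 ``preview``:

```python
# video is cropped from isp to at most 3840x2160; a 1:1 preview is
# center-cropped from video to the preview aspect ratio, then downscaled.
video_w, video_h = 3840, 2160
preview_w, preview_h = 300, 300

# Crop video to the preview aspect ratio (1:1 -> 2160x2160 here)
crop_h = video_h
crop_w = video_h * preview_w // preview_h
x0 = (video_w - crop_w) // 2       # horizontal offset of the crop
print((crop_w, crop_h, x0))        # (2160, 2160, 840)

# Then downscale 2160x2160 -> 300x300
print(crop_w / preview_w)          # 7.2x downscale factor
```
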
Usage
#####

.. tabs::

  .. code-tab:: py

    pipeline = dai.Pipeline()
    cam = pipeline.create(dai.node.Camera)
    cam.setPreviewSize(300, 300)
    cam.setBoardSocket(dai.CameraBoardSocket.CAM_A)
    # Instead of setting the resolution, the user can specify the size; the node
    # will pick the best-fitting sensor resolution and apply scaling
    cam.setSize(1280, 720)

  .. code-tab:: c++

    dai::Pipeline pipeline;
    auto cam = pipeline.create<dai::node::Camera>();
    cam->setPreviewSize(300, 300);
    cam->setBoardSocket(dai::CameraBoardSocket::CAM_A);
    // Instead of setting the resolution, the user can specify the size; the node
    // will pick the best-fitting sensor resolution and apply scaling
    cam->setSize(1280, 720);

Limitations
###########

Here are the known camera limitations for the `RVC2 <https://docs.luxonis.com/projects/hardware/en/latest/pages/rvc/rvc2.html#rvc2>`__:

- **ISP can process about 600 MP/s**, and about **500 MP/s** when the pipeline is also running NNs and the video encoder in parallel
- **3A algorithms** can process about **200-250 FPS overall** (for all camera streams). This is a current limitation of our implementation, and we plan a workaround to run the 3A algorithms only on every Xth frame; no ETA yet

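A quick way to sanity-check a pipeline against the ISP budget is to add up the per-stream pixel rates. The sensor setups below are example assumptions, not a fixed rule:

```python
# Rough ISP throughput check against the RVC2 budget (~600 MP/s standalone,
# ~500 MP/s with NNs and the video encoder running in parallel).
def megapixels_per_sec(width: int, height: int, fps: float) -> float:
    return width * height * fps / 1e6

streams = [
    (4056, 3040, 30),   # e.g. IMX378 at full 12MP
    (1280, 800, 30),    # e.g. two OV9282 mono cameras
    (1280, 800, 30),
]
total = sum(megapixels_per_sec(w, h, f) for w, h, f in streams)
print(round(total, 1))  # 431.3 -> fits within the ~500 MP/s budget
```
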
Examples of functionality
#########################

- :ref:`Undistort camera stream`

Reference
#########

.. tabs::

  .. tab:: Python

    .. autoclass:: depthai.node.Camera
      :members:
      :inherited-members:
      :noindex:

  .. tab:: C++

    .. doxygenclass:: dai::node::Camera
      :project: depthai-core
      :members:
      :private-members:
      :undoc-members:

.. include:: ../../includes/footer-short.rst