# DepthAI v2 → v3 Porting Guide

This document describes the changes between the v2 and v3 APIs of DepthAI and how to migrate existing code.

## What's new in the v3 API

* No more **explicit** XLink nodes – the XLink “bridges” are created automatically.
* Host nodes – nodes that run on the host machine now work cleanly with device‑side nodes.
* Custom host nodes – users can create custom nodes that run on the host machine (see the sketch after this list).

  * Both `ThreadedHostNode` and `HostNode` are supported.
  * `ThreadedHostNode` works similarly to `ScriptNode`; the user **specifies** a `run` function that executes in a separate thread.
  * `HostNode` exposes an input map `inputs` whose entries are implicitly synced.
  * Available in both Python and C++.
* Record‑and‑replay nodes.
* `Pipeline` now has a live device that can be queried during pipeline creation.
* Support for the new **Model Zoo**.
* `ImageManip` has a refreshed API with better‑defined behaviour.
* `ColorCamera` and `MonoCamera` are deprecated in favour of the new `Camera` node.
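
As a first taste, here is a minimal custom host node. This is a sketch, not a definitive implementation: it assumes, as in upstream examples, that `ThreadedHostNode` subclasses declare ports via `createInput()`, poll `isRunning()`, and are registered with `pipeline.create`.

```python
import depthai as dai

class FrameLogger(dai.node.ThreadedHostNode):
    """Hypothetical host-side node that logs every frame it receives."""

    def __init__(self):
        super().__init__()
        self.input = self.createInput()  # assumed API for declaring a host-side input

    def run(self):
        # Runs in its own thread once the pipeline starts, like a Script node's loop
        while self.isRunning():
            frame = self.input.get()  # blocking read
            print(f"Got frame {frame.getSequenceNum()}")

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.Camera).build()
logger = pipeline.create(FrameLogger)  # custom nodes are created like built-in ones
cam.requestOutput((640, 480), type=dai.ImgFrame.Type.RGB888p).link(logger.input)
pipeline.start()
```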

---

## Minimal changes required

* Remove the explicit creation of `dai.Device` (unless you intentionally pass a live device handle via the pipeline constructor – a rare edge case).
* Remove explicit XLink nodes.
* Replace `dai.Device(pipeline)` with `pipeline.start()`.
* Replace any `.getOutputQueue()` calls with `output.createOutputQueue()`.
* Replace any `.getInputQueue()` calls with `input.createInputQueue()` (see the snippet below).
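
In code, the queue changes look like this. A sketch, assuming the `maxSize`/`blocking` parameters and the camera's `inputControl` input carry over from v2; verify against the v3 reference.

```python
# v2 – queues were fetched from the device by stream name:
# video = device.getOutputQueue(name="video", maxSize=4, blocking=False)
# control = device.getInputQueue("control")

# v3 – queues are created directly on a node's outputs and inputs
videoQueue = camRgb.video.createOutputQueue(maxSize=4, blocking=False)
controlQueue = camRgb.inputControl.createInputQueue()
controlQueue.send(dai.CameraControl())  # send messages from the host to the device
```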

---

## Quick port: simple RGB stream example

Below, the old v2 code is commented with `# ORIG` and the new code with `# NEW`.

```python
#!/usr/bin/env python3

import cv2
import depthai as dai

# Create pipeline
pipeline = dai.Pipeline()

# Define source and output
camRgb = pipeline.create(dai.node.ColorCamera)

# ORIG – explicit XLink removed in v3
# xoutVideo = pipeline.create(dai.node.XLinkOut)
# xoutVideo.setStreamName("video")

# Properties
camRgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setVideoSize(1920, 1080)

# Linking
# ORIG
# camRgb.video.link(xoutVideo.input)
# NEW – output queue straight from the node
videoQueue = camRgb.video.createOutputQueue()

# ORIG – entire `with dai.Device` block removed
# with dai.Device(pipeline) as device:
#     video = device.getOutputQueue(name="video", maxSize=1, blocking=False)
#     while True:
# NEW – start the pipeline
pipeline.start()
while pipeline.isRunning():
    videoIn = videoQueue.get()  # blocking
    cv2.imshow("video", videoIn.getCvFrame())
    if cv2.waitKey(1) == ord('q'):
        break
```

This runs on RVC2 devices. Note that `ColorCamera`/`MonoCamera` nodes are deprecated on RVC4; see the next section for using `Camera` instead.

---

## Porting `ColorCamera` / `MonoCamera` usage to `Camera`

The new `Camera` node can expose as many outputs as you request.

```python
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setPreviewSize(300, 300)
camRgb.setInterleaved(False)
camRgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)
outputQueue = camRgb.preview.createOutputQueue()
```

turns into

```python
camRgb = pipeline.create(dai.node.Camera).build()  # don’t forget .build()
cameraOutput = camRgb.requestOutput((300, 300), type=dai.ImgFrame.Type.RGB888p)  # replaces .preview
outputQueue = cameraOutput.createOutputQueue()
```

Request multiple outputs simply by calling `requestOutput` again. For full‑resolution use‑cases that previously used `.isp`, call `requestFullResolutionOutput()` instead.
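
For example (a sketch reusing the calls above; the second output's size and `NV12` type are illustrative choices only):

```python
cam = pipeline.create(dai.node.Camera).build()
preview = cam.requestOutput((300, 300), type=dai.ImgFrame.Type.RGB888p)
video = cam.requestOutput((640, 360), type=dai.ImgFrame.Type.NV12)  # second output from the same sensor
full = cam.requestFullResolutionOutput()  # replaces the old .isp output
```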

For former `MonoCamera` pipelines, replace the `.out` output with `requestOutput`, e.g.

```python
mono = pipeline.create(dai.node.Camera).build()
monoOut = mono.requestOutput((1280, 720), type=dai.ImgFrame.Type.GRAY8)
```

---

## Porting the old `ImageManip` to the new API

The new API records every transformation in the order it is added and configures separately *how* the final image is resized.
See the [official documentation](https://docs.luxonis.com/software/v3/depthai-components/nodes/image_manip/) for full details.
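
For instance, cropping and then resizing can be expressed as a sequence of config calls. A sketch: `addCrop` appears in the v3 example below, while `addResize` is an assumed helper to verify against the linked docs.

```python
manip = pipeline.create(dai.node.ImageManip)
# Transformations are applied in the order they are added: crop first, then resize
manip.initialConfig.addCrop(0, 0, 500, 500)
manip.initialConfig.addResize(256, 256)  # assumed helper; check the docs above
preview.link(manip.inputImage)
```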

### v2 example

```python
#!/usr/bin/env python3

import cv2
import depthai as dai

# Create pipeline
pipeline = dai.Pipeline()

camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setPreviewSize(1000, 500)
camRgb.setInterleaved(False)
maxFrameSize = camRgb.getPreviewHeight() * camRgb.getPreviewWidth() * 3

# In this example we use 2 imageManips for splitting the original 1000x500
# preview frame into 2 500x500 frames
manip1 = pipeline.create(dai.node.ImageManip)
manip1.initialConfig.setCropRect(0, 0, 0.5, 1)
manip1.setMaxOutputFrameSize(maxFrameSize)
camRgb.preview.link(manip1.inputImage)

manip2 = pipeline.create(dai.node.ImageManip)
manip2.initialConfig.setCropRect(0.5, 0, 1, 1)
manip2.setMaxOutputFrameSize(maxFrameSize)
camRgb.preview.link(manip2.inputImage)

xout1 = pipeline.create(dai.node.XLinkOut)
xout1.setStreamName('out1')
manip1.out.link(xout1.input)

xout2 = pipeline.create(dai.node.XLinkOut)
xout2.setStreamName('out2')
manip2.out.link(xout2.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as device:
    # Output queues will be used to get the rgb frames from the outputs defined above
    q1 = device.getOutputQueue(name="out1", maxSize=4, blocking=False)
    q2 = device.getOutputQueue(name="out2", maxSize=4, blocking=False)

    while True:
        if q1.has():
            cv2.imshow("Tile 1", q1.get().getCvFrame())

        if q2.has():
            cv2.imshow("Tile 2", q2.get().getCvFrame())

        if cv2.waitKey(1) == ord('q'):
            break
```

### v3 equivalent

```python
#!/usr/bin/env python3

import cv2
import depthai as dai

# Create pipeline
pipeline = dai.Pipeline()

camRgb = pipeline.create(dai.node.Camera).build()
preview = camRgb.requestOutput((1000, 500), type=dai.ImgFrame.Type.RGB888p)

# In this example we use 2 imageManips for splitting the original 1000x500
# preview frame into 2 500x500 frames
manip1 = pipeline.create(dai.node.ImageManip)
manip1.initialConfig.addCrop(0, 0, 500, 500)
preview.link(manip1.inputImage)

manip2 = pipeline.create(dai.node.ImageManip)
manip2.initialConfig.addCrop(500, 0, 500, 500)
preview.link(manip2.inputImage)

q1 = manip1.out.createOutputQueue()
q2 = manip2.out.createOutputQueue()

pipeline.start()
with pipeline:
    while pipeline.isRunning():
        if q1.has():
            cv2.imshow("Tile 1", q1.get().getCvFrame())

        if q2.has():
            cv2.imshow("Tile 2", q2.get().getCvFrame())

        if cv2.waitKey(1) == ord('q'):
            break
```