
Commit a96b4fd

Fix building docs
1 parent 5d7eb49

File tree

1 file changed (+3, -3)

docs/source/tutorials/low-latency.rst

Lines changed: 3 additions & 3 deletions
@@ -111,7 +111,7 @@ Encoded frames
 You can also reduce frame latency by using `Zero-Copy <https://github.com/luxonis/depthai-core/tree/message_zero_copy>`__
 branch of the DepthAI. This will pass pointers (at XLink level) to cv2.Mat instead of doing memcopy (as it currently does),
 so performance improvement would depend on the image sizes you are using.
-(Note: API differs and not all functionality is available as is on the `message_zero_copy` branch)
+(Note: API differs and not all functionality is available as-is on the `message_zero_copy` branch)
 
 
 Reducing latency when running NN
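
To see whether a change like the zero-copy transfer described in this hunk actually helps, frame latency can be measured on the host. Below is a minimal sketch, assuming the mainline depthai Python API (not the `message_zero_copy` branch) and a device with a color camera; the `preview` stream name is illustrative:

```python
import depthai as dai

# Minimal latency-measurement sketch: stream camera frames to the host
# and compare the device capture timestamp against the host clock.
pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("preview")
cam.preview.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="preview", maxSize=1, blocking=False)
    while True:
        frame = q.get()  # dai.ImgFrame
        # Device timestamps are synced to the host clock, so the difference
        # approximates sensor-to-host latency for this frame.
        latency_ms = (dai.Clock.now() - frame.getTimestamp()).total_seconds() * 1000
        print(f"Frame latency: {latency_ms:.2f} ms")
```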
@@ -130,8 +130,8 @@ By default, NN nodes are running 2 threads, 1 NCE/thread, and we suggest compiling
 available SHAVE cores of the pipeline. This configuration will provide best throughput, as all threads can run freely.
 Compiling the model for more SHAVE cores will only provide marginal improvement, due to:
 
-1. `Model optimizer`__ doing a great work at optimizing the model
-2. On-deivce parallelization of NN operations (splitting the operation task between multiple SHAVEs) doesn't scale linearly due to " `memory wall <https://en.wikipedia.org/wiki/Random-access_memory#Memory_wall>`__ "
+1. `Model optimizer <https://docs.luxonis.com/en/latest/pages/model_conversion/#model-optimizer>`__ doing a great work at optimizing the model
+2. On-device parallelization of NN operations (splitting the operation task between multiple SHAVEs) doesn't scale linearly due to " `memory wall <https://en.wikipedia.org/wiki/Random-access_memory#Memory_wall>`__ "
 
 To minimize the latency, though, we should allocate all resources to the single inference. To get lowest latency (which will result in much lower FPS),
 we suggest the following:
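
The single-inference configuration this hunk refers to can be sketched as follows. A minimal illustration assuming the depthai Python API; the blob path is a hypothetical placeholder for a model compiled for the maximum available SHAVE cores:

```python
import depthai as dai

# Minimal sketch: give a single inference all allocated resources for the
# lowest latency, at the cost of throughput (FPS).
pipeline = dai.Pipeline()

nn = pipeline.create(dai.node.NeuralNetwork)
# Hypothetical blob, assumed to be compiled for the maximum available
# SHAVE cores (e.g. via blobconverter's `shaves` parameter).
nn.setBlobPath("model_max_shaves.blob")
# One inference thread, so a single inference is never waiting on another.
nn.setNumInferenceThreads(1)
```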
