diff --git a/object-detection-cv25/README.md b/object-detection-cv25/README.md
index 02b1c7cf..49ac5c53 100644
--- a/object-detection-cv25/README.md
+++ b/object-detection-cv25/README.md
@@ -294,13 +294,13 @@ Depending on selected chip, different output is received. The label file is used
 In the system log the chip is sometimes only mentioned as a string, they are mapped as follows:
 
-| Chips | Larod 1 (int) | Larod 3 |
-|-------|--------------|------------------|
-| CPU with TensorFlow Lite | 2 | cpu-tflite |
-| Google TPU | 4 | google-edge-tpu-tflite |
-| Ambarella CVFlow (NN) | 6 | ambarella-cvflow |
-| ARTPEC-8 DLPU | 12 | axis-a8-dlpu-tflite |
-| ARTPEC-9 DLPU | - | a9-dlpu-tflite |
+| Chips                    | Larod 1 (int) | Larod 3                |
+| ------------------------ | ------------- | ---------------------- |
+| CPU with TensorFlow Lite | 2             | cpu-tflite             |
+| Google TPU               | 4             | google-edge-tpu-tflite |
+| Ambarella CVFlow (NN)    | 6             | ambarella-cvflow       |
+| ARTPEC-8 DLPU            | 12            | axis-a8-dlpu-tflite    |
+| ARTPEC-9 DLPU            | -             | a9-dlpu-tflite         |
 
 There are four outputs from MobileNet SSD v2 (COCO) model. The number of detections, cLasses, scores, and locations are shown as below.
 The four location numbers stand for \[top, left, bottom, right\].
 By the way, currently the saved images will be overwritten continuously, so those saved images might not all from the detections of the last frame, if the number of detections is less than previous detection numbers.
diff --git a/remote-debug-example/README.md b/remote-debug-example/README.md
index d83d2325..d6366959 100644
--- a/remote-debug-example/README.md
+++ b/remote-debug-example/README.md
@@ -191,6 +191,23 @@ ssh acap-remote_debug@
 /tmp/gdbserver :1234 /usr/local/packages/remote_debug/remote_debug
 ```
 
+> [!NOTE]
+> If your `manifest.json` file contains runtime options under
+> `acapPackageConf.setup.runOptions`, these are not automatically propagated to
+> the gdbserver. You must explicitly include them when starting the `gdbserver`.
+>
+> For example, if your `manifest.json` contains:
+>
+> ```json
+> "runOptions": "--arg1 value1 --arg2 value2"
+> ```
+>
+> Start the gdbserver with:
+>
+> ```sh
+> /tmp/gdbserver :1234 /usr/local/packages/remote_debug/remote_debug --arg1 value1 --arg2 value2
+> ```
+
 You should see output similar to:
 
 ```sh
diff --git a/tensorflow-to-larod/README.md b/tensorflow-to-larod/README.md
index f21308b9..76c7328d 100644
--- a/tensorflow-to-larod/README.md
+++ b/tensorflow-to-larod/README.md
@@ -122,11 +122,11 @@
 be done using less precision. This generally results in significantly lower
 inference latency and model size with only a slight penalty to the model's accuracy.
 
-| Chip | Supported precision |
-|---------- |------------------ |
-| Edge TPU | INT8 |
-| Common CPUs | FP32, INT8 |
-| Common GPUs | FP32, FP16, INT8 |
+| Chip        | Supported precision |
+| ----------- | ------------------- |
+| Edge TPU    | INT8                |
+| Common CPUs | FP32, INT8          |
+| Common GPUs | FP32, FP16, INT8    |
 
 As noted in the first chapter, this example uses a camera equipped with an Edge TPU.
 As the Edge TPU chip **only** uses INT8 precision, the model will need to be quantized from
diff --git a/vdo-larod/README.md b/vdo-larod/README.md
index 39819ead..0068fac6 100644
--- a/vdo-larod/README.md
+++ b/vdo-larod/README.md
@@ -276,13 +276,13 @@
 Depending on the selected chip, different output is received.
 In previous larod versions, the chip was referred to as a number instead of a string.
 See the table below to understand the mapping:
 
-| Chips | Larod 1 (int) | Larod 3 |
-|-------|--------------|------------------|
-| CPU with TensorFlow Lite | 2 | cpu-tflite |
-| Google TPU | 4 | google-edge-tpu-tflite |
-| Ambarella CVFlow (NN) | 6 | ambarella-cvflow |
-| ARTPEC-8 DLPU | 12 | axis-a8-dlpu-tflite |
-| ARTPEC-9 DLPU | - | a9-dlpu-tflite |
+| Chips                    | Larod 1 (int) | Larod 3                |
+| ------------------------ | ------------- | ---------------------- |
+| CPU with TensorFlow Lite | 2             | cpu-tflite             |
+| Google TPU               | 4             | google-edge-tpu-tflite |
+| Ambarella CVFlow (NN)    | 6             | ambarella-cvflow       |
+| ARTPEC-8 DLPU            | 12            | axis-a8-dlpu-tflite    |
+| ARTPEC-9 DLPU            | -             | a9-dlpu-tflite         |
 
 #### Output - ARTPEC-8 with TensorFlow Lite
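The object-detection-cv25 hunk above keeps a paragraph describing the model's four output tensors: the number of detections, the classes, the scores, and the \[top, left, bottom, right\] locations. As a reading aid only, here is a minimal Python sketch of how such outputs are typically filtered by score; the function name and all values are invented for illustration and are not the example app's actual code:

```python
# Hypothetical helper: filter MobileNet SSD v2 style outputs by score.
# The four inputs mirror the four output tensors described in the README.
def parse_detections(num_detections, classes, scores, boxes, score_threshold=0.5):
    """Return (class, score, (top, left, bottom, right)) tuples above threshold."""
    results = []
    for i in range(int(num_detections)):
        if scores[i] >= score_threshold:
            top, left, bottom, right = boxes[i]  # box order per the README
            results.append((int(classes[i]), scores[i], (top, left, bottom, right)))
    return results

# Made-up values: two detections, the second falls below the threshold.
dets = parse_detections(
    num_detections=2,
    classes=[17, 0],
    scores=[0.91, 0.32],
    boxes=[(0.1, 0.2, 0.5, 0.6), (0.0, 0.0, 1.0, 1.0)],
)
print(dets)  # only the first detection survives the 0.5 threshold
```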
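The tensorflow-to-larod hunk's precision table is about INT8 quantization. For intuition, here is a small, hand-rolled sketch of the affine quantization arithmetic used by TensorFlow Lite style INT8 models, where a real value is represented as `scale * (q - zero_point)`; the scale and zero-point values below are made up for demonstration:

```python
# Sketch of affine INT8 quantization: real_value is approximated by
# scale * (q - zero_point), with q clamped to the signed 8-bit range.
def quantize(x, scale, zero_point):
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))  # clamp to INT8

def dequantize(q, scale, zero_point):
    return scale * (q - zero_point)

scale, zero_point = 0.05, 3        # invented parameters
q = quantize(1.0, scale, zero_point)
approx = dequantize(q, scale, zero_point)  # recovers 1.0 within one step
```

This also shows why quantization costs a little accuracy: every real value is rounded to one of 256 representable levels, so the round trip is only exact up to half a quantization step (`scale / 2`).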