diff --git a/documentation/asciidoc/accessories/ai-camera/details.adoc b/documentation/asciidoc/accessories/ai-camera/details.adoc
index 407a48f1f..ed07d996c 100644
--- a/documentation/asciidoc/accessories/ai-camera/details.adoc
+++ b/documentation/asciidoc/accessories/ai-camera/details.adoc
@@ -165,7 +165,7 @@ There are a number of scaling/cropping/translation operations occurring from the
 
 === Picamera2
 
-IMX500 integration in Picamera2 is very similar to what is available in `rpicam-apps`. Picamera2 has an IMX500 helper class that provides the same functionality as the `rpicam-apps` `IMX500PostProcessingStage` base class. This can be imported to any python script with:
+IMX500 integration in Picamera2 is very similar to what is available in `rpicam-apps`. Picamera2 has an IMX500 helper class that provides the same functionality as the `rpicam-apps` `IMX500PostProcessingStage` base class. This can be imported to any Python script with:
 
 [source,python]
 ----
@@ -175,7 +175,7 @@ from picamera2.devices.imx500 import IMX500
 
 imx500 = IMX500(model_file)
 ----
 
-To retrieve the output tensors, fetch them from the controls. You can then apply additional processing in your python script.
+To retrieve the output tensors, fetch them from the controls. You can then apply additional processing in your Python script.
 
-For example, in an object inference use case such as https://github.com/raspberrypi/picamera2/tree/main/examples/imx500/imx500_object_detection_demo.py[imx500_object_detection_demo.py], the object bounding boxes and confidence values are extracted in `parse_detections()` and draw the boxes on the image in `draw_detections()`:
+For example, in an object inference use case such as https://github.com/raspberrypi/picamera2/tree/main/examples/imx500/imx500_object_detection_demo.py[imx500_object_detection_demo.py], the object bounding boxes and confidence values are extracted in `parse_detections()`, and the boxes are drawn on the image in `draw_detections()`:
@@ -257,6 +257,6 @@ There are a number of scaling/cropping/translation operations occurring from the
 | Automatically calculates region of interest (ROI) crop rectangle on the sensor image to preserve the given aspect ratio. To make the ROI aspect ratio exactly match the input tensor for this network, use `imx500.set_inference_aspect_ratio(imx500.get_input_size())`.
 
 | `IMX500.get_kpi_info(metadata)`
-| Returns the frame level performance indicators logged by the IMX500 for the given image metadata.
+| Returns the frame-level performance indicators logged by the IMX500 for the given image metadata.
 
 |===
diff --git a/documentation/asciidoc/accessories/ai-camera/getting-started.adoc b/documentation/asciidoc/accessories/ai-camera/getting-started.adoc
index 62ab96702..82108cd6c 100644
--- a/documentation/asciidoc/accessories/ai-camera/getting-started.adoc
+++ b/documentation/asciidoc/accessories/ai-camera/getting-started.adoc
@@ -4,7 +4,7 @@ The instructions below describe how to run the pre-packaged MobileNet SSD and Po
 
 === Prerequisites
 
-These instructions assumes you are using the AI Camera attached to either a Raspberry Pi 4 Model B or Raspberry Pi 5 board. With minor changes, you can follow these instructions on other Raspberry Pi models with a camera connector, including the Raspberry Pi Zero 2 W and Raspberry Pi 3 Model B+.
+These instructions assume you are using the AI Camera attached to either a Raspberry Pi 4 Model B or Raspberry Pi 5 board. With minor changes, you can follow these instructions on other Raspberry Pi models with a camera connector, including the Raspberry Pi Zero 2 W and Raspberry Pi 3 Model B+.
 
 First, ensure that your Raspberry Pi runs the latest software. Run the following command to update:
 
@@ -72,7 +72,7 @@ After running the command, you should see a viewfinder that overlays bounding bo
 
 image::images/imx500-mobilenet.jpg[IMX500 MobileNet]
 
-To record video with object detection overlays, use `rpicam-vid` instead. The following command runs `rpicam-hello` with object detection post-processing:
+To record video with object detection overlays, use `rpicam-vid` instead:
 
 [source,console]
 ----
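The `parse_detections()` step referenced in the `details.adoc` hunk above can be sketched roughly as follows. This is a minimal illustration of filtering raw detection outputs by confidence, not the demo's actual code: the flat `(boxes, scores, classes)` layout, the field names, and the threshold value are all assumptions made for the example.

```python
# Hypothetical sketch of the detection-parsing step: keep only the
# detections whose confidence value exceeds a threshold. The tensor
# layout and threshold are illustrative assumptions, not the demo's API.

def parse_detections(boxes, scores, classes, threshold=0.55):
    """Filter raw (box, score, class) triples by confidence."""
    detections = []
    for box, score, cls in zip(boxes, scores, classes):
        if score > threshold:
            detections.append({
                "box": tuple(box),      # assumed (x0, y0, x1, y1) coordinates
                "score": float(score),  # confidence value from the output tensor
                "category": int(cls),   # class index into the network's labels
            })
    return detections


if __name__ == "__main__":
    boxes = [(10, 20, 110, 220), (5, 5, 50, 50)]
    scores = [0.9, 0.2]
    classes = [1, 7]
    # Only the first detection clears the 0.55 threshold.
    print(parse_detections(boxes, scores, classes))
```

In the real demo, a `draw_detections()`-style step would then render each surviving box and score onto the camera frame.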