diff --git a/source/edgeai/configuration_file.rst b/source/edgeai/configuration_file.rst index fcb0eaedc..19f7f5cd1 100644 --- a/source/edgeai/configuration_file.rst +++ b/source/edgeai/configuration_file.rst @@ -1,15 +1,15 @@ .. _pub_edgeai_configuration: -======================== +######################## Configuring applications -======================== +######################## The demo config file uses YAML format to define input sources, models, outputs -and finally the flows which defines how everything is connected. Config files -for out-of-box demos are kept in ``edgeai-gst-apps/configs`` folder. The -folder contains config files for all the use cases and also multi-input and +and finally the flows, which define how components connect to each other. The +:file:`edgeai-gst-apps/configs` directory has the config files for out-of-box demos. +The folder has config files for all the use cases, including the multi-input and multi-inference case. The folder also has a template YAML file -``app_config_template.yaml`` which has detailed explanation of all the +:file:`app_config_template.yaml`, which has a detailed explanation of all the parameters supported in the config file. Config file is divided in 4 sections: @@ -19,8 +19,9 @@ Config file is divided in 4 sections: #. Outputs #. Flows +****** Inputs -====== +****** The input section defines a list of supported inputs like camera, video files etc. Their properties like shown below. @@ -55,14 +56,14 @@ Below are the details of most commonly used inputs. .. _pub_edgeai_camera_sources: Camera sources (v4l2) --------------------- +===================== **v4l2src** GStreamer element is used to capture frames from camera sources which are exposed as v4l2 devices. In Linux, there are many devices which are implemented as v4l2 devices. Not all of them will be camera devices. You need to make sure the correct device is configured for running the demo successfully. 
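The four numbered sections are tied together by the flows, so a useful sanity check on a parsed config is that every flow refers to defined entries. The sketch below is hypothetical — plain Python dicts stand in for the loaded YAML, and the entry names are made up for illustration, not taken from a shipped config:

```python
# Hypothetical sketch: validate that every flow references defined
# inputs, models and outputs. The dict mirrors the YAML section layout;
# all entry names below are illustrative only.
config = {
    "inputs": {"input0": {"source": "/dev/video-usb-cam0"}},
    "models": {"model0": {"model_path": "/opt/model_zoo/some-model"}},
    "outputs": {"output0": {"sink": "kmssink"}},
    "flows": {"flow0": ["input0", "model0", "output0"]},
}

def check_flows(cfg):
    """Return a list of dangling references found in the flows section."""
    errors = []
    for name, (inp, model, out) in cfg["flows"].items():
        if inp not in cfg["inputs"]:
            errors.append(f"{name}: unknown input '{inp}'")
        if model not in cfg["models"]:
            errors.append(f"{name}: unknown model '{model}'")
        if out not in cfg["outputs"]:
            errors.append(f"{name}: unknown output '{out}'")
    return errors

print(check_flows(config))  # an empty list means every reference resolves
```

A real flow entry may carry additional fields beyond the three references shown here, so a production check would be more permissive.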
-``init_script.sh`` is ran as part of systemd, which detects all cameras connected +:file:`init_script.sh` is run as part of systemd, which detects all cameras connected and prints the detail like below in the console: .. code-block:: bash @@ -79,9 +80,8 @@ and prints the detail like below in the console: script can also be run manually later to get the camera details. -From the above log we can determine that 1 USB camera is connected -(/dev/video-usb-cam0), and 1 CSI camera is connected (/dev/video-imx219-cam0) which is IMX219 raw -sensor and needs ISP. +The console shows one USB camera at :file:`/dev/video-usb-cam0` and +one CSI camera at :file:`/dev/video-imx219-cam0` (an IMX219 raw sensor that requires ISP). Using this method, you can configure correct device for camera capture in the input section of config file. @@ -109,10 +109,10 @@ camera to allow GStreamer to negotiate the format. ``rggb`` for sensor that needs ISP. Video sources ------------- +============= H.264 and H.265 encoded videos can be provided as input sources to the demos. -Sample video files are provided under ``/opt/edgeai-test-data/videos/`` +The :file:`/opt/edgeai-test-data/videos/` directory has sample video files. .. code-block:: yaml @@ -135,12 +135,11 @@ By default the format is set to ``auto`` which will then use the GStreamer bin ``decodebin`` instead. Image sources -------------- +============= -JPEG compressed images can be provided as inputs to the demos. A sample set of -images are provided under ``/opt/edgeai-test-data/images``. The names of the -files are numbered sequentially and incrementally and the demo plays the files -at the fps specified by the user. +The demos accept JPEG-compressed images as inputs. The :file:`/opt/edgeai-test-data/images` +directory has sample images. The filenames use sequential numbering, and the demo plays them +at the user-specified frame rate. .. code-block:: yaml @@ -152,7 +151,7 @@ at the fps specified by the user. 
framerate: 1 RTSP sources ------------- +============ H.264 encoded video streams either coming from a RTSP compliant IP camera or via RTSP server running on a remote PC can be provided as inputs to the demo. @@ -165,8 +164,9 @@ via RTSP server running on a remote PC can be provided as inputs to the demo. height: 720 framerate: 30 +****** Models -====== +****** The model section defines a list of models that are used in the demo. Path to the model directory is a required argument for each model and rest are optional @@ -200,9 +200,9 @@ Below are some of the use case specific properties: The content of the model directory and its structure is discussed in detail in :ref:`pub_edgeai_import_custom_models` - +******* Outputs -======= +******* The output section defines a list of supported outputs. @@ -239,7 +239,7 @@ All supported outputs are listed in template config file. Below are the details of most commonly used outputs Display sink (kmssink) ----------------------- +====================== When you have only one display connected to the SK, kmssink will try to use it for displaying the output buffers. In case you have connected multiple @@ -261,7 +261,7 @@ Following command finds out the connected displays available to use. Configure the required connector ID in the output section of the config file. Video sinks ------------ +=========== The post-processed outputs can be encoded in H.264 format and stored on disk. Please specify the location of the video file in the configuration file. @@ -273,7 +273,7 @@ Please specify the location of the video file in the configuration file. height: 1080 Image sinks ------------ +=========== The post-processed outputs can be stored as JPEG compressed images. Please specify the location of the image files in the configuration file. The images will be named sequentially and incrementally as shown. @@ -286,7 +286,7 @@ The images will be named sequentially and incrementally as shown. 
height: 1080 Remote sinks ------------- +============ Post-processed frames can be encoded as jpeg or h264 frames and send as udp packets to a port. Please specify the sink as remote in the configuration file. The udp port and host to send packets to can be defined. If not, default port is 8081 and host @@ -302,17 +302,16 @@ is 127.0.0.1. host: 127.0.0.1 encoding: jpeg #(jpeg or h264) -A NodeJS server is provided under ``/opt/edgeai-gst-apps/scripts/remote_streaming`` -which establishes a node server on the target and listens to the udp port (8081) -on localhost (127.0.0.1) and can be used to view the frames remotely. +The EdgeAI filesystem includes a Node.js server at :file:`/opt/edgeai-gst-apps/scripts/remote_streaming`. +The server starts a local UDP listener on localhost (127.0.0.1) port 8081 and streams frames for remote viewing. .. code-block:: bash /opt/edgeai-gst-apps# node scripts/remote_streaming/server.js - +***** Flows -===== +***** The flows section defines how inputs, models and outputs are connected. Multiple flows can be defined to achieve multi input, multi inference as shown @@ -338,17 +337,15 @@ for optimization. Along with input, models and outputs it is required to define plane. This is needed because multiple inference outputs can be rendered to same output (Ex: Display). - GStreamer plugins ================= The edgeai-gst-apps essentially constructs GStreamer pipelines for dataflow. This pipeline is constructed optimally and dynamically based on a pool of -specific plugins available on the platform. The defined pool of plugins for -different platform can be found in ``edgeai-gst-apps/configs/gst_plugin_maps.yaml`` -file. +specific plugins available on the platform. See :file:`edgeai-gst-apps/configs/gst_plugin_maps.yaml` +for the pool of plugins defined per platform. -This file contains the plugin used for certain task and the property of plugin +This file lists the plugin used for each task and the property of the plugin (if applicable). 
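The remote sink described above transports each encoded frame as a plain UDP datagram. As a quick illustrative check of that path — not the SDK's actual server, and with a fabricated payload — a loopback send/receive in Python looks like:

```python
import socket

# Loopback sketch of the remote-sink transport. The demo's documented
# defaults are host 127.0.0.1 and port 8081; here the receiver binds an
# ephemeral port so the sketch runs anywhere. The JPEG bytes are fake.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))            # 0 = let the OS pick a free port
receiver.settimeout(5)
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frame = b"\xff\xd8" + b"\x00" * 64 + b"\xff\xd9"   # JPEG SOI ... EOI markers
sender.sendto(frame, ("127.0.0.1", port))

data, _ = receiver.recvfrom(65535)         # one datagram per encoded frame
print(data == frame)

sender.close()
receiver.close()
```

The provided Node.js server plays the receiver role here, listening on the default port 8081 so the frames can be viewed remotely.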
Default GStreamer plugins map for |__PART_FAMILY_NAME__| diff --git a/source/edgeai/docker_environment.rst b/source/edgeai/docker_environment.rst index 97310a028..e6e793a55 100644 --- a/source/edgeai/docker_environment.rst +++ b/source/edgeai/docker_environment.rst @@ -1,8 +1,8 @@ .. _pub_edgeai_docker_env: -================== +################## Docker Environment -================== +################## Docker is a set of "platform as a service" products that uses the OS-level virtualization to deliver software in packages called containers. @@ -16,8 +16,9 @@ additional 3rd party applications and packages as required. .. _pub_edgeai_docker_build_ontarget: +********************* Building Docker image -====================== +********************* The `docker/Dockerfile` in the edgeai-gst-apps repo describes the recipe for creating the Docker container image. Feel free to review and update it to @@ -40,8 +41,9 @@ Initiate the Docker image build as shown, /opt/edgeai-gst-apps/docker# ./docker_build.sh +**************************** Running the Docker container -============================ +**************************** Enter the Docker session as shown, @@ -77,28 +79,29 @@ access camera, display and other hardware accelerators the SoC has to offer. .. note:: After building and running the docker container, one needs to run - ``setup_script.sh`` before running any of the demo applications. + :file:`setup_script.sh` before running any of the demo applications. This is required to rebuild all components against the shared libraries of docker, same should be done when switching back to Yocto .. _pub_edgeai_docker_additional_commands: +************************** Additional Docker commands -========================== +************************** .. note:: This section is provided only for additional reference and not required to run out-of-box demos -**Commit Docker container** - +Commit Docker container +======================= Generally, containers have a short life cycle. 
If the container has any local changes it is good to save the changes on top of the existing Docker image. When re-running the Docker image, the local changes can be restored. Following commands show how to save the changes made to the last container. -Note that this is already done automatically by ``docker_run.sh`` when you exit +Note that this is already done automatically by :file:`docker_run.sh` when you exit the container. .. code-block:: bash @@ -111,7 +114,8 @@ the container. For more information refer: `Commit Docker image `_ -**Save Docker Image** +Save Docker Image +================= Docker image can be saved as tar file by using the command below: @@ -120,9 +124,10 @@ Docker image can be saved as tar file by using the command below: docker save --output For more information refer here. -`Save Docker image `_ +`docker image save `_ -**Load Docker image** +Load Docker image +================= Load a previously saved Docker image using the command below: @@ -131,9 +136,10 @@ Load a previously saved Docker image using the command below: docker load --input For more information refer here. -`Load Docker image `_ +`docker image load `_ -**Remove Docker image** +Remove Docker image +=================== Docker image can be removed by using the command below: @@ -149,7 +155,8 @@ For more information refer `rmi reference `_ and `Image prune reference `_ -**Remove Docker container** +Remove Docker container +======================= Docker container can be removed by using the command below: @@ -220,7 +227,9 @@ current location is the desired location then exit this procedure. 6. Anytime the SD card is updated with a new targetfs, steps (1), (3), and (4) need to be followed. 
-**Additional references** +********************* +Additional references +********************* | https://docs.docker.com/engine/reference/commandline/images/ | https://docs.docker.com/engine/reference/commandline/ps/ diff --git a/source/edgeai/edgeai_dataflows.rst b/source/edgeai/edgeai_dataflows.rst index 84736c086..f2a5ad9f1 100644 --- a/source/edgeai/edgeai_dataflows.rst +++ b/source/edgeai/edgeai_dataflows.rst @@ -1,8 +1,8 @@ .. _pub_edgeai_dataflows: -================= +################# Edge AI dataflows -================= +################# The reference edgeai application at a high level can be split into 3 parts, @@ -16,11 +16,12 @@ GStreamer launch strings that is generated. User can interact with the applicati .. _pub_edgeai_optiflow_data_flow: +******** OpTIFlow -==================== +******** -Image Classification --------------------- +OpTIFlow - Image Classification +=============================== | **Input: USB Camera** | **DL Task: Classification** @@ -58,8 +59,8 @@ GStreamer pipeline: OpTIFlow pipeline for image classification demo with USB camera and display -Object Detection --------------------- +OpTIFlow - Object Detection +=========================== | **Input: IMX219 Camera** | **DL Task: Detection** @@ -99,8 +100,8 @@ GStreamer pipeline: OpTIFlow pipeline for object detection demo with IMX219 camera and save to file -Semantic Segmentation ---------------------- +OpTIFlow - Semantic Segmentation +================================ | **Input: H264 Video** | **DL Task: Segmentation** @@ -139,8 +140,8 @@ GStreamer pipeline: OpTIFlow pipeline for semantic segmentation demo with file input and remote streaming -Single Input Multi Inference ----------------------------- +OpTIFlow - Single Input Multi Inference +======================================= | **Input: H264 Video** | **DL Task: Detection, Detection, Classification, Segmentation** @@ -186,8 +187,8 @@ GStreamer pipeline: OpTIFlow pipeline for single input multi inference -Multi Input Multi 
Inference ----------------------------- +OpTIFlow - Multi Input Multi Inference +====================================== | **Input: USB Camera, H264 Video** | **DL Task: Detection, Detection, Classification, Segmentation** @@ -235,11 +236,12 @@ GStreamer pipeline: OpTIFlow pipeline for multi input multi inference +*************** Python/C++ apps -====================== +*************** -Image Classification --------------------- +Python/C++ apps - Image Classification +====================================== | **Input: USB Camera** | **DL Task: Classification** @@ -281,8 +283,8 @@ GStreamer output pipeline: Python/C++ application data-flow for image classification demo with USB camera and display -Object Detection --------------------- +Python/C++ apps - Object Detection +================================== | **Input: IMX219 Camera** | **DL Task: Detection** @@ -325,8 +327,8 @@ GStreamer output pipeline: Python/C++ application data-flow for object detection demo with IMX219 camera and save to file -Semantic Segmentation ---------------------- +Python/C++ apps - Semantic Segmentation +======================================= | **Input: H264 Video** | **DL Task: Segmentation** @@ -368,8 +370,8 @@ GStreamer output pipeline: Python/C++ application data-flow for semantic segmentation demo with file input and remote streaming -Single Input Multi Inference ----------------------------- +Python/C++ apps - Single Input Multi Inference +============================================== | **Input: H264 Video** | **DL Task: Detection, Detection, Classification, Segmentation** @@ -419,8 +421,8 @@ GStreamer output pipeline: Python/C++ application data-flow for single input multi inference -Multi Input Multi Inference ----------------------------- +Python/C++ apps - Multi Input Multi Inference +============================================= | **Input: USB Camera, H264 Video** | **DL Task: Detection, Detection, Classification, Segmentation** diff --git a/source/edgeai/inference_models.rst 
b/source/edgeai/inference_models.rst index 124a716c8..8c1226ac0 100644 --- a/source/edgeai/inference_models.rst +++ b/source/edgeai/inference_models.rst @@ -1,8 +1,8 @@ .. _pub_edgeai_inference_models: -==================== +#################### Deep learning models -==================== +#################### Neural networks run on TI's C7xMMA accelerator using the TI Deep Learning (TIDL) software. Development tools are available for different levels of expertise to help @@ -25,8 +25,9 @@ In each case, the goal is to acquire or generate a series of `"Model Artifacts" <#dnn-directory-structure>`_ that may be deployed to the |__PART_FAMILY_NAME__| SoC. +*************************** Pretrained Model Evaluation -=========================== +*************************** `TI Edge AI Model Zoo `__ is a large collection of deep learning models validated to work on TI processors @@ -54,8 +55,9 @@ out-of-box examples are available. Precompiled model artifacts may be downloaded directly from the TI model zoo with a browser or by using the `Model Downloader Tool`_ directly in the SDK. +********************* Model Downloader Tool ---------------------- +********************* Use the **Model Downloader Tool** in the SDK to download more models on target as shown, @@ -65,7 +67,7 @@ Use the **Model Downloader Tool** in the SDK to download more models on target a The script will launch an interactive menu showing the list of available, pre-imported models for download. The downloaded models will be placed -under ``/opt/model_zoo/`` directory +under the :file:`/opt/model_zoo/` directory .. figure:: ../images/edgeai/model_downloader.png :align: center @@ -80,8 +82,9 @@ The script can also be used in a non-interactive way as shown below: .. 
_pub_edgeai_model_development_for_beginners: +******************** Model Training Tools -==================== +******************** Models within the TI model zoo are used as a starting point for "Transfer Learning", and may be retrained for custom use-cases on the developer's dataset. This is considered @@ -116,8 +119,9 @@ Training code is open source and available for modification as necessary. .. _pub_edgeai_import_custom_models: +******************** Import Custom Models -==================== +******************** The Processor SDK Linux Edge AI for |__PART_FAMILY_NAME__| supports importing pre-trained custom models to run inference on target using the "Bring Your Own Model" diff --git a/source/edgeai/measure_perf.rst b/source/edgeai/measure_perf.rst index e129575f6..f6ac7bb18 100644 --- a/source/edgeai/measure_perf.rst +++ b/source/edgeai/measure_perf.rst @@ -1,14 +1,15 @@ .. _pub_edgeai_perf_viz_tool: -===================== +##################### Measuring performance -===================== +##################### There are simple tools to get the performance numbers like core loadings, DDR bandwidths, HWA loadings, GStreamer element latencies etc.. on the bash terminal. +******************************************** GStreamer plugin for Performance measurement --------------------------------------------- +******************************************** This custom GStreamer plugin allows users to include these non-intrusive elements in the pipeline which overlays the performance information directly on the output image displayed on the screen. The entire processing, @@ -48,8 +49,9 @@ A preview of performance overlay on the display is as shown, :scale: 30 :align: center +*************** Perf-stats tool ---------------- +*************** Perf-stats tool is a simple cpp application which prints stats on the terminal and updates it every second. 
To use this tool, it needs to be compiled and @@ -78,8 +80,9 @@ below is the sample output of the tool DDR: WRITE BW: AVG = 332 MB/s, PEAK = 2138 MB/s DDR: TOTAL BW: AVG = 1787 MB/s, PEAK = 8278 MB/s +***************** Parse GST Tracers ------------------ +***************** GStreamer has a feature called tracers to get useful statistics like element wise latency, cpu loading, etc. as a part of GST debug logs. These logs are very diff --git a/source/edgeai/sample_apps.rst b/source/edgeai/sample_apps.rst index 04a4e1554..f60381b7d 100644 --- a/source/edgeai/sample_apps.rst +++ b/source/edgeai/sample_apps.rst @@ -1,8 +1,8 @@ .. _pub_edgeai_sample_apps: -=================== +################### Edge AI sample apps -=================== +################### There are various ways you can explore running a typical Edge AI usecase on |__PART_FAMILY_NAME__| EVM, @@ -18,8 +18,9 @@ The SDK is packaged with networks which does 3 DL tasks as below, - **Object Detection**: Detects and draws bounding boxes around the objects, also classifies the objects to one of the classes in dataset - **Semantic Segmentation**: Classifies each pixel into class in dataset +****************** Out-of-box GUI app -================== +****************** When the |__PART_FAMILY_NAME__| EVM is powered on with SD card in place, the **Edge AI Gallery** comes up on boot as shown. @@ -36,9 +37,9 @@ custom input (Camera/VideoFile/Image) and a custom model available in the filesystem. This will automatically construct a GStreamer pipeline with required elements and launch the application. 
-- For a model to pop up on GUI, it needs to be present under ``/opt/model_zoo/`` -- For a videofile to pop up on GUI, the videos needs to be present under ``/opt/edgeai-test-data/videos/`` -- For an image to pop up on GUI, the images needs to be present under ``/opt/edgeai-test-data/iamges/`` +- For a model to pop up on GUI, it needs to be present under :file:`/opt/model_zoo/` +- For a videofile to pop up on GUI, the video needs to be present under :file:`/opt/edgeai-test-data/videos/` +- For an image to pop up on GUI, the image needs to be present under :file:`/opt/edgeai-test-data/images/` .. note:: @@ -78,8 +79,9 @@ elements and launch the application. .. _pub_edgeai_python_cpp_demos: +*************** Python/C++ apps -=============== +*************** Python based demos are simple executable scripts written for image classification, object detection and semantic segmentation. Demos are @@ -87,8 +89,8 @@ configured using a YAML file. Details on configuration file parameters can be found in :ref:`pub_edgeai_configuration` Sample configuration files for out of the box demos can be found in -``edgeai-gst-apps/configs`` this folder also contains a template config file -which has brief info on each configurable parameter ``edgeai-gst-apps/configs/app_config_template.yaml`` +:file:`edgeai-gst-apps/configs`. This folder also contains a template config file, +:file:`edgeai-gst-apps/configs/app_config_template.yaml`, which has brief info on each configurable parameter. Here is how a Python based image classification demo can be run, @@ -134,8 +136,9 @@ C++ apps can be modified and built on the target as well using below steps .. _pub_edgeai_optiflow_apps: +******** OpTIFlow -======== +******** In Edge AI Python and C++ applications, post processing and DL inference are done between appsink and appsrc application boundaries. This makes the data flow sub-optimal because of @@ -167,8 +170,9 @@ To just dump the end-to-end pipeline use the following command. 
Python, C++ and OpTIFlow applications are similar by construction and can accept the same config file +***************** EdgeAI Tiovx Apps -================= +***************** EdgeAI Tiovx Apps creates and runs optimized end-to-end OpenVx analytics pipelines based on the user defined configuration. diff --git a/source/edgeai/sdk_components.rst b/source/edgeai/sdk_components.rst index 6458c5ac0..2019505cb 100644 --- a/source/edgeai/sdk_components.rst +++ b/source/edgeai/sdk_components.rst @@ -1,8 +1,8 @@ .. _pub_sdk_components: -=============== +############## SDK Components -=============== +############## The Processor SDK Linux Edge AI for |__PART_FAMILY_NAME__| mainly comprises of three layers, @@ -10,8 +10,9 @@ The Processor SDK Linux Edge AI for |__PART_FAMILY_NAME__| mainly comprises of - **Linux foundations** - **Firmware builder** +************************* Edge AI application stack -========================= +************************* The Edge AI applications are designed for users to quickly evaluate various deep learning networks with real-time inputs on the TI SoCs. Users can @@ -33,7 +34,7 @@ build and install steps please refer to **edgeai-app-stack** on `GitHub `_ edgeai-tiovx-modules --------------------- +==================== This repo provides OpenVx modules which help access underlying hardware accelerators in the SoC and serves as a bridge between GStreamer custom elements and underlying OpenVx custom kernels. @@ -81,7 +82,7 @@ Source code and documentation: `TI Edge AI TIOVX modules `_ to explore more! +**************** Firmware builder -================ +**************** |__PART_FAMILY_NAME__| firmware builder package is required only when dealing with low level software components such as remote core firmware, drivers to diff --git a/source/edgeai/sdk_overview.rst b/source/edgeai/sdk_overview.rst index 0a80945a1..900950754 100644 --- a/source/edgeai/sdk_overview.rst +++ b/source/edgeai/sdk_overview.rst @@ -1,8 +1,8 @@ .. 
_pub_sdk_overview: -======== +######## Overview -======== +######## **Welcome to Processor SDK Linux Edge AI for** |__PART_FAMILY_NAME__| **!**