Commit 61da0d1

fix(edgeai): Update section headers per sphinx guidelines
Update all rst files under source/edgeai to conform to sphinx guidelines
for section headers [0]

[0] - https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html#sections

Signed-off-by: Chirag Shilwant <[email protected]>
1 parent 68e02a8 commit 61da0d1
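The heading convention applied in the hunks below is consistent with the adornment order in the Sphinx guidelines linked above: '#' over- and underline for the document title, '*' over- and underline for top-level sections, and '=' underline for subsections. A minimal RST sketch (the heading texts are placeholders, only the adornment characters matter):

    ##############
    Document title
    ##############

    *************
    Section title
    *************

    Subsection title
    ================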

File tree

8 files changed: 110 additions & 83 deletions


source/edgeai/configuration_file.rst

Lines changed: 24 additions & 24 deletions
@@ -1,8 +1,8 @@
 .. _pub_edgeai_configuration:
 
-========================
+########################
 Configuring applications
-========================
+########################
 
 The demo config file uses YAML format to define input sources, models, outputs
 and finally the flows which defines how everything is connected. Config files
@@ -19,8 +19,9 @@ Config file is divided in 4 sections:
 #. Outputs
 #. Flows
 
+******
 Inputs
-======
+******
 
 The input section defines a list of supported inputs like camera, video files etc.
 Their properties like shown below.
@@ -55,7 +56,7 @@ Below are the details of most commonly used inputs.
 .. _pub_edgeai_camera_sources:
 
 Camera sources (v4l2)
----------------------
+=====================
 
 **v4l2src** GStreamer element is used to capture frames from camera sources
 which are exposed as v4l2 devices. In Linux, there are many devices which are
@@ -80,8 +81,8 @@ and prints the detail like below in the console:
 script can also be run manually later to get the camera details.
 
 From the above log we can determine that 1 USB camera is connected
-(:file:`/dev/video-usb-cam0`), and 1 CSI camera is connected (:file:`/dev/video-imx219-cam0`) which is IMX219 raw
-sensor and needs ISP.
+(:file:`/dev/video-usb-cam0`), and 1 CSI camera is connected (:file:`/dev/video-imx219-cam0`),
+which is IMX219 raw sensor and needs ISP.
 
 Using this method, you can configure correct device for camera capture in the
 input section of config file.
@@ -109,10 +110,10 @@ camera to allow GStreamer to negotiate the format. ``rggb`` for sensor
 that needs ISP.
 
 Video sources
--------------
+=============
 
 H.264 and H.265 encoded videos can be provided as input sources to the demos.
-Sample video files are provided under :file:`/opt/edgeai-test-data/videos/`
+The :file:`/opt/edgeai-test-data/videos/` directory contains sample video files.
 
 .. code-block:: yaml
 
@@ -135,12 +136,11 @@ By default the format is set to ``auto`` which will then use the GStreamer
 bin ``decodebin`` instead.
 
 Image sources
--------------
+=============
 
-JPEG compressed images can be provided as inputs to the demos. A sample set of
-images are provided under :file:`/opt/edgeai-test-data/images`. The names of the
-files are numbered sequentially and incrementally and the demo plays the files
-at the fps specified by the user.
+JPEG compressed images can be provided as inputs to the demos. The :file:`/opt/edgeai-test-data/images`
+directory contains sample images. The names of the files are numbered sequentially and incrementally
+and the demo plays the files at the fps specified by the user.
 
 .. code-block:: yaml
 
@@ -152,7 +152,7 @@ at the fps specified by the user.
 framerate: 1
 
 RTSP sources
-------------
+============
 
 H.264 encoded video streams either coming from a RTSP compliant IP camera or
 via RTSP server running on a remote PC can be provided as inputs to the demo.
@@ -165,8 +165,9 @@ via RTSP server running on a remote PC can be provided as inputs to the demo.
 height: 720
 framerate: 30
 
+******
 Models
-======
+******
 
 The model section defines a list of models that are used in the demo. Path to
 the model directory is a required argument for each model and rest are optional
@@ -200,9 +201,9 @@ Below are some of the use case specific properties:
 The content of the model directory and its structure is discussed in detail in
 :ref:`pub_edgeai_import_custom_models`
 
-
+*******
 Outputs
-=======
+*******
 
 The output section defines a list of supported outputs.
 
@@ -239,7 +240,7 @@ All supported outputs are listed in template config file.
 Below are the details of most commonly used outputs
 
 Display sink (kmssink)
----------------------- 
+======================
 
 When you have only one display connected to the SK, kmssink will try to use
 it for displaying the output buffers. In case you have connected multiple
@@ -261,7 +262,7 @@ Following command finds out the connected displays available to use.
 Configure the required connector ID in the output section of the config file.
 
 Video sinks
------------ 
+===========
 The post-processed outputs can be encoded in H.264 format and stored on disk.
 Please specify the location of the video file in the configuration file.
 
@@ -273,7 +274,7 @@ Please specify the location of the video file in the configuration file.
 height: 1080
 
 Image sinks
------------ 
+===========
 The post-processed outputs can be stored as JPEG compressed images.
 Please specify the location of the image files in the configuration file.
 The images will be named sequentially and incrementally as shown.
@@ -286,7 +287,7 @@ The images will be named sequentially and incrementally as shown.
 height: 1080
 
 Remote sinks
------------- 
+============
 Post-processed frames can be encoded as jpeg or h264 frames and send as udp packets
 to a port. Please specify the sink as remote in the configuration file. The udp port and
 host to send packets to can be defined. If not, default port is 8081 and host
@@ -310,9 +311,9 @@ on localhost (127.0.0.1) and can be used to view the frames remotely.
 
 /opt/edgeai-gst-apps# node scripts/remote_streaming/server.js
 
-
+*****
 Flows
-=====
+*****
 
 The flows section defines how inputs, models and outputs are connected.
 Multiple flows can be defined to achieve multi input, multi inference as shown
@@ -338,7 +339,6 @@ for optimization. Along with input, models and outputs it is required to define
 plane. This is needed because multiple inference outputs can be rendered to same
 output (Ex: Display).
 
-
 GStreamer plugins
 =================
 
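For orientation, a short sketch of the resulting heading outline of configuration_file.rst after this change, reconstructed from the hunks above (the remaining sections follow the same pattern):

    ########################
    Configuring applications
    ########################

    ******
    Inputs
    ******

    Camera sources (v4l2)
    =====================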

source/edgeai/docker_environment.rst

Lines changed: 23 additions & 14 deletions
@@ -1,8 +1,8 @@
 .. _pub_edgeai_docker_env:
 
-==================
+##################
 Docker Environment
-==================
+##################
 
 Docker is a set of "platform as a service" products that uses the OS-level
 virtualization to deliver software in packages called containers.
@@ -16,8 +16,9 @@ additional 3rd party applications and packages as required.
 
 .. _pub_edgeai_docker_build_ontarget:
 
+*********************
 Building Docker image
-======================
+*********************
 
 The `docker/Dockerfile` in the edgeai-gst-apps repo describes the recipe for
 creating the Docker container image. Feel free to review and update it to
@@ -40,8 +41,9 @@ Initiate the Docker image build as shown,
 
 /opt/edgeai-gst-apps/docker# ./docker_build.sh
 
+****************************
 Running the Docker container
-============================
+****************************
 
 Enter the Docker session as shown,
 
@@ -83,16 +85,17 @@ access camera, display and other hardware accelerators the SoC has to offer.
 
 .. _pub_edgeai_docker_additional_commands:
 
+**************************
 Additional Docker commands
-==========================
+**************************
 
 .. note::
 
 This section is provided only for additional reference and not required to
 run out-of-box demos
 
-**Commit Docker container**
-
+Commit Docker container
+=======================
 Generally, containers have a short life cycle. If the container has any local
 changes it is good to save the changes on top of the existing Docker image.
 When re-running the Docker image, the local changes can be restored.
@@ -111,7 +114,8 @@ the container.
 For more information refer:
 `Commit Docker image <https://docs.docker.com/engine/reference/commandline/commit/>`_
 
-**Save Docker Image**
+Save Docker Image
+=================
 
 Docker image can be saved as tar file by using the command below:
 
@@ -120,9 +124,10 @@ Docker image can be saved as tar file by using the command below:
 docker save --output <pre_built_docker_image.tar>
 
 For more information refer here.
-`Save Docker image <https://docs.docker.com/engine/reference/commandline/save/>`_
+`docker image save <https://docs.docker.com/engine/reference/commandline/save/>`_
 
-**Load Docker image**
+Load Docker image
+=================
 
 Load a previously saved Docker image using the command below:
 
@@ -131,9 +136,10 @@ Load a previously saved Docker image using the command below:
 docker load --input <pre_built_docker_image.tar>
 
 For more information refer here.
-`Load Docker image <https://docs.docker.com/engine/reference/commandline/load/>`_
+`docker image load <https://docs.docker.com/engine/reference/commandline/load/>`_
 
-**Remove Docker image**
+Remove Docker image
+===================
 
 Docker image can be removed by using the command below:
 
@@ -149,7 +155,8 @@ For more information refer
 `rmi reference <https://docs.docker.com/engine/reference/commandline/rmi/>`_ and
 `Image prune reference <https://docs.docker.com/engine/reference/commandline/image_prune/>`_
 
-**Remove Docker container**
+Remove Docker container
+=======================
 
 Docker container can be removed by using the command below:
 
@@ -220,7 +227,9 @@ current location is the desired location then exit this procedure.
 6. Anytime the SD card is updated with a new targetfs, steps (1), (3), and
 (4) need to be followed.
 
-**Additional references**
+*********************
+Additional references
+*********************
 
 | https://docs.docker.com/engine/reference/commandline/images/
 | https://docs.docker.com/engine/reference/commandline/ps/
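In docker_environment.rst the change also promotes the bold pseudo-headings (for example **Save Docker Image**) to real subsections with an '=' underline, so they appear as proper entries in the generated section hierarchy. A sketch of the resulting structure under the new top-level section, reconstructed from the hunks above:

    **************************
    Additional Docker commands
    **************************

    Commit Docker container
    =======================

    Save Docker Image
    =================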

source/edgeai/edgeai_dataflows.rst

Lines changed: 18 additions & 14 deletions
@@ -1,8 +1,8 @@
 .. _pub_edgeai_dataflows:
 
-=================
+#################
 Edge AI dataflows
-=================
+#################
 
 The reference edgeai application at a high level can be split into 3 parts,
 
@@ -16,11 +16,14 @@ GStreamer launch strings that is generated. User can interact with the applicati
 
 .. _pub_edgeai_optiflow_data_flow:
 
+********
 OpTIFlow
-====================
+********
+
+.. _pub_edgeai_optiflow_image_classification:
 
 Image Classification
--------------------- 
+====================
 
 | **Input: USB Camera**
 | **DL Task: Classification**
@@ -59,7 +62,7 @@ GStreamer pipeline:
 OpTIFlow pipeline for image classification demo with USB camera and display
 
 Object Detection
--------------------- 
+================
 
 | **Input: IMX219 Camera**
 | **DL Task: Detection**
@@ -100,7 +103,7 @@ GStreamer pipeline:
 OpTIFlow pipeline for object detection demo with IMX219 camera and save to file
 
 Semantic Segmentation
--------------------- -
+=====================
 
 | **Input: H264 Video**
 | **DL Task: Segmentation**
@@ -140,7 +143,7 @@ GStreamer pipeline:
 OpTIFlow pipeline for semantic segmentation demo with file input and remote streaming
 
 Single Input Multi Inference
---------------------------- -
+============================
 
 | **Input: H264 Video**
 | **DL Task: Detection, Detection, Classification, Segmentation**
@@ -187,7 +190,7 @@ GStreamer pipeline:
 OpTIFlow pipeline for single input multi inference
 
 Multi Input Multi Inference
---------------------------- -
+===========================
 
 | **Input: USB Camera, H264 Video**
 | **DL Task: Detection, Detection, Classification, Segmentation**
@@ -235,11 +238,12 @@ GStreamer pipeline:
 
 OpTIFlow pipeline for multi input multi inference
 
+***************
 Python/C++ apps
-======================
+***************
 
 Image Classification
--------------------- 
+====================
 
 | **Input: USB Camera**
 | **DL Task: Classification**
@@ -282,7 +286,7 @@ GStreamer output pipeline:
 Python/C++ application data-flow for image classification demo with USB camera and display
 
 Object Detection
--------------------- 
+================
 
 | **Input: IMX219 Camera**
 | **DL Task: Detection**
@@ -326,7 +330,7 @@ GStreamer output pipeline:
 Python/C++ application data-flow for object detection demo with IMX219 camera and save to file
 
 Semantic Segmentation
--------------------- -
+=====================
 
 | **Input: H264 Video**
 | **DL Task: Segmentation**
@@ -369,7 +373,7 @@ GStreamer output pipeline:
 Python/C++ application data-flow for semantic segmentation demo with file input and remote streaming
 
 Single Input Multi Inference
---------------------------- -
+============================
 
 | **Input: H264 Video**
 | **DL Task: Detection, Detection, Classification, Segmentation**
@@ -420,7 +424,7 @@ GStreamer output pipeline:
 Python/C++ application data-flow for single input multi inference
 
 Multi Input Multi Inference
--------------------------- --
+===========================
 
 | **Input: USB Camera, H264 Video**
 | **DL Task: Detection, Detection, Classification, Segmentation**
