
Commit 294d9cf

fix(edgeai): Standardize RST section headers to follow sphinx guidelines
Update all rst files under source/edgeai to conform to sphinx guidelines for section headers [0]

[0] - https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html#sections

Signed-off-by: Chirag Shilwant <[email protected]>
1 parent 68e02a8 commit 294d9cf
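
For context, the header convention applied in the hunks below is: '#' with overline for the document title, '*' with overline for top-level sections, and a plain '=' underline for subsections. A minimal reStructuredText sketch of that hierarchy (the heading names are placeholders, not taken from the actual files):

.. Placeholder headings only; this sketch illustrates the adornment levels used in this commit

##############
Document Title
##############

***************
Section Heading
***************

Subsection Heading
==================

In reStructuredText the over/underline must be at least as long as the heading text, which is why several hunks below also adjust the length of the adornment lines to match their headings.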

8 files changed: +100 -74 lines changed


source/edgeai/configuration_file.rst

Lines changed: 18 additions & 17 deletions
@@ -1,8 +1,8 @@
 .. _pub_edgeai_configuration:
 
-========================
+########################
 Configuring applications
-========================
+########################
 
 The demo config file uses YAML format to define input sources, models, outputs
 and finally the flows which defines how everything is connected. Config files
@@ -19,8 +19,9 @@ Config file is divided in 4 sections:
 #. Outputs
 #. Flows
 
+******
 Inputs
-======
+******
 
 The input section defines a list of supported inputs like camera, video files etc.
 Their properties like shown below.
@@ -55,7 +56,7 @@ Below are the details of most commonly used inputs.
 .. _pub_edgeai_camera_sources:
 
 Camera sources (v4l2)
----------------------
+=====================
 
 **v4l2src** GStreamer element is used to capture frames from camera sources
 which are exposed as v4l2 devices. In Linux, there are many devices which are
@@ -109,7 +110,7 @@ camera to allow GStreamer to negotiate the format. ``rggb`` for sensor
 that needs ISP.
 
 Video sources
--------------
+=============
 
 H.264 and H.265 encoded videos can be provided as input sources to the demos.
 Sample video files are provided under :file:`/opt/edgeai-test-data/videos/`
@@ -135,7 +136,7 @@ By default the format is set to ``auto`` which will then use the GStreamer
 bin ``decodebin`` instead.
 
 Image sources
--------------
+=============
 
 JPEG compressed images can be provided as inputs to the demos. A sample set of
 images are provided under :file:`/opt/edgeai-test-data/images`. The names of the
@@ -152,7 +153,7 @@ at the fps specified by the user.
 framerate: 1
 
 RTSP sources
-------------
+============
 
 H.264 encoded video streams either coming from a RTSP compliant IP camera or
 via RTSP server running on a remote PC can be provided as inputs to the demo.
@@ -165,8 +166,9 @@ via RTSP server running on a remote PC can be provided as inputs to the demo.
 height: 720
 framerate: 30
 
+******
 Models
-======
+******
 
 The model section defines a list of models that are used in the demo. Path to
 the model directory is a required argument for each model and rest are optional
@@ -200,9 +202,9 @@ Below are some of the use case specific properties:
 The content of the model directory and its structure is discussed in detail in
 :ref:`pub_edgeai_import_custom_models`
 
-
+*******
 Outputs
-=======
+*******
 
 The output section defines a list of supported outputs.
 
@@ -239,7 +241,7 @@ All supported outputs are listed in template config file.
 Below are the details of most commonly used outputs
 
 Display sink (kmssink)
-----------------------
+======================
 
 When you have only one display connected to the SK, kmssink will try to use
 it for displaying the output buffers. In case you have connected multiple
@@ -261,7 +263,7 @@ Following command finds out the connected displays available to use.
 Configure the required connector ID in the output section of the config file.
 
 Video sinks
------------
+===========
 The post-processed outputs can be encoded in H.264 format and stored on disk.
 Please specify the location of the video file in the configuration file.
 
@@ -273,7 +275,7 @@ Please specify the location of the video file in the configuration file.
 height: 1080
 
 Image sinks
------------
+===========
 The post-processed outputs can be stored as JPEG compressed images.
 Please specify the location of the image files in the configuration file.
 The images will be named sequentially and incrementally as shown.
@@ -286,7 +288,7 @@ The images will be named sequentially and incrementally as shown.
 height: 1080
 
 Remote sinks
-------------
+============
 Post-processed frames can be encoded as jpeg or h264 frames and send as udp packets
 to a port. Please specify the sink as remote in the configuration file. The udp port and
 host to send packets to can be defined. If not, default port is 8081 and host
@@ -310,9 +312,9 @@ on localhost (127.0.0.1) and can be used to view the frames remotely.
 
 /opt/edgeai-gst-apps# node scripts/remote_streaming/server.js
 
-
+*****
 Flows
-=====
+*****
 
 The flows section defines how inputs, models and outputs are connected.
 Multiple flows can be defined to achieve multi input, multi inference as shown
@@ -338,7 +340,6 @@ for optimization. Along with input, models and outputs it is required to define
 plane. This is needed because multiple inference outputs can be rendered to same
 output (Ex: Display).
 
-
 GStreamer plugins
 =================
 
source/edgeai/docker_environment.rst

Lines changed: 21 additions & 12 deletions
@@ -1,8 +1,8 @@
 .. _pub_edgeai_docker_env:
 
-==================
+##################
 Docker Environment
-==================
+##################
 
 Docker is a set of "platform as a service" products that uses the OS-level
 virtualization to deliver software in packages called containers.
@@ -16,8 +16,9 @@ additional 3rd party applications and packages as required.
 
 .. _pub_edgeai_docker_build_ontarget:
 
+*********************
 Building Docker image
-======================
+*********************
 
 The `docker/Dockerfile` in the edgeai-gst-apps repo describes the recipe for
 creating the Docker container image. Feel free to review and update it to
@@ -40,8 +41,9 @@ Initiate the Docker image build as shown,
 
 /opt/edgeai-gst-apps/docker# ./docker_build.sh
 
+****************************
 Running the Docker container
-============================
+****************************
 
 Enter the Docker session as shown,
 
@@ -83,16 +85,17 @@ access camera, display and other hardware accelerators the SoC has to offer.
 
 .. _pub_edgeai_docker_additional_commands:
 
+**************************
 Additional Docker commands
-==========================
+**************************
 
 .. note::
 
 This section is provided only for additional reference and not required to
 run out-of-box demos
 
-**Commit Docker container**
-
+Commit Docker container
+=======================
 Generally, containers have a short life cycle. If the container has any local
 changes it is good to save the changes on top of the existing Docker image.
 When re-running the Docker image, the local changes can be restored.
@@ -111,7 +114,8 @@ the container.
 For more information refer:
 `Commit Docker image <https://docs.docker.com/engine/reference/commandline/commit/>`_
 
-**Save Docker Image**
+Save Docker Image
+=================
 
 Docker image can be saved as tar file by using the command below:
 
@@ -122,7 +126,8 @@ Docker image can be saved as tar file by using the command below:
 For more information refer here.
 `Save Docker image <https://docs.docker.com/engine/reference/commandline/save/>`_
 
-**Load Docker image**
+Load Docker image
+=================
 
 Load a previously saved Docker image using the command below:
 
@@ -133,7 +138,8 @@ Load a previously saved Docker image using the command below:
 For more information refer here.
 `Load Docker image <https://docs.docker.com/engine/reference/commandline/load/>`_
 
-**Remove Docker image**
+Remove Docker image
+===================
 
 Docker image can be removed by using the command below:
 
@@ -149,7 +155,8 @@ For more information refer
 `rmi reference <https://docs.docker.com/engine/reference/commandline/rmi/>`_ and
 `Image prune reference <https://docs.docker.com/engine/reference/commandline/image_prune/>`_
 
-**Remove Docker container**
+Remove Docker container
+=======================
 
 Docker container can be removed by using the command below:
 
@@ -220,7 +227,9 @@ current location is the desired location then exit this procedure.
 6. Anytime the SD card is updated with a new targetfs, steps (1), (3), and
 (4) need to be followed.
 
-**Additional references**
+*********************
+Additional references
+*********************
 
 | https://docs.docker.com/engine/reference/commandline/images/
 | https://docs.docker.com/engine/reference/commandline/ps/

source/edgeai/edgeai_dataflows.rst

Lines changed: 16 additions & 14 deletions
@@ -1,8 +1,8 @@
 .. _pub_edgeai_dataflows:
 
-=================
+#################
 Edge AI dataflows
-=================
+#################
 
 The reference edgeai application at a high level can be split into 3 parts,
 
@@ -16,11 +16,12 @@ GStreamer launch strings that is generated. User can interact with the applicati
 
 .. _pub_edgeai_optiflow_data_flow:
 
+********
 OpTIFlow
-====================
+********
 
 Image Classification
---------------------
+====================
 
 | **Input: USB Camera**
 | **DL Task: Classification**
@@ -59,7 +60,7 @@ GStreamer pipeline:
 OpTIFlow pipeline for image classification demo with USB camera and display
 
 Object Detection
---------------------
+================
 
 | **Input: IMX219 Camera**
 | **DL Task: Detection**
@@ -100,7 +101,7 @@ GStreamer pipeline:
 OpTIFlow pipeline for object detection demo with IMX219 camera and save to file
 
 Semantic Segmentation
---------------------
+=====================
 
 | **Input: H264 Video**
 | **DL Task: Segmentation**
@@ -140,7 +141,7 @@ GStreamer pipeline:
 OpTIFlow pipeline for semantic segmentation demo with file input and remote streaming
 
 Single Input Multi Inference
-----------------------------
+============================
 
 | **Input: H264 Video**
 | **DL Task: Detection, Detection, Classification, Segmentation**
@@ -187,7 +188,7 @@ GStreamer pipeline:
 OpTIFlow pipeline for single input multi inference
 
 Multi Input Multi Inference
-----------------------------
+===========================
 
 | **Input: USB Camera, H264 Video**
 | **DL Task: Detection, Detection, Classification, Segmentation**
@@ -235,11 +236,12 @@ GStreamer pipeline:
 
 OpTIFlow pipeline for multi input multi inference
 
+***************
 Python/C++ apps
-======================
+***************
 
 Image Classification
---------------------
+====================
 
 | **Input: USB Camera**
 | **DL Task: Classification**
@@ -282,7 +284,7 @@ GStreamer output pipeline:
 Python/C++ application data-flow for image classification demo with USB camera and display
 
 Object Detection
---------------------
+================
 
 | **Input: IMX219 Camera**
 | **DL Task: Detection**
@@ -326,7 +328,7 @@ GStreamer output pipeline:
 Python/C++ application data-flow for object detection demo with IMX219 camera and save to file
 
 Semantic Segmentation
---------------------
+=====================
 
 | **Input: H264 Video**
 | **DL Task: Segmentation**
@@ -369,7 +371,7 @@ GStreamer output pipeline:
 Python/C++ application data-flow for semantic segmentation demo with file input and remote streaming
 
 Single Input Multi Inference
-----------------------------
+============================
 
 | **Input: H264 Video**
 | **DL Task: Detection, Detection, Classification, Segmentation**
@@ -420,7 +422,7 @@ GStreamer output pipeline:
 Python/C++ application data-flow for single input multi inference
 
 Multi Input Multi Inference
-----------------------------
+===========================
 
 | **Input: USB Camera, H264 Video**
 | **DL Task: Detection, Detection, Classification, Segmentation**
