
Commit bd1b723

Minor Fixes (indentation, table widths, etc.) according to review
1 parent b5342ce commit bd1b723

25 files changed, +195 −196 lines changed

doc/documents/MLI_helpers/MLI_helpers.rst

Lines changed: 2 additions & 2 deletions
@@ -1,9 +1,9 @@
 .. _mli_helpers:

-MLI helpers
+MLI Helpers
 ===========

-The MLI helpers are a set of utility functions for getting information from data
+The MLI Helpers are a set of utility functions for getting information from data
 structures and performing various operations on it.

 .. toctree::

doc/documents/MLI_helpers/convert_tensor.rst

Lines changed: 2 additions & 1 deletion
@@ -49,7 +49,8 @@ Parameters
 ''''''''''

 .. table:: Kernel Interface Parameters
-
+   :widths: 20,130
+
 +-----------------------+-----------------------+
 | **Parameters**        | **Description**       |
 +=======================+=======================+

doc/documents/MLI_helpers/count_no_elements.rst

Lines changed: 2 additions & 1 deletion
@@ -42,7 +42,8 @@ Parameters
 ''''''''''

 .. table:: Kernel Interface Parameters
-
+   :widths: 20,130
+
 +-----------------------+-----------------------+
 | **Parameters**        | **Description**       |
 +=======================+=======================+

doc/documents/MLI_helpers/get_basic_elem_size.rst

Lines changed: 2 additions & 1 deletion
@@ -23,7 +23,8 @@ Parameters
 ''''''''''

 .. table:: Kernel Interface Parameters
-
+   :widths: 20,130
+
 +-----------------------+-----------------------+
 | **Parameters**        | **Description**       |
 +=======================+=======================+

doc/documents/MLI_helpers/point_to_sub_tensor.rst

Lines changed: 6 additions & 7 deletions
@@ -38,13 +38,12 @@ Definition

 .. code:: c

-   typedef struct {
-      uint32_t
-      start_coord[MLI_MAX_RANK ];
-      uint8_t coord_num;
-      uint8_t first_out_dim_size;
-   }
-   mli_point_to_subtsr_cfg;
+   typedef struct {
+      uint32_t
+      start_coord[MLI_MAX_RANK];
+      uint8_t coord_num;
+      uint8_t first_out_dim_size;
+   } mli_point_to_subtsr_cfg;
 ..

 Parameters
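
As a quick aside for this hunk, here is a minimal sketch of how the ``mli_point_to_subtsr_cfg`` structure above might be filled and used. The helper name ``mli_hlp_point_to_subtensor``, the ``mli_api.h`` header, and the field semantics implied by the comments are assumptions drawn from the surrounding MLI documentation, not something this commit adds:

.. code:: c

   #include "mli_api.h"  /* assumed MLI header declaring mli_tensor and the helper below */

   /* Point at a sub-tensor of a larger feature map: start at one coordinate of
      the first dimension and keep two entries of it in the output sub-tensor. */
   static mli_status get_slice(const mli_tensor *feature_map, mli_tensor *slice)
   {
       mli_point_to_subtsr_cfg cfg = {
           .start_coord = {2},       /* starting coordinate for dimension 0 */
           .coord_num = 1,           /* only one coordinate is provided above */
           .first_out_dim_size = 2,  /* size of the first dimension of the result */
       };
       return mli_hlp_point_to_subtensor(feature_map, &cfg, slice);
   }
..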

doc/documents/MLI_kernels/comm_basic_rnn.rst

Lines changed: 76 additions & 5 deletions
@@ -108,9 +108,9 @@ one step of kernel (M or L*M elements for stacked weights matrix).
 For the other modes (one-to-one or batch-to-batch), kernel does not
 use the intermediate result tensor and this field might not be
 initialized. For more information about configuration structure, see
-:ref:`fn_conf_lstm`.
+:ref:`fn_conf_brnn`.

-.. note::
+.. caution::
    Ensure that you allocate memory for all tensors (including
    intermediate results tensor) without overlaps.


@@ -124,8 +124,79 @@ initialized. For more information about configuration structure, see
 Function Configuration Structure
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Basic RNN cell kernel shares configuration structure with LSTM cell.
-For more information see :ref:`fn_conf_lstm`.
+Definition
+''''''''''
+.. code:: c
+
+   typedef struct {
+      mli_rnn_mode mode;
+      mli_rnn_out_activation act;
+      mli_tensor *ir_tsr;
+   } mli_rnn_cell_cfg;
+..
+
+Parameters
+''''''''''
+
+.. table:: Function Configuration Parameters
+   :widths: 20,80
+
++-----------------------+-----------------------+
+| **Fields**            | **Description**       |
++=======================+=======================+
+| ``mode``              | RNN processing mode   |
+|                       | (enumeration)         |
++-----------------------+-----------------------+
+| ``act``               | RNN output            |
+|                       | activation type       |
+|                       | (enumeration)         |
++-----------------------+-----------------------+
+| ``ir_tsr``            | Pointer to tensor for |
+|                       | holding intermediate  |
+|                       | results. Tensor must  |
+|                       | contain valid data    |
+|                       | and capacity fields.  |
+|                       | Field is modified by  |
+|                       | kernels.              |
++-----------------------+-----------------------+
+
+.. _mli_rnn_mode_val_desc:
+.. table:: mli_rnn_mode Values Description
+   :widths: 20,80
+
++-----------------------------------+-----------------------------------+
+| **Value**                         | **Field Description**             |
++===================================+===================================+
+| ``RNN_ONE_TO_ONE``                | Process input tensor as a single  |
+|                                   | input frame.                      |
++-----------------------------------+-----------------------------------+
+| ``RNN_BATCH_TO_BATCH``            | Process input tensor as a         |
+|                                   | sequence of frames to produce a   |
+|                                   | sequence of outputs.              |
++-----------------------------------+-----------------------------------+
+| ``RNN_BATCH_TO_LAST``             | Process input tensor as a         |
+|                                   | sequence of frames to produce     |
+|                                   | single (last) outputs.            |
++-----------------------------------+-----------------------------------+
+
+
+.. _mli_rnn_out_activation_val_desc:
+.. table:: mli_rnn_out_activation Values Description
+   :widths: 20,100
+
++-----------------------------------+-----------------------------------+
+| **Value**                         | **Field Description**             |
++===================================+===================================+
+| ``RNN_ACT_TANH``                  | Hyperbolic tangent activation     |
+|                                   | function.                         |
++-----------------------------------+-----------------------------------+
+| ``RNN_ACT_SIGM``                  | Logistic (sigmoid) activation     |
+|                                   | function.                         |
++-----------------------------------+-----------------------------------+
+| ``RNN_ACT_NONE``                  | No activation.                    |
++-----------------------------------+-----------------------------------+
+
+\

 .. _api_brnn:


@@ -233,7 +304,7 @@ function:
 - The input tensor has the following restrictions:

   - For ``RNN_ONE_TO_ONE`` mode, the total number of input and previous
-    output tensors (N+M) must be equal to the last dimension of
+    output tensor elements (N+M) must be equal to the last dimension of
     Weights tensor.

   - For ``RNN_BATCH_TO_BATCH`` and ``RNN_BATCH_TO_LAST`` modes, first
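
Because this hunk moves the full configuration structure onto the Basic RNN page, a minimal sketch of filling ``mli_rnn_cell_cfg`` for batch-to-last processing may help when reading it. The ``mli_api.h`` header name and the helper function below are illustrative assumptions; the kernel call itself is omitted since its full prototype is not part of this hunk:

.. code:: c

   #include "mli_api.h"  /* assumed MLI header providing mli_tensor and mli_rnn_cell_cfg */

   /* Configure a Basic RNN cell to consume a sequence of input frames and keep
      only the last output (RNN_BATCH_TO_LAST) with tanh output activation. */
   static void setup_rnn_cfg(mli_rnn_cell_cfg *cfg, mli_tensor *ir_tensor)
   {
       cfg->mode = RNN_BATCH_TO_LAST;  /* sequence in, single (last) output out */
       cfg->act = RNN_ACT_TANH;        /* hyperbolic tangent output activation */

       /* Batch-to-last mode uses the intermediate-results tensor, so it must
          carry valid data and capacity fields and must not overlap the other
          tensors (allocation not shown here). */
       cfg->ir_tsr = ir_tensor;
   }
..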

doc/documents/MLI_kernels/comm_fully_connected.rst

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ Ensure that the weight for this kernel is a two-dimensional tensor
 input tensor is not considered and only total number of elements is
 considered. Kernel outputs a one-dimensional tensor of shape [M].

-.. note::
+.. caution::
    Ensure that input and output
    tensors do not point to
    overlapped memory regions,

doc/documents/MLI_kernels/comm_lstm.rst

Lines changed: 6 additions & 78 deletions
@@ -109,7 +109,7 @@ The first [M, *M+N]* sub-tensor of weights is applied to the input
 gate, the second, to new cell candidates, the third, to the forget
 gate, and the last, to the output gate.

-.. note::
+.. caution::
    - Ensure that you keep the same
      order of sub-tensors for bias
      tensor. For more information
@@ -179,7 +179,7 @@ Dense part of calculations uses intermediate tensor for result, and
 consequently output and previous output tensors might use the same
 memory if it is acceptable to rewrite previous output data.

-.. note::
+.. caution::
    Ensure that you allocate memory
    for the rest of the tensors
    (including intermediate results
@@ -192,79 +192,8 @@ memory if it is acceptable to rewrite previous output data.
 Function Configuration Structure
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Definition
-''''''''''
-.. code:: c
-
-   typedef struct {
-      mli_rnn_mode mode;
-      mli_rnn_out_activation act;
-      mli_tensor *ir_tsr;
-   } mli_rnn_cell_cfg;
-..
-
-Parameters
-''''''''''
-
-.. table:: Function Configuration Parameters
-   :widths: 20,80
-
-+-----------------------+-----------------------+
-| **Fields**            | **Description**       |
-+=======================+=======================+
-| ``mode``              | LSTM processing mode  |
-|                       | (enumeration)         |
-+-----------------------+-----------------------+
-| ``act``               | LSTM output           |
-|                       | activation type       |
-|                       | (enumeration)         |
-+-----------------------+-----------------------+
-| ``ir_tsr``            | Pointer to tensor for |
-|                       | holding intermediate  |
-|                       | results. Tensor must  |
-|                       | contain valid data    |
-|                       | and capacity fields.  |
-|                       | Field is modified by  |
-|                       | kernels.              |
-+-----------------------+-----------------------+
-
-.. _mli_rnn_mode_val_desc:
-.. table:: mli_rnn_mode Values Description
-   :widths: 20,80
-
-+-----------------------------------+-----------------------------------+
-| **Value**                         | **Field Description**             |
-+===================================+===================================+
-| ``RNN_ONE_TO_ONE``                | Process input tensor as a single  |
-|                                   | input frame.                      |
-+-----------------------------------+-----------------------------------+
-| ``RNN_BATCH_TO_BATCH``            | Process input tensor as a         |
-|                                   | sequence of frames to produce a   |
-|                                   | sequence of outputs.              |
-+-----------------------------------+-----------------------------------+
-| ``RNN_BATCH_TO_LAST``             | Process input tensor as a         |
-|                                   | sequence of frames to produce     |
-|                                   | single (last) outputs.            |
-+-----------------------------------+-----------------------------------+
-
-
-.. _mli_rnn_out_activation_val_desc:
-.. table:: mli_rnn_out_activation Values Description
-   :widths: 20,100
-
-+-----------------------------------+-----------------------------------+
-| **Value**                         | **Field Description**             |
-+===================================+===================================+
-| ``RNN_ACT_TANH``                  | Hyperbolic tangent activation     |
-|                                   | function.                         |
-+-----------------------------------+-----------------------------------+
-| ``RNN_ACT_SIGM``                  | Logistic (sigmoid) activation     |
-|                                   | function.                         |
-+-----------------------------------+-----------------------------------+
-| ``RNN_ACT_NONE``                  | No activation.                    |
-+-----------------------------------+-----------------------------------+
-
-\
+LSTM cell kernel shares configuration structure with Basic RNN cell.
+For more information see :ref:`fn_conf_brnn`.

 .. _api_lstm:


@@ -276,8 +205,7 @@ Prototype

 .. code:: c

-   mli_status mli_krn_lstm_cell_<data_type>
-   [_specialization](
+   mli_status mli_krn_lstm_cell_<data_type>[_specialization](
       const mli_tensor *in,
      const mli_tensor *prev_out,
      const mli_tensor *weights,
@@ -369,7 +297,7 @@ function:
 - The input tensor has the following restrictions:

   - For ``RNN_ONE_TO_ONE`` mode, the total number of input and previous
-    output tensors (N+M) must be equal to the last dimension of the
+    output tensor elements (N+M) must be equal to the last dimension of the
     weights tensor

   - For ``RNN_BATCH_TO_BATCH`` and ``RNN_BATCH_TO_LAST`` modes, the first
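
To tie this page back to the Basic RNN one, a minimal sketch of reusing ``mli_rnn_cell_cfg`` for an LSTM cell follows; the ``mli_api.h`` header name and the helper below are illustrative assumptions, not part of the commit. The comments restate the memory rules quoted in the hunks above:

.. code:: c

   #include "mli_api.h"  /* assumed MLI header providing mli_tensor and mli_rnn_cell_cfg */

   /* One-to-one LSTM configuration with sigmoid output activation. */
   static void setup_lstm_cfg(mli_rnn_cell_cfg *cfg, mli_tensor *ir_tensor)
   {
       cfg->mode = RNN_ONE_TO_ONE;  /* single input frame, single output */
       cfg->act = RNN_ACT_SIGM;     /* logistic (sigmoid) output activation */

       /* The dense part of the LSTM writes its result here, which is why the
          output and previous-output tensors may share memory (when overwriting
          the old output is acceptable) while this tensor must not overlap them. */
       cfg->ir_tsr = ir_tensor;
   }
..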

doc/documents/MLI_kernels/convolution_2d.rst

Lines changed: 2 additions & 3 deletions
@@ -63,7 +63,7 @@ following types of ReLU activations are supported (for more info see

 - RELU_6

-.. note::
+.. caution::
    Ensure that input and output
    tensors do not point to
    overlapped memory regions,
@@ -204,7 +204,6 @@ all specializations for the primitive.
 +-------------------------------------+-----------------------------------+
 | ``mli_krn_conv2d_chw_fx16``         | Switching function; 16bit FX      |
 |                                     | tensors;                          |
-|                                     |                                   |
 |                                     | Delegates calculations to         |
 |                                     | suitable specialization or        |
 |                                     | generic function.                 |
@@ -236,7 +235,7 @@ all specializations for the primitive.
 |                                     | input and output)                 |
 +-------------------------------------+-----------------------------------+

-.. note::
+.. attention::
    \*For specialization functions, backward compatibility between different releases cannot be guaranteed. The general functions call the available specializations when possible.

 Conditions for Applying the Function

doc/documents/MLI_kernels/convolution_depthwise.rst

Lines changed: 2 additions & 3 deletions
@@ -39,7 +39,7 @@ following types of ReLU activations are supported (for more info see

 - RELU_6

-.. note::
+.. caution::
    Ensure that input and output
    tensors do not point to
    overlapped memory regions,
@@ -136,7 +136,6 @@ and inputs layout beforehand by permute primitive (see :ref:`permute`).
 +-----------------------------------------------+-----------------------------------+
 | ``mli_krn_depthwise_conv2d_chw_fx16``         | Switching function (see           |
 |                                               | :ref:`fns`); 16bit FX tensors;    |
-|                                               |                                   |
 |                                               | Delegates calculations to         |
 |                                               | suitable specialization or        |
 |                                               | generic function.                 |
@@ -158,7 +157,7 @@ and inputs layout beforehand by permute primitive (see :ref:`permute`).
 |                                               | FX tensors                        |
 +-----------------------------------------------+-----------------------------------+

-.. note::
+.. attention::
    \*For specialization
    functions, backward
    compatibility between
