Commit fbaa299: "Fixing indentation"
Parent: caed8ff

File tree: 1 file changed (+38, -38 lines)

dlib/cuda/tensor_tools.h

Lines changed: 38 additions & 38 deletions
@@ -172,49 +172,49 @@ namespace dlib { namespace tt
(Indentation-only change: the hunk re-indents the spec comment below, shown once as it reads after the commit.)

        requires
            - dest does not alias the memory of lhs or rhs
            - The dimensions of lhs and rhs must be compatible for matrix multiplication.
              The specific requirements depend on the mode:

              For CHANNEL_WISE mode (default):
                - Let L == trans_lhs ? trans(mat(lhs)) : mat(lhs)
                - Let R == trans_rhs ? trans(mat(rhs)) : mat(rhs)
                - Let D == mat(dest)
                - D.nr() == L.nr() && D.nc() == R.nc()
                  (i.e. dest must be preallocated and have the correct output dimensions)
                - L.nc() == R.nr()

              For PLANE_WISE mode:
                - lhs.num_samples() == rhs.num_samples() && lhs.k() == rhs.k()
                - If !trans_lhs && !trans_rhs:
                    lhs.nc() == rhs.nr()
                    dest.nr() == lhs.nr() && dest.nc() == rhs.nc()
                - If trans_lhs && !trans_rhs:
                    lhs.nr() == rhs.nr()
                    dest.nr() == lhs.nc() && dest.nc() == rhs.nc()
                - If !trans_lhs && trans_rhs:
                    lhs.nc() == rhs.nc()
                    dest.nr() == lhs.nr() && dest.nc() == rhs.nr()
                - If trans_lhs && trans_rhs:
                    lhs.nr() == rhs.nc()
                    dest.nr() == lhs.nc() && dest.nc() == rhs.nr()

        ensures
            - Performs matrix multiplication based on the specified mode:

              For CHANNEL_WISE mode:
                - performs: dest = alpha*L*R + beta*mat(dest)
                  where L, R, and D are as defined above.

              For PLANE_WISE mode:
                - Performs matrix multiplication for each corresponding 2D plane (nr x nc)
                  in lhs and rhs across all samples and channels.
                - The operation is equivalent to performing the following for each sample
                  and channel:
                    dest[s][k] = alpha * (lhs[s][k] * rhs[s][k]) + beta * dest[s][k]
                  where [s][k] represents the 2D plane for sample s and channel k.

              Note that the PLANE_WISE mode is particularly useful for operations like attention
              mechanisms in neural networks, where you want to perform matrix multiplications
              on 2D planes of 4D tensors while preserving the sample and channel dimensions.
    !*/
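As a sketch of the CHANNEL_WISE contract: a minimal, dlib-free C++ illustration of dest = alpha*L*R + beta*dest, including the "Let L == trans_lhs ? trans(mat(lhs)) : mat(lhs)" convention. The names here (Mat, gemm_2d) are hypothetical, not dlib API; matrices are flat row-major buffers.

```cpp
#include <cassert>
#include <vector>

// Hypothetical illustration of the CHANNEL_WISE contract above, not dlib API.
// A matrix is a flat row-major buffer plus its shape; the trans flag changes
// how it is *read*, mirroring "Let L == trans_lhs ? trans(mat(lhs)) : mat(lhs)".
struct Mat {
    const std::vector<float>* data;
    long nr, nc;   // stored shape
    bool trans;    // read as the transpose?
    long rows() const { return trans ? nc : nr; }
    long cols() const { return trans ? nr : nc; }
    float at(long r, long c) const {
        return trans ? (*data)[c*nc + r] : (*data)[r*nc + c];
    }
};

// dest = alpha*L*R + beta*dest, with the spec's "requires" checks as asserts.
void gemm_2d(float beta, std::vector<float>& dest, long d_nr, long d_nc,
             float alpha, const Mat& L, const Mat& R)
{
    assert(d_nr == L.rows() && d_nc == R.cols());  // dest preallocated correctly
    assert(L.cols() == R.rows());                  // inner dimensions agree
    for (long r = 0; r < d_nr; ++r)
        for (long c = 0; c < d_nc; ++c) {
            float sum = 0;
            for (long k = 0; k < L.cols(); ++k)
                sum += L.at(r, k) * R.at(k, c);
            dest[r*d_nc + c] = alpha*sum + beta*dest[r*d_nc + c];
        }
}
```

With beta == 0 the old contents of dest are ignored; with beta == 1 the product is accumulated into it, which is the usual BLAS-style gemm convention the spec follows.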
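The four trans_lhs/trans_rhs cases in the PLANE_WISE "requires" clause all collapse to a single rule once each plane's shape is swapped when its trans flag is set. A hypothetical sketch (plain C++, not dlib API):

```cpp
#include <cassert>

// Hypothetical helper mirroring the PLANE_WISE dimension requirements: swap a
// plane's shape when its trans flag is set, then check the one universal rule.
struct PlaneShape { long nr, nc; };

bool plane_dims_ok(PlaneShape lhs, bool trans_lhs,
                   PlaneShape rhs, bool trans_rhs,
                   PlaneShape dest)
{
    const long l_nr = trans_lhs ? lhs.nc : lhs.nr;
    const long l_nc = trans_lhs ? lhs.nr : lhs.nc;
    const long r_nr = trans_rhs ? rhs.nc : rhs.nr;
    const long r_nc = trans_rhs ? rhs.nr : rhs.nc;
    // Inner dimensions chain, and dest already has the output shape.
    return l_nc == r_nr && dest.nr == l_nr && dest.nc == r_nc;
}
```

For example, with trans_lhs && !trans_rhs this reproduces the spec's lhs.nr() == rhs.nr(), dest.nr() == lhs.nc(), and dest.nc() == rhs.nc() case.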
// ----------------------------------------------------------------------------------------
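The PLANE_WISE "ensures" clause can be sketched as explicit loops: every sample s and channel k gets an independent 2D matrix product. This is illustrative plain C++ under assumed names (Plane, Tensor, gemm_plane_wise), not dlib's implementation, and it handles only the no-transpose case:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch of PLANE_WISE semantics: for each sample s and channel k,
//   dest[s][k] = alpha * (lhs[s][k] * rhs[s][k]) + beta * dest[s][k]
// where each plane is a flat row-major (nr x nc) buffer.
using Plane  = std::vector<float>;
using Tensor = std::vector<std::vector<Plane>>;  // [sample][channel][r*nc + c]

void gemm_plane_wise(float beta, Tensor& dest, long d_nr, long d_nc,
                     float alpha,
                     const Tensor& lhs, long l_nc,
                     const Tensor& rhs, long r_nc)
{
    assert(lhs.size() == rhs.size());            // same num_samples()
    for (std::size_t s = 0; s < lhs.size(); ++s) {
        assert(lhs[s].size() == rhs[s].size());  // same k()
        for (std::size_t k = 0; k < lhs[s].size(); ++k)
            for (long r = 0; r < d_nr; ++r)
                for (long c = 0; c < d_nc; ++c) {
                    float sum = 0;
                    for (long i = 0; i < l_nc; ++i)
                        sum += lhs[s][k][r*l_nc + i] * rhs[s][k][i*r_nc + c];
                    dest[s][k][r*d_nc + c] =
                        alpha*sum + beta*dest[s][k][r*d_nc + c];
                }
    }
}
```

This per-plane view is what makes the mode convenient for attention: a [batch][head] pair of (seq x dim) planes multiplies independently of every other pair.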
