c_reference/include/conv1d.h (+13 -18)
```diff
@@ -6,7 +6,7 @@
 /**
  * @brief Model parameters for the 1D Convolution Layer
- * @var W pointer to convolution weights W, size for regular = out_channels*in_channels*kernel_size, size for depth based = out_channels*kernel_size
+ * @var W pointer to convolution weights W, size for regular = out_channels * in_channels * kernel_size, size for depth based = out_channels * kernel_size
  * @var B pointer to the bias vector for the convolution, size = out_channels
```
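The two W layouts above differ only in whether in_channels enters the product. A minimal sizing sketch in C (the dimension values are hypothetical; only the size formulas come from the comment):

```c
#include <stdlib.h>

int main(void) {
  unsigned out_channels = 32, in_channels = 16, kernel_size = 5; /* hypothetical dims */

  /* Regular conv: one in_channels x kernel_size filter per output channel. */
  float *W_regular = malloc(sizeof(float) * out_channels * in_channels * kernel_size);

  /* Depth-based conv: one kernel_size filter per output channel, so
   * in_channels drops out of the size. */
  float *W_depth = malloc(sizeof(float) * out_channels * kernel_size);

  /* Bias: one value per output channel, as documented for B. */
  float *B = malloc(sizeof(float) * out_channels);

  free(W_regular);
  free(W_depth);
  free(B);
  return 0;
}
```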
```diff
@@ -25,7 +25,6 @@
  * @param[in] padding padding applied to the input before the conv is performed.
  *   Note: padding is applied to both the starting and ending of the input, along the time axis
  *   E.g : padding = 3, the input is padded with zeros(for 3 time steps), both before the input_signal(time step 0) and after the input_signal(time step in_time).
- *
  * @param[in] kernel_size kernel size of the conv filter
  * @param[in] params weights, bias and other essential parameters used to describe the layer
  * @param[in] activations an integer to choose the type of activation function.
```
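Since padding is applied to both ends of the time axis, the usual output-length arithmetic follows. A sketch, assuming a stride of 1 (the excerpt shows no stride parameter):

```c
/* Output length along the time axis for a conv with symmetric padding.
 * E.g. in_time = 100, padding = 3, kernel_size = 5 -> out_time = 102. */
unsigned conv1d_out_time(unsigned in_time, unsigned padding, unsigned kernel_size) {
  /* Padded length is in_time + 2 * padding; sliding a kernel_size-wide
   * window one step at a time yields this many valid positions. */
  return in_time + 2 * padding - kernel_size + 1;
}
```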
```diff
@@ -48,7 +47,6 @@
  * @param[in] padding padding applied to the input before the conv is performed.
  *   Note: padding is applied to both the starting and ending of the input, along the time axis
  *   E.g : padding = 3, the input is padded with zeros(for 3 time steps), both before the input_signal(time step 0) and after the input_signal(time step in_time).
- *
  * @param[in] kernel_size kernel size of the conv filter
  * @param[in] params weights, bias and other essential parameters used to describe the layer
  * @param[in] activations an integer to choose the type of activation function.
```
```diff
@@ -88,7 +85,6 @@
  * @param[in] padding padding applied to the input before the conv is performed.
  *   Note: padding is applied to both the starting and ending of the input, along the time axis
  *   E.g : padding = 3, the input is padded with zeros(for 3 time steps), both before the input_signal(time step 0) and after the input_signal(time step in_time).
- *
  * @param[in] kernel_size kernel size of the conv filter
  * @param[in] params weights, bias and other essential parameters used to describe the layer
  * @param[in] activations an integer to choose the type of activation function.
```
```diff
@@ -112,7 +108,6 @@ int conv1d_lr(float *output_signal, unsigned out_time, unsigned out_channels, co
  * @param[in] padding padding applied to the input before the conv is performed.
  *   Note: padding is applied to both the starting and ending of the input, along the time axis
  *   E.g : padding = 3, the input is padded with zeros(for 3 time steps), both before the input_signal(time step 0) and after the input_signal(time step in_time).
- *
  * @param[in] kernel_size kernel size of the conv filter
  * @param[in] params weights, bias and other essential parameters used to describe the layer
  * @param[in] activations an integer to choose the type of activation function.
```
```diff
@@ -121,7 +116,7 @@ int conv1d_lr(float *output_signal, unsigned out_time, unsigned out_channels, co
  * @param[in] padding padding applied to the input before the conv is performed.
  *   Note: padding is applied to both the starting and ending of the input, along the time axis
  *   E.g : padding = 3, the input is padded with zeros(for 3 time steps), both before the input_signal(time step 0) and after the input_signal(time step in_time).
- *
  * @param[in] kernel_size kernel size of the pool filter
  * @param[in] activations an integer to choose the type of activation function.
```
c_reference/include/dscnn.h (+4 -10)
```diff
@@ -7,19 +7,16 @@
 /**
  * @brief Model definition for the 1D Convolution block applied before the RNN
  * @brief sub-layers : batchnorm1d -> conv1d_lr
- *
  * @param[out] output_signal pointer to the final output signal, minimum size = out_time * in_channels. out_time has to be calculated based on the reduction from all the conv and pool layers
  * @param[in] input_signal pointer to the input signal. size = in_time * in_channels
  * @param[in] in_time number of time steps in the input_signal
  * @param[in] in_channels number of input channels
- *
  * @param[in] mean pointer to the mean for the batch normalization, size = in_channels
  * @param[in] var pointer to the variance for the batch normalization, size = in_channels
  * @param[in] affine whether the affine operations are applied
  * @param[in] gamma pointer to the scaling factors for the post-norm affine operation, size = in_channels
  * @param[in] beta pointer to the offsets for the post-norm affine operation, size = in_channels
  * @param[in] in_place in-place computation check for the batchnorm. Storage efficient
- *
  * @param[in] cnn_hidden hidden state/out_channels dimensions for the low-rank CNN. The final channel size of this block
  * @param[in] cnn_padding padding for the low-rank CNN layer. Note: applied to both sides of the input
  * @param[in] cnn_kernel_size kernel size of the low-rank CNN
```
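The mean/var/gamma/beta parameters above describe a standard inference-time batch norm. A minimal sketch of that sub-layer (the function name and the EPS stabilizer are hypothetical, not taken from the header):

```c
#include <math.h>

#define EPS 1e-5f /* hypothetical numerical stabilizer */

/* Normalize each channel across the time axis: y = (x - mean) / sqrt(var + EPS),
 * then apply the gamma/beta affine step only when the affine flag is set. */
void batchnorm1d_sketch(float *out, const float *in,
                        unsigned in_time, unsigned in_channels,
                        const float *mean, const float *var,
                        unsigned affine,
                        const float *gamma, const float *beta) {
  for (unsigned t = 0; t < in_time; t++) {
    for (unsigned c = 0; c < in_channels; c++) {
      float y = (in[t * in_channels + c] - mean[c]) / sqrtf(var[c] + EPS);
      out[t * in_channels + c] = affine ? gamma[c] * y + beta[c] : y;
    }
  }
}
```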
```diff
@@ -42,15 +39,13 @@
  * @param[out] output_signal pointer to the final output signal, minimum size = out_time * in_channels. out_time has to be calculated based on the reduction from all the conv and pool layers
  * @param[in] input_signal pointer to the input signal. size = in_time * in_channels
  * @param[in] in_time number of time steps in the input
  * @param[in] in_channels number of input channels
- *
  * @param[in] mean pointer to the mean for the batch normalization, size = in_channels
  * @param[in] var pointer to the variance for the batch normalization, size = in_channels
  * @param[in] affine whether the affine operations are applied
  * @param[in] gamma pointer to the scaling factors for the post-norm affine operation, size = in_channels
  * @param[in] beta pointer to the offsets for the post-norm affine operation, size = in_channels
  * @param[in] in_place in-place computation of the batchnorm. Storage efficient
- *
  * @param[in] depth_cnn_hidden hidden state/out_channels dimensions for the depth CNN
  * @param[in] depth_cnn_padding padding for the depth CNN layer. Note: applied to both sides of the input to the depth CNN
  * @param[in] depth_cnn_kernel_size kernel size of the depth CNN
```
c_reference/include/rnn_bricked.h

```diff
@@ -42,9 +43,9 @@
  * @param[in] window window length for each brick. For the final brick, the leftover time steps are used (need not be window in length for the last brick)
  * @param[in] hop hop distance between bricks
- * @param[in] rnn function pointer to the rnn
- * @param[in] params pointer to the parameters for the rnn
- * @param[in,out] buffers pointer to buffer for the rnn
- * @param[in] bi_direction determine if the ouput if for a bi-directional rnn.
+ * @param[in] rnn function pointer to the RNN
+ * @param[in] params pointer to the parameters for the RNN
+ * @param[in,out] buffers pointer to the buffer for the RNN
+ * @param[in] bi_direction determines if the output is for a bi-directional RNN
  * @param[in] sample_first_brick determines if the 1st brick should also be sampled
- *   -> if = 0, only the last hidden state of each brick is sampled. out_time = (in_time-window)/hop + 1
- *   -> if = 1, for the 1st brick, we sample every hop index(similar to ::hop). For all the bricks(including the 1st) we sample the final hiddens state. out_time = in_time/hop + 1
+ *   -> if = 0, only the last hidden state of each brick is sampled. out_time = (in_time - window)/hop + 1
+ *   -> if = 1, for the 1st brick, we sample at every hop index (similar to ::hop). For all the bricks (including the 1st) we sample the final hidden state. out_time = in_time/hop + 1
```
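The two out_time formulas above translate directly into code. A sketch (the function name is hypothetical; the formulas come from the comment, with integer division matching the floor they imply):

```c
/* Output length for the forward bricked RNN. */
unsigned bricked_rnn_out_time(unsigned in_time, unsigned window,
                              unsigned hop, int sample_first_brick) {
  if (sample_first_brick)
    return in_time / hop + 1;            /* 1st brick sampled at every hop index */
  return (in_time - window) / hop + 1;   /* one hidden state per brick */
}
```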
```diff
@@ -63,9 +64,9 @@
  * @param[in] window window length for each brick. For the final brick, the leftover time steps are used (need not be window in length for the last brick)
  * @param[in] hop hop distance between bricks
- * @param[in] rnn function pointer to the rnn
- * @param[in] params pointer to the parameters for the rnn
- * @param[in,out] buffers pointer to buffer for the rnn
- * @param[in] bi_direction determine if the ouput if for a bi-directional rnn.
- * @param[in] sample_last_brick determine if the last brick should also be sampled
- *   -> if = 0, only the first(last in reverse) hidden state of each brick is sampled. out_time = (in_time-window)/hop + 1
- *   -> if = 1, for the last brick, we sample every hop index in reverse(similar to ::hop in reverse). For all the bricks(including the last) we sample the first hiddens state(last in reverse). out_time = in_time/hop + 1
+ * @param[in] rnn function pointer to the RNN
+ * @param[in] params pointer to the parameters for the RNN
+ * @param[in,out] buffers pointer to the buffer for the RNN
+ * @param[in] bi_direction determines if the output is for a bi-directional RNN
+ * @param[in] sample_last_brick determines if the last brick should also be sampled
+ *   -> if = 0, only the first (last in reverse) hidden state of each brick is sampled. out_time = (in_time - window)/hop + 1
+ *   -> if = 1, for the last brick, we sample at every hop index in reverse (similar to ::hop in reverse). For all the bricks (including the last) we sample the first hidden state (last in reverse). out_time = in_time/hop + 1
```
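The backward formulas mirror the forward ones, so the same brick arithmetic applies in reverse. A quick numeric check with hypothetical values:

```c
#include <assert.h>

int main(void) {
  /* Hypothetical bricking: 128 time steps, window 32, hop 8. */
  unsigned in_time = 128, window = 32, hop = 8;

  /* Edge brick not sampled: one hidden state per brick. */
  assert((in_time - window) / hop + 1 == 13);

  /* Edge brick also sampled at every hop index. */
  assert(in_time / hop + 1 == 17);
  return 0;
}
```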