  * E.g : padding = 3, the input is padded with zeros(for 3 time steps), both before the input_signal(time step 0) and after the input_signal(time step in_time).
  * @param[in] kernel_size kernel size of the conv filter
  * @param[in] params weights, bias and other essential parameters used to describe the layer
- * @param[in] activations an integer to choose the type of activation function.
+ * @param[in] stride stride length for the layer. input_time_iterator += stride for output_time_iterator +=1
+ * @param[in] activation an integer to choose the type of activation function.
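The padding and stride parameters documented above fix the relationship between in_time and out_time. As a minimal sketch, assuming the standard 1D convolution arithmetic (the formula and helper name below are illustrative, not quoted from the library):

#include <stdio.h>

/* Hypothetical helper: output time steps for a 1D conv/pool layer, given the
   documented semantics: the signal is zero-padded with `padding` time steps
   on both sides, and the input iterator advances by `stride` per output step. */
unsigned conv1d_out_time(unsigned in_time, unsigned padding,
                         unsigned kernel_size, unsigned stride) {
  return (in_time + 2 * padding - kernel_size) / stride + 1;
}

int main(void) {
  /* e.g. in_time = 100, padding = 3, kernel_size = 7, stride = 1 -> 100 */
  printf("%u\n", conv1d_out_time(100, 3, 7, 1));
  return 0;
}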
  * E.g : padding = 3, the input is padded with zeros(for 3 time steps), both before the input_signal(time step 0) and after the input_signal(time step in_time).
  * @param[in] kernel_size kernel size of the conv filter
  * @param[in] params weights, bias and other essential parameters used to describe the layer
- * @param[in] activations an integer to choose the type of activation function.
+ * @param[in] stride stride length for the layer. input_time_iterator += stride for output_time_iterator +=1
+ * @param[in] activation an integer to choose the type of activation function.
- * @brief Model parameters for the 1D Low Rank Convolution Layer
+ * @brief Model parameters for the 1D Low Rank Convolution Layer.
  * @var W1 pointer to the 1st low-rank component of the weights, size = out_channels * rank
  * @var W2 pointer to the 2nd low-rank component of the weights, size for regular = rank * in_channels * kernel_size, size for depthwise = rank * kernel_size
  * @var B pointer to the bias vector for the convolution, shape = [out_channels]
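Given the documented sizes of W1 (out_channels * rank) and W2 (rank * in_channels * kernel_size for the regular case), the effective full-rank kernel is their matrix product. A minimal sketch of that composition, assuming row-major layouts; the function name is illustrative and this is not the library's matmul from utils:

/* Illustrative composition of the low-rank factors for the regular case:
   W_eff (out_channels x (in_channels * kernel_size)) = W1 x W2,
   with W1 of size out_channels x rank and W2 of size
   rank x (in_channels * kernel_size), all row-major. */
void lowrank_compose(const float* W1, const float* W2, float* W_eff,
                     unsigned out_channels, unsigned rank,
                     unsigned in_channels, unsigned kernel_size) {
  const unsigned cols = in_channels * kernel_size;
  for (unsigned o = 0; o < out_channels; o++) {
    for (unsigned c = 0; c < cols; c++) {
      float sum = 0.0f;
      for (unsigned r = 0; r < rank; r++) {
        sum += W1[o * rank + r] * W2[r * cols + c];
      }
      W_eff[o * cols + c] = sum;
    }
  }
}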
  * E.g : padding = 3, the input is padded with zeros(for 3 time steps), both before the input_signal(time step 0) and after the input_signal(time step in_time).
  * @param[in] kernel_size kernel size of the conv filter
  * @param[in] params weights, bias and other essential parameters used to describe the layer
- * @param[in] activations an integer to choose the type of activation function.
+ * @param[in] stride stride length for the layer. input_time_iterator += stride for output_time_iterator +=1
+ * @param[in] activation an integer to choose the type of activation function.
- * @brief Model definition for the 1D Low-Rank Depthwise Convolution Layer
+ * @brief Model definition for the 1D Low-Rank Depthwise Convolution Layer. Currently only for dilation = 1
  * @brief Identical to the non-low-rank form. One modification is the multiplication of the weights handled within the layer
+ * @brief The Weights W1 and W2 are multiplied within the layer using a matmul function from utils. Operation : W1 * W2
  * @param[out] output_signal pointer to the output signal, size = out_time * in_channels
+ * NOTE: out_channels == in_channels for depthwise conv
  * @param[in] out_time number of time steps in the output
  * @param[in] input_signal pointer to the input signal. size = in_time * in_channels
  * @param[in] in_time number of time steps in the input
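For the depthwise case noted above, each output channel is computed from the matching input channel only, which is why out_channels == in_channels. A minimal sketch of that access pattern, assuming a time-major signal layout and an already-composed per-channel kernel K = W1 * W2, with padding and activation omitted; this is illustrative and not the library's implementation:

/* Illustrative depthwise access pattern: output channel `ch` reads only input
   channel `ch`, so out_channels == in_channels. Signals are assumed time-major
   (index = t * in_channels + ch); K holds the composed per-channel kernels
   (in_channels x kernel_size), e.g. the W1 * W2 product. */
void depthwise_conv1d_sketch(float* output_signal, unsigned out_time,
                             const float* input_signal, unsigned in_time,
                             unsigned in_channels, const float* K,
                             unsigned kernel_size, unsigned stride) {
  for (unsigned t_out = 0; t_out < out_time; t_out++) {
    for (unsigned ch = 0; ch < in_channels; ch++) {
      float sum = 0.0f;
      for (unsigned k = 0; k < kernel_size; k++) {
        unsigned t_in = t_out * stride + k; /* padding not handled here */
        if (t_in < in_time) {
          sum += input_signal[t_in * in_channels + ch] * K[ch * kernel_size + k];
        }
      }
      output_signal[t_out * in_channels + ch] = sum;
    }
  }
}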
@@ -110,20 +117,22 @@ int conv1d_lr(float* output_signal, unsigned out_time, unsigned out_channels, co
  * E.g : padding = 3, the input is padded with zeros(for 3 time steps), both before the input_signal(time step 0) and after the input_signal(time step in_time).
  * @param[in] kernel_size kernel size of the conv filter
  * @param[in] params weights, bias and other essential parameters used to describe the layer
- * @param[in] activations an integer to choose the type of activation function.
+ * @param[in] stride stride length for the layer. input_time_iterator += stride for output_time_iterator +=1
+ * @param[in] activation an integer to choose the type of activation function.
  * Note: padding is applied to both the starting and ending of the input, along the time axis
  * E.g : padding = 3, the input is padded with zeros(for 3 time steps), both before the input_signal(time step 0) and after the input_signal(time step in_time).
  * @param[in] kernel_size kernel size of the pool filter
- * @param[in] activations an integer to choose the type of activation function.
+ * @param[in] stride stride length for the layer. input_time_iterator += stride for output_time_iterator +=1
+ * @param[in] activation an integer to choose the type of activation function.
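The pool filter documented above averages kernel_size consecutive time steps per channel, advancing the input iterator by stride per output step. A minimal sketch under the same assumptions as the conv sketches (time-major layout, padding and the activation choice omitted; not the library's routine):

/* Illustrative 1D average pool over the time axis with the documented
   kernel_size/stride semantics. */
void avgpool1d_sketch(float* output_signal, unsigned out_time,
                      const float* input_signal, unsigned in_time,
                      unsigned in_channels,
                      unsigned kernel_size, unsigned stride) {
  for (unsigned t_out = 0; t_out < out_time; t_out++) {
    for (unsigned ch = 0; ch < in_channels; ch++) {
      float sum = 0.0f;
      for (unsigned k = 0; k < kernel_size; k++) {
        unsigned t_in = t_out * stride + k;
        if (t_in < in_time) {
          sum += input_signal[t_in * in_channels + ch];
        }
      }
      output_signal[t_out * in_channels + ch] = sum / kernel_size;
    }
  }
}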
  // This use of an offset is a way to exploit the nature of bi-direction to bypass the concatenation step typically associated with bi-directional passes
  //
  // Constraints
- // For Bi-Directional use, there are 2 constraints
- // 1) (in_time - window) % hop == 0
- // 2) both the window % hop == 0
+ // For Bi-Directional use, there are 3 constraints
+ // 1) (in_time - fwd_window) % hop == 0 and (in_time - bwd_window) % hop == 0
+ // 2) fwd_window % hop == 0 and bwd_window % hop == 0
  // 3) sample_first_brick and sample_last_brick = 1
  //
  // Violation of these constraints can lead to one of the following issues
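The three constraints spelled out in the updated comment can be verified up front, before running the bi-directional bricked pass. A minimal sketch of such a check; the helper name is illustrative and not part of the library:

#include <stdbool.h>

/* Returns true only when all three bi-directional bricking constraints hold. */
bool bricked_bidir_constraints_ok(unsigned in_time, unsigned fwd_window,
                                  unsigned bwd_window, unsigned hop,
                                  int sample_first_brick, int sample_last_brick) {
  if (fwd_window > in_time || bwd_window > in_time)
    return false; /* guard against unsigned underflow below */
  /* 1) (in_time - fwd_window) % hop == 0 and (in_time - bwd_window) % hop == 0 */
  if ((in_time - fwd_window) % hop != 0 || (in_time - bwd_window) % hop != 0)
    return false;
  /* 2) fwd_window % hop == 0 and bwd_window % hop == 0 */
  if (fwd_window % hop != 0 || bwd_window % hop != 0)
    return false;
  /* 3) sample_first_brick and sample_last_brick = 1 */
  return sample_first_brick == 1 && sample_last_brick == 1;
}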