- * @brief Model definition for the 1D Convolution sub-block applied before the RNN
- * @brief sub-layers : BatchNorm1d -> Conv1D_LR
- *
- * @param[out] output_signal pointer to the final output signal, minimum size = out_T * in_channels. out_T has to be calculated based on the reduction from all the conv and pool layers
- * @param[in] input_signal pointer to the input signal. size = in_T * in_channels
- * @param[in] in_T number of time steps in the input
- * @param[in] in_channels number of input channels. The output will have the same number of channels
-
- * @param[in] mean pointer to the mean for the batch normalization, size = in_channels
- * @param[in] var pointer to the variance for the batch normalization, size = in_channels
- * @param[in] affine whether the affine operations are applied
- * @param[in] gamma pointer to the scaling factors for the post-norm affine operation, size = in_channels
- * @param[in] beta pointer to the scalar offsets for the post-norm affine operation, size = in_channels
- * @param[in] in_place in place computation of the batchnorm. Storage efficient
- *
- * @param[in] cnn_hidden hidden state/out_channels dimensions for the CNN
- * @param[in] cnn_padding padding for the CNN layer. Note: applied to both sides of the input
- * @param[in] cnn_kernel_size kernel size of the CNN
- * @param[in] cnn_params weights, bias and other essential parameters used to describe the CNN
- * @param[in] cnn_activations an integer to choose the type of activation function.
+ * @brief Model definition for the 1D Convolution block applied before the RNN
+ * @brief sub-layers : batchnorm1d -> conv1d_lr
+ * @param[out] output_signal pointer to the final output signal, minimum size = out_time * in_channels. out_time has to be calculated based on the reduction from all the conv and pool layers
+ * @param[in] input_signal pointer to the input signal. size = in_time * in_channels
+ * @param[in] in_time number of time steps in the input_signal
+ * @param[in] in_channels number of input channels
+ * @param[in] mean pointer to the mean for the batch normalization, size = in_channels. Pass NULL/0 for affine_config = 2
+ * @param[in] var pointer to the variance for the batch normalization, size = in_channels. Pass NULL/0 for affine_config = 2
+ * @param[in] affine_config whether the affine operations are applied
+ * if affine_config = 0, then only mean and var are used
+ * if affine_config = 1, then mean, var, gamma and beta are used for the final computation.
+ * if affine_config = 2, then only the gamma and beta are used. gamma = original_gamma/sqrt(var), beta = original_beta - gamma * mean/sqrt(var)
+ * Note: Use affine_config = 2 for faster calculations. The new gamma and beta would need to be pre-computed, stored and passed
+ * @param[in] gamma pointer to the scaling factors for the post-norm affine operation, size = in_channels. Pass NULL/0 for affine_config = 0
+ * @param[in] beta pointer to the offsets for the post-norm affine operation, size = in_channels. Pass NULL/0 for affine_config = 0
+ * @param[in] in_place in-place computation check for the batchnorm. Storage efficient
+ * @param[in] cnn_hidden hidden state/out_channels dimensions for the low-rank CNN. The final channel size of this block
+ * @param[in] cnn_padding padding for the low-rank CNN layer. Note: applied to both sides of the input
+ * @param[in] cnn_kernel_size kernel size of the low-rank CNN
+ * @param[in] cnn_params weights, bias and other essential parameters for the low-rank CNN
+ * @param[in] cnn_stride stride factor for the low-rank CNN
+ * @param[in] cnn_activation an integer to choose the type of activation function.
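The affine_config = 2 fast path above depends on the caller folding the batchnorm statistics into the affine parameters ahead of time, using the formulas quoted in the comment. Below is a minimal sketch of that pre-computation; the function name and the eps stabilizer are illustrative assumptions, not part of the repo's API.

```c
#include <math.h>

// Hypothetical helper: fold batchnorm statistics into the affine parameters
// so the runtime kernel only computes gamma * x + beta per channel
// (the affine_config = 2 fast path). All arrays have length in_channels.
// eps is the usual stabilizer added to the variance (an assumption; the
// original comment writes sqrt(var) without it).
void fold_batchnorm_params(const float* mean, const float* var,
                           const float* original_gamma,
                           const float* original_beta,
                           unsigned in_channels, float eps,
                           float* gamma, float* beta) {
  for (unsigned c = 0; c < in_channels; c++) {
    gamma[c] = original_gamma[c] / sqrtf(var[c] + eps); // gamma = original_gamma / sqrt(var)
    beta[c] = original_beta[c] - gamma[c] * mean[c];    // beta = original_beta - original_gamma * mean / sqrt(var)
  }
}
```

With the folded values, the kernel can skip the mean/var work entirely, which is why the Note recommends affine_config = 2 and why mean and var can be passed as NULL/0 in that mode.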
- * @param[out] output_signal pointer to the final output signal, minimum size = out_T * in_channels. out_T has to be calculated based on the reduction from all the conv and pool layers
- * @param[in] input_signal pointer to the input signal. size = in_T * in_channels
- * @param[in] in_T number of time steps in the input
- * @param[in] in_channels number of input channels. The output will have the same number of channels
-
- * @param[in] mean pointer to the mean for the batch normalization, size = in_channels
- * @param[in] var pointer to the variance for the batch normalization, size = in_channels
- * @param[in] affine whether the affine operations are applied
- * @param[in] gamma pointer to the scaling factors for the post-norm affine operation, size = in_channels
- * @param[in] beta pointer to the scalar offsets for the post-norm affine operation, size = in_channels
- * @param[in] in_place in place computation of the batchnorm. Storage efficient
- *
- * @param[in] depth_cnn_hidden hidden state/out_channels dimensions for the depth CNN
- * @param[in] depth_cnn_padding padding for the depth CNN layer. Note: applied to both sides of the input
+ * @brief Model definition for the 1D Convolution block applied after the RNN
+ * @param[out] output_signal pointer to the final output signal, minimum size = out_time * in_channels. out_time has to be calculated based on the reduction from all the conv and pool layers
+ * @param[in] input_signal pointer to the input signal. size = in_time * in_channels
+ * @param[in] in_time number of time steps in the input
+ * @param[in] in_channels number of input channels
+ * @param[in] mean pointer to the mean for the batch normalization, size = in_channels. Pass NULL/0 for affine_config = 2
+ * @param[in] var pointer to the variance for the batch normalization, size = in_channels. Pass NULL/0 for affine_config = 2
+ * @param[in] affine_config whether the affine operations are applied
+ * if affine_config = 0, then only mean and var are used
+ * if affine_config = 1, then mean, var, gamma and beta are used for the final computation.
+ * if affine_config = 2, then only the gamma and beta are used. gamma = original_gamma/sqrt(var), beta = original_beta - gamma * mean/sqrt(var)
+ * Note: Use affine_config = 2 for faster calculations. The new gamma and beta would need to be pre-computed, stored and passed
+ * @param[in] gamma pointer to the scaling factors for the post-norm affine operation, size = in_channels. Pass NULL/0 for affine_config = 0
+ * @param[in] beta pointer to the offsets for the post-norm affine operation, size = in_channels. Pass NULL/0 for affine_config = 0
+ * @param[in] in_place in-place computation of the batchnorm. Storage efficient
+ * @param[in] depth_cnn_padding padding for the depth CNN layer. Note: applied to both sides of the input to the depth CNN
  * @param[in] depth_cnn_kernel_size kernel size of the depth CNN
  * @param[in] depth_cnn_params weights, bias and other essential parameters used to describe the depth CNN
- * @param[in] depth_cnn_activations an integer to choose the type of activation function.
+ * @param[in] depth_cnn_stride stride factor for the depth CNN
+ * @param[in] depth_cnn_activation an integer to choose the type of activation function.
  * 0: none
  * 1: sigmoid
  * 2: tanh
  * 3: relu
- *
- * @param[in] point_cnn_hidden hidden state/out_channels dimensions for the point CNN
- * @param[in] point_cnn_padding padding for the point CNN layer. Note: applied to both sides of the input
+ * @param[in] point_cnn_hidden hidden state/out_channels dimensions for the point CNN. The final channel size of this block
+ * @param[in] point_cnn_padding padding for the point CNN layer. Note: applied to both sides of the input to the point CNN
  * @param[in] point_cnn_kernel_size kernel size of the point CNN
  * @param[in] point_cnn_params weights, bias and other essential parameters used to describe the point CNN
- * @param[in] point_cnn_activations an integer to choose the type of activation function.
+ * @param[in] point_cnn_stride stride factor for the point CNN
+ * @param[in] point_cnn_activation an integer to choose the type of activation function.
  * 0: none
  * 1: sigmoid
  * 2: tanh
  * 3: relu
- *
- * @param[in] pool_padding padding for the pool layer. Note: applied to both sides of the input
+ * @param[in] pool_padding padding for the pool layer. Note: applied to both sides of the input to the pool
  * @param[in] pool_kernel_size kernel size of the pool
- * @param[in] pool_activations an integer to choose the type of activation function.
+ * @param[in] pool_stride stride factor for the pool
+ * @param[in] pool_activation an integer to choose the type of activation function.
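Both @param[out] notes leave out_time for the caller to compute from the chain of conv and pool reductions. Below is a hedged sketch of that arithmetic, assuming the standard 1D size formula out = floor((in + 2 * padding - kernel_size) / stride) + 1 for each stage; the helper names are illustrative, and the exact convention should be checked against the repo's conv1d/pool kernels.

```c
// Output length of one 1D conv/pool stage under the standard size formula.
static unsigned conv1d_out_time(unsigned in_time, unsigned padding,
                                unsigned kernel_size, unsigned stride) {
  // Integer division floors for non-negative operands.
  return (in_time + 2 * padding - kernel_size) / stride + 1;
}

// Example: chain the reduction through the depth CNN, point CNN and pool
// stages of the post-RNN block to size output_signal (out_time * channels).
// The *_padding, *_kernel_size and *_stride values are the same arguments
// passed to the block itself.
unsigned block_out_time(unsigned in_time,
                        unsigned depth_padding, unsigned depth_kernel, unsigned depth_stride,
                        unsigned point_padding, unsigned point_kernel, unsigned point_stride,
                        unsigned pool_padding, unsigned pool_kernel, unsigned pool_stride) {
  unsigned t = conv1d_out_time(in_time, depth_padding, depth_kernel, depth_stride);
  t = conv1d_out_time(t, point_padding, point_kernel, point_stride);
  return conv1d_out_time(t, pool_padding, pool_kernel, pool_stride);
}
```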