# This is the 1st commit message:
Conv1d and batchnorm1d layers
# This is the commit message #2:
Created BatchNorm1d, TanhGate activation and dscnn blocks
# This is the commit message #3:
Trials for the keyword spotting sub-modules
# This is the commit message #4:
Update README
# This is the commit message #5:
Keyword spotting network and test-benches created
# This is the commit message #6:
Clocking keyword detection trials
# This is the commit message #7:
Moving large files via git lfs
 * @brief Model parameters for the 1D Convolution Layer
 * @var W1 pointer to the 1st low-rank component of the weights, size = out_channels * rank
 * @var W2 pointer to the 2nd low-rank component of the weights, size for regular = rank * in_channels * kernel_size, size for depthwise = rank * kernel_size
 * @var B pointer to the bias vector for the convolution, shape = [out_channels]
 * @var rank rank of the weight tensor. A low-rank decomposition is typically used to reduce computation and storage
 */
typedef struct ConvLayers_LR_Params {
    float* W1;
    float* W2;
    float* B;
    unsigned rank;
} ConvLayers_LR_Params;

/**
 * @brief Model definition for the 1D Low-Rank Convolution Layer
 * @brief Identical to the non-low-rank form; the one modification is that the multiplication of the two weight components is handled within the layer
 * @param[out] output_signal pointer to the output signal, size = out_T * out_channels
 * @param[in] out_T number of time steps in the output
 * @param[in] out_channels number of output channels for the output of the conv layer
 * @param[in] input_signal pointer to the input signal, size = in_T * in_channels
 * @param[in] in_T number of time steps in the input
 * @param[in] in_channels number of input channels
 * @param[in] padding padding applied to the input before the conv is performed. Note: padding is applied to both the start and end
 * @param[in] kernel_size kernel size of the conv filter
 * @param[in] params weights, bias and other essential parameters used to describe the layer
 * @param[in] activations an integer to choose the type of activation function.
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT license.

#ifndef __DSCNN__
#define __DSCNN__

#include "conv1d.h"
#include "conv_utils.h"
#include <stdlib.h>
#include <math.h>

/**
 * @brief Model definition for the 1D Convolution sub-block applied before the RNN
 * @brief sub-layers : BatchNorm1d -> Conv1D_LR
 *
 * @param[out] output_signal pointer to the final output signal, minimum size = out_T * in_channels. out_T has to be calculated based on the reduction from all the conv and pool layers
 * @param[in] input_signal pointer to the input signal, size = in_T * in_channels
 * @param[in] in_T number of time steps in the input
 * @param[in] in_channels number of input channels. The output will have the same number of channels
 *
 * @param[in] mean pointer to the mean for the batch normalization, size = in_channels
 * @param[in] var pointer to the variance for the batch normalization, size = in_channels
 * @param[in] affine whether the affine operations are applied
 * @param[in] gamma pointer to the scaling factors for the post-norm affine operation, size = in_channels
 * @param[in] beta pointer to the scalar offsets for the post-norm affine operation, size = in_channels
 * @param[in] in_place in-place computation of the batchnorm. Storage efficient
 *
 * @param[in] cnn_hidden hidden state/out_channels dimensions for the CNN
 * @param[in] cnn_padding padding for the CNN layer. Note: applied to both sides of the input
 * @param[in] cnn_kernel_size kernel size of the CNN
 * @param[in] cnn_params weights, bias and other essential parameters used to describe the CNN
 * @param[in] cnn_activations an integer to choose the type of activation function.
 * @param[out] output_signal pointer to the final output signal, minimum size = out_T * in_channels. out_T has to be calculated based on the reduction from all the conv and pool layers
 * @param[in] input_signal pointer to the input signal, size = in_T * in_channels
 * @param[in] in_T number of time steps in the input
 * @param[in] in_channels number of input channels. The output will have the same number of channels
 *
 * @param[in] mean pointer to the mean for the batch normalization, size = in_channels
 * @param[in] var pointer to the variance for the batch normalization, size = in_channels
 * @param[in] affine whether the affine operations are applied
 * @param[in] gamma pointer to the scaling factors for the post-norm affine operation, size = in_channels
 * @param[in] beta pointer to the scalar offsets for the post-norm affine operation, size = in_channels
 * @param[in] in_place in-place computation of the batchnorm. Storage efficient
 *
 * @param[in] depth_cnn_hidden hidden state/out_channels dimensions for the depth CNN
 * @param[in] depth_cnn_padding padding for the depth CNN layer. Note: applied to both sides of the input
 * @param[in] depth_cnn_kernel_size kernel size of the depth CNN
 * @param[in] depth_cnn_params weights, bias and other essential parameters used to describe the depth CNN
 * @param[in] depth_cnn_activations an integer to choose the type of activation function.
 *            0: none
 *            1: sigmoid
 *            2: tanh
 *            3: relu
 *
 * @param[in] point_cnn_hidden hidden state/out_channels dimensions for the point CNN
 * @param[in] point_cnn_padding padding for the point CNN layer. Note: applied to both sides of the input
 * @param[in] point_cnn_kernel_size kernel size of the point CNN
 * @param[in] point_cnn_params weights, bias and other essential parameters used to describe the point CNN
 * @param[in] point_cnn_activations an integer to choose the type of activation function.
 *            0: none
 *            1: sigmoid
 *            2: tanh
 *            3: relu
 *
 * @param[in] pool_padding padding for the pool layer. Note: applied to both sides of the input
 * @param[in] pool_kernel_size kernel size of the pool
 * @param[in] pool_activations an integer to choose the type of activation function.