Commit ad07a56: Added documentation for new layers
Parent: f3e706e

1 file changed: docs/modules/layers.md (+158, -10 lines)

# Layers

## Input Layer
- **Description**: Represents the input layer of the neural network; a usage sketch follows the list.
- **Functions**:
  - `initialize_input(InputLayer *layer, int input_size)`
  - `forward_input(InputLayer *layer, float *input, float *output)`
  - `backward_input(InputLayer *layer, float *input, float *output, float *d_output, float *d_input)`
  - `free_input(InputLayer *layer)`
- **File**: [`input.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/input.c)
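
Every layer in C-ML follows the same initialize/forward/backward/free lifecycle, shown here for the input layer. This is a minimal sketch: the `Layers/input.h` include path and the assumption that the forward pass copies the input through unchanged are guesses, not taken from this page.

```c
/* Assumed header path; adjust to C-ML's actual include layout. */
#include "Layers/input.h"

int main(void)
{
    InputLayer layer;
    float input[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float output[4];

    initialize_input(&layer, 4);          /* input_size = 4 */
    forward_input(&layer, input, output); /* presumably a pass-through copy */
    free_input(&layer);
    return 0;
}
```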

## Dense Layer
- **Description**: Fully connected layer where each input is connected to each output; a training-step sketch follows the list.
- **Functions**:
  - `initialize_dense(DenseLayer *layer, int input_size, int output_size)`
  - `forward_dense(DenseLayer *layer, float *input, float *output)`
  - `backward_dense(DenseLayer *layer, float *input, float *output, float *d_output, float *d_input, float *d_weights, float *d_biases)`
  - `update_dense(DenseLayer *layer, float *d_weights, float *d_biases, float learning_rate, const char *optimizer_type, float beta1, float beta2, float epsilon)`
  - `free_dense(DenseLayer *layer)`
- **File**: [`dense.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/dense.c)
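
A hedged sketch of one training step. The buffer sizes assume a weight matrix of `input_size * output_size` floats, and the `"adam"` string with Adam-style `beta1`/`beta2`/`epsilon` values is only an inference from the `update_dense` signature; check `dense.c` for the accepted optimizer names.

```c
/* Assumed header path. */
#include "Layers/dense.h"

int main(void)
{
    DenseLayer layer;
    float input[3] = {0.5f, -1.0f, 2.0f};
    float output[2];
    float d_output[2] = {0.1f, -0.2f}; /* gradient flowing back from the next layer */
    float d_input[3], d_weights[3 * 2], d_biases[2];

    initialize_dense(&layer, 3, 2); /* 3 inputs, 2 outputs */
    forward_dense(&layer, input, output);
    backward_dense(&layer, input, output, d_output, d_input, d_weights, d_biases);

    /* "adam" and the hyperparameter values are assumptions inferred from
       the beta1/beta2/epsilon parameters. */
    update_dense(&layer, d_weights, d_biases, 0.01f, "adam", 0.9f, 0.999f, 1e-8f);

    free_dense(&layer);
    return 0;
}
```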

## Dropout Layer
- **Description**: Randomly sets a fraction of input units to zero during training to prevent overfitting; see the sketch below.
- **Functions**:
  - `initialize_dropout(DropoutLayer *layer, float dropout_rate)`
  - `forward_dropout(DropoutLayer *layer, float *input, float *output, int size)`
  - `backward_dropout(DropoutLayer *layer, float *input, float *output, float *d_output, float *d_input, int size)`
  - `free_dropout(DropoutLayer *layer)`
- **File**: [`dropout.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/dropout.c)
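
Dropout's forward and backward passes take an explicit element count. A minimal sketch, assuming the header path; whether C-ML rescales surviving units by `1 / (1 - dropout_rate)` (inverted dropout) is not stated here.

```c
/* Assumed header path. */
#include "Layers/dropout.h"

int main(void)
{
    DropoutLayer layer;
    float input[8], output[8];
    for (int i = 0; i < 8; i++)
        input[i] = (float)i;

    initialize_dropout(&layer, 0.5f);          /* zero out roughly half the units */
    forward_dropout(&layer, input, output, 8); /* size = 8 elements */

    free_dropout(&layer);
    return 0;
}
```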

## Flatten Layer
- **Description**: Flattens the input without affecting the batch size.
- **Functions**:
  - `initialize_flatten(FlattenLayer *layer, int input_size)`
  - `forward_flatten(FlattenLayer *layer, float *input, float *output)`
  - `backward_flatten(FlattenLayer *layer, float *input, float *output, float *d_output, float *d_input)`
  - `free_flatten(FlattenLayer *layer)`
- **File**: [`flatten.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/flatten.c)

## Reshape Layer
- **Description**: Reshapes the input tensor to a specified shape.
- **Functions**:
  - `initialize_reshape(ReshapeLayer *layer, int input_size, int output_size)`
  - `forward_reshape(ReshapeLayer *layer, float *input, float *output)`
  - `backward_reshape(ReshapeLayer *layer, float *d_output, float *d_input)`
  - `free_reshape(ReshapeLayer *layer)`
- **File**: [`reshape.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/reshape.c)

## Pooling Layer
- **Description**: Reduces the spatial size of the input volume; an output-sizing sketch follows the list.
- **Functions**:
  - `initialize_pooling(PoolingLayer *layer, int kernel_size, int stride)`
  - `compute_pooling_output_size(int input_size, int kernel_size, int stride)`
  - `forward_pooling(PoolingLayer *layer, const float *input, float *output, int input_size)`
  - `backward_pooling(PoolingLayer *layer, const float *input, const float *output, const float *d_output, float *d_input, int input_size)`
  - `free_pooling(PoolingLayer *layer)`
- **File**: [`pooling.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/pooling.c)
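
`compute_pooling_output_size` lets callers size the output buffer before running the forward pass. The sketch below assumes the standard unpadded formula `(input_size - kernel_size) / stride + 1`, which is an inference to verify against `pooling.c`, not something confirmed by this page.

```c
/* Assumed header path. */
#include "Layers/pooling.h"

int main(void)
{
    PoolingLayer layer;
    float input[10], output[5];
    for (int i = 0; i < 10; i++)
        input[i] = (float)(i % 3);

    initialize_pooling(&layer, 2, 2); /* kernel_size = 2, stride = 2 */

    /* If the usual formula applies: (10 - 2) / 2 + 1 = 5 outputs. */
    int out_size = compute_pooling_output_size(10, 2, 2);
    (void)out_size;

    forward_pooling(&layer, input, output, 10);
    free_pooling(&layer);
    return 0;
}
```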

## Max-Pooling Layer
- **Description**: Applies a max-pooling operation to the input.
- **Functions**:
  - `initialize_maxpooling(MaxPoolingLayer *layer, int kernel_size, int stride)`
  - `compute_maxpooling_output_size(int input_size, int kernel_size, int stride)`
  - `forward_maxpooling(MaxPoolingLayer *layer, const float *input, float *output, int input_size)`
  - `backward_maxpooling(MaxPoolingLayer *layer, const float *input, const float *output, const float *d_output, float *d_input, int input_size)`
  - `free_maxpooling(MaxPoolingLayer *layer)`
  - `validate_maxpooling_params(const int kernel_size, const int stride)`
- **File**: [`maxpooling.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/maxpooling.c)

## Conv1D Layer
- **Description**: Implements the 1D convolution operation; a sizing sketch follows the list.
- **Functions**:
  - `initialize_conv1d(Conv1DLayer *layer, const int input_channels, const int output_channels, const int kernel_size, const int input_length, const int padding, const int stride, const int dilation)`
  - `forward_conv1d(const Conv1DLayer *layer, const float *input, float *output)`
  - `backward_conv1d(const Conv1DLayer *layer, const float *input, const float *output, const float *d_output, float *d_input)`
  - `update_conv1d(Conv1DLayer *layer, float *d_weights, float *d_biases, float learning_rate, const char *optimizer_type, float beta1, float beta2, float epsilon)`
  - `free_conv1d(Conv1DLayer *layer)`
- **File**: [`conv1d.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/conv1d.c)
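
The initializer takes the full convolution geometry up front. In the sketch below, the output length follows the usual convolution arithmetic, `(input_length + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1`; both that formula and the channel-major buffer layout are assumptions to verify against `conv1d.c`.

```c
/* Assumed header path. */
#include "Layers/conv1d.h"

int main(void)
{
    Conv1DLayer layer;

    /* 1 input channel, 2 output channels, kernel 3, length 8,
       padding 1, stride 1, dilation 1. */
    initialize_conv1d(&layer, 1, 2, 3, 8, 1, 1, 1);

    float input[1 * 8] = {0};
    /* Assumed output length: (8 + 2*1 - 1*(3-1) - 1) / 1 + 1 = 8,
       so 2 channels * 8 positions = 16 floats. */
    float output[2 * 8] = {0};

    forward_conv1d(&layer, input, output);
    free_conv1d(&layer);
    return 0;
}
```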

## Conv1D Transpose Layer
- **Description**: Implements the 1D transposed convolution operation; see the sketch below.
- **Functions**:
  - `initialize_conv1d_transpose(Conv1DTransposeLayer *layer, int input_channels, int output_channels, int kernel_size, int input_length, int padding, int stride, int dilation)`
  - `forward_conv1d_transpose(Conv1DTransposeLayer *layer, float *input, float *output)`
  - `backward_conv1d_transpose(Conv1DTransposeLayer *layer, float *input, float *d_output, float *d_input)`
  - `update_conv1d_transpose(Conv1DTransposeLayer *layer, float *d_weights, float *d_biases, float learning_rate, const char *optimizer_type, float beta1, float beta2, float epsilon)`
  - `free_conv1d_transpose(Conv1DTransposeLayer *layer)`
- **File**: [`conv1d_transpose.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/conv1d_transpose.c)
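
Transposed convolution typically upsamples its input; the conventional output-length formula is `(input_length - 1)*stride - 2*padding + dilation*(kernel_size - 1) + 1`. The sketch assumes that formula holds here, which should be checked against `conv1d_transpose.c`.

```c
/* Assumed header path. */
#include "Layers/conv1d_transpose.h"

int main(void)
{
    Conv1DTransposeLayer layer;

    /* 2 input channels, 1 output channel, kernel 3, length 4,
       padding 0, stride 2, dilation 1. */
    initialize_conv1d_transpose(&layer, 2, 1, 3, 4, 0, 2, 1);

    float input[2 * 4] = {0};
    /* Assumed output length: (4 - 1)*2 - 2*0 + 1*(3 - 1) + 1 = 9. */
    float output[1 * 9] = {0};

    forward_conv1d_transpose(&layer, input, output);
    free_conv1d_transpose(&layer);
    return 0;
}
```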

## Conv2D Layer
- **Description**: Implements the 2D convolution operation.
- **Functions**:
  - `initialize_conv2d(Conv2DLayer *layer, const int input_channels, const int output_channels, const int kernel_size, const int input_height, const int input_width, const int padding, const int stride, const int dilation)`
  - `forward_conv2d(Conv2DLayer *layer, const float *input, float *output)`
  - `backward_conv2d(Conv2DLayer *layer, const float *input, const float *output, const float *d_output, float *d_input)`
  - `update_conv2d(Conv2DLayer *layer, float *d_weights, float *d_biases, float learning_rate, const char *optimizer_type, float beta1, float beta2, float epsilon)`
  - `free_conv2d(Conv2DLayer *layer)`
- **File**: [`conv2d.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/conv2d.c)

## Conv2D Transpose Layer
- **Description**: Implements the 2D transposed convolution operation.
- **Functions**:
  - `initialize_conv2d_transpose(Conv2DTransposeLayer *layer, int input_channels, int output_channels, int kernel_size, int input_height, int input_width, int padding, int stride, int dilation)`
  - `forward_conv2d_transpose(Conv2DTransposeLayer *layer, float *input, float *output)`
  - `backward_conv2d_transpose(Conv2DTransposeLayer *layer, float *input, float *output, float *d_output, float *d_input)`
  - `update_conv2d_transpose(Conv2DTransposeLayer *layer, float *d_weights, float *d_biases, float learning_rate)`
  - `free_conv2d_transpose(Conv2DTransposeLayer *layer)`
- **File**: [`conv2d_transpose.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/conv2d_transpose.c)

## BatchNorm Layer
- **Description**: Normalizes the input to improve training stability; a save/load sketch follows the list.
- **Functions**:
  - `initialize_batchnorm(BatchNormLayer *layer, int num_features)`
  - `forward_batchnorm(BatchNormLayer *layer, float *input, float *output, float *mean, float *variance)`
  - `backward_batchnorm(BatchNormLayer *layer, float *input, float *d_output, float *d_input, float *d_gamma, float *d_beta, float *mean, float *variance)`
  - `update_batchnorm(BatchNormLayer *layer, float *d_gamma, float *d_beta, float learning_rate)`
  - `free_batchnorm(BatchNormLayer *layer)`
  - `save_batchnorm_params(BatchNormLayer *layer, const char *filename)`
  - `load_batchnorm_params(BatchNormLayer *layer, const char *filename)`
- **File**: [`batchnorm.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/batchnorm.c)
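
Unlike most other layers, BatchNorm can persist its learned parameters. A sketch assuming the header path and that `mean`/`variance` are per-feature scratch buffers filled by the forward pass; the save-file format is whatever `batchnorm.c` defines.

```c
/* Assumed header path. */
#include "Layers/batchnorm.h"

int main(void)
{
    BatchNormLayer layer;
    float input[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float output[4], mean[4], variance[4];

    initialize_batchnorm(&layer, 4); /* num_features = 4 */
    forward_batchnorm(&layer, input, output, mean, variance);

    /* Round-trip the learned gamma/beta parameters through a file. */
    save_batchnorm_params(&layer, "batchnorm_params.bin");
    load_batchnorm_params(&layer, "batchnorm_params.bin");

    free_batchnorm(&layer);
    return 0;
}
```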

## Embedding Layer
- **Description**: Converts categorical data into dense vector representations; see the sketch below.
- **Functions**:
  - `initialize_embedding(EmbeddingLayer *layer, int vocab_size, int embedding_dim)`
  - `forward_embedding(EmbeddingLayer *layer, const int *input, float *output, int input_size)`
  - `backward_embedding(EmbeddingLayer *layer, const int *input, float *d_output, float *d_weights, int input_size)`
  - `update_embedding(EmbeddingLayer *layer, float *d_weights, float learning_rate)`
  - `free_embedding(EmbeddingLayer *layer)`
  - `save_embedding_weights(EmbeddingLayer *layer, const char *filename)`
  - `load_embedding_weights(EmbeddingLayer *layer, const char *filename)`
- **File**: [`embedding.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/embedding.c)
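
The embedding layer is the only one whose forward pass takes integer token indices rather than floats. A sketch assuming the output buffer holds `input_size * embedding_dim` floats laid out contiguously, one vector per token:

```c
/* Assumed header path. */
#include "Layers/embedding.h"

int main(void)
{
    EmbeddingLayer layer;
    int tokens[3] = {4, 7, 1}; /* indices into a vocabulary of 10 */
    float output[3 * 5];       /* 3 tokens, 5 floats each (assumed layout) */

    initialize_embedding(&layer, 10, 5); /* vocab_size = 10, embedding_dim = 5 */
    forward_embedding(&layer, tokens, output, 3);

    free_embedding(&layer);
    return 0;
}
```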

## LSTM Layer
- **Description**: Long Short-Term Memory layer for sequential data; a stepping sketch follows the list.
- **Functions**:
  - `initialize_lstm(LSTMLayer *layer, int input_size, int hidden_size)`
  - `forward_lstm(LSTMLayer *layer, float *input, float *output)`
  - `backward_lstm(LSTMLayer *layer, float *input, float *output, float *d_output, float *d_input)`
  - `update_lstm(LSTMLayer *layer, float *d_weights_input, float *d_weights_hidden, float *d_biases, float learning_rate)`
  - `reset_state_lstm(LSTMLayer *layer)`
  - `free_lstm(LSTMLayer *layer)`
- **File**: [`lstm.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/lstm.c)
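
The presence of `reset_state_lstm` suggests the layer carries hidden and cell state across forward calls, so one call per timestep with a reset between sequences is the natural usage. That stepping convention is an assumption, not confirmed by this page.

```c
/* Assumed header path. */
#include "Layers/lstm.h"

int main(void)
{
    LSTMLayer layer;
    float step_input[4], output[8];

    initialize_lstm(&layer, 4, 8); /* input_size = 4, hidden_size = 8 */

    /* Feed one timestep per call; state presumably persists between calls. */
    for (int t = 0; t < 10; t++)
    {
        for (int i = 0; i < 4; i++)
            step_input[i] = 0.1f * (float)(t + i);
        forward_lstm(&layer, step_input, output);
    }

    reset_state_lstm(&layer); /* clear state before the next sequence */
    free_lstm(&layer);
    return 0;
}
```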
144+
145+
## GRU Layer
146+
- **Description**: Gated Recurrent Unit layer for sequential data.
147+
- **Functions**:
148+
- `initialize_gru(GRULayer *layer, int input_size, int hidden_size)`
149+
- `forward_gru(GRULayer *layer, float *input, float *output)`
150+
- `backward_gru(GRULayer *layer, float *input, float *output, float *d_output, float *d_input)`
151+
- `update_gru(GRULayer *layer, float learning_rate)`
152+
- `reset_state_gru(GRULayer *layer)`
153+
- `free_gru(GRULayer *layer)`
154+
- **File**: [`gru.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/gru.c)
155+
156+
## Bidirectional LSTM Layer
157+
- **Description**: Combines forward and backward LSTM layers to capture context from both directions in sequential data.
158+
- **Functions**:
159+
- `initialize_bidirectional_lstm(BidirectionalLSTMLayer *layer, int input_size, int hidden_size)`
160+
- `forward_bidirectional_lstm(BidirectionalLSTMLayer *layer, float *input, float *output, int input_size, int output_size)`
161+
- `backward_bidirectional_lstm(BidirectionalLSTMLayer *layer, float *input, float *d_output, float *d_input, int input_size, int output_size)`
162+
- `reset_state_bidirectional_lstm(BidirectionalLSTMLayer *layer)`
163+
- `free_bidirectional_lstm(BidirectionalLSTMLayer *layer)`
164+
- **File**: [`bidirectional_lstm.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/bidirectional_lstm.c)
165+
166+
## Attention Layer
167+
- **Description**: Implements attention mechanism for sequence-to-sequence models.
168+
- **Functions**:
169+
- `initialize_attention(AttentionLayer *layer, int query_dim, int key_dim, int value_dim)`
170+
- `forward_attention(AttentionLayer *layer, float *query, float *key, float *value, float *output, const char *optimizer_type)`
171+
- `backward_attention(AttentionLayer *layer, float *query, float *key, float *value, float *d_output, float *d_input)`
172+
- `update_attention(AttentionLayer *layer, const float *d_weights_query, const float *d_weights_key, const float *d_weights_value, float learning_rate)`
173+
- `free_attention(AttentionLayer *layer)`
174+
- **File**: [`attention.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/attention.c)
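
Note that `forward_attention` takes an `optimizer_type` string, which the other forward passes do not. The sketch treats query, key, and value as single vectors of their declared dimensions; both that shape and the `"adam"` string are assumptions to check against `attention.c`.

```c
/* Assumed header path. */
#include "Layers/attention.h"

int main(void)
{
    AttentionLayer layer;
    float query[4] = {0.1f, 0.2f, 0.3f, 0.4f};
    float key[4]   = {0.4f, 0.3f, 0.2f, 0.1f};
    float value[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float output[4];

    initialize_attention(&layer, 4, 4, 4); /* query_dim = key_dim = value_dim = 4 */
    forward_attention(&layer, query, key, value, output, "adam");

    free_attention(&layer);
    return 0;
}
```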
175+
176+
## Additive Attention Layer
177+
- **Description**: Implements additive attention mechanism.
178+
- **Functions**:
179+
- `initialize_additive_attention(AdditiveAttentionLayer *layer, int query_dim, int key_dim, int value_dim)`
180+
- `forward_additive_attention(AdditiveAttentionLayer *layer, float *query, float *key, float *value, float *output, const char *optimizer_type)`
181+
- `backward_additive_attention(AdditiveAttentionLayer *layer, float *query, float *key, float *value, float *d_output, float *d_input)`
182+
- `update_additive_attention(AdditiveAttentionLayer *layer, const float *d_weights_query, const float *d_weights_key, const float *d_weights_value, const float *d_bias, float learning_rate)`
183+
- `free_additive_attention(AdditiveAttentionLayer *layer)`
184+
- **File**: [`additive_attention.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/additive_attention.c)
185+
186+
## Multi-Head Attention Layer
187+
- **Description**: Implements multi-head attention mechanism.
188+
- **Functions**:
189+
- `initialize_multi_head_attention(MultiHeadAttentionLayer *layer, int query_dim, int key_dim, int value_dim, int num_heads)`
190+
- `forward_multi_head_attention(MultiHeadAttentionLayer *layer, const float *query, const float *key, const float *value, float *output)`
191+
- `backward_multi_head_attention(MultiHeadAttentionLayer *layer, const float *query, const float *key, const float *value, const float *d_output, float *d_query, float *d_key, float *d_value)`
192+
- `update_multi_head_attention(MultiHeadAttentionLayer *layer, float *d_weights_query, float *d_weights_key, float *d_weights_value, float *d_weights_output, float learning_rate)`
193+
- `free_multi_head_attention(MultiHeadAttentionLayer *layer)`
194+
- **File**: [`multi_head_attention.c`](https://github.com/jaywyawhare/C-ML/tree/master/src/Layers/multi_head_attention.c)
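
Unlike the single-head layers, the multi-head backward pass returns separate gradients for query, key, and value. A sketch with 8-dimensional projections split across 2 heads; the buffer shapes are assumptions.

```c
/* Assumed header path. */
#include "Layers/multi_head_attention.h"

int main(void)
{
    MultiHeadAttentionLayer layer;
    float query[8] = {0}, key[8] = {0}, value[8] = {0};
    float output[8];
    float d_output[8] = {0}, d_query[8], d_key[8], d_value[8];

    initialize_multi_head_attention(&layer, 8, 8, 8, 2); /* 2 heads */

    forward_multi_head_attention(&layer, query, key, value, output);
    backward_multi_head_attention(&layer, query, key, value, d_output,
                                  d_query, d_key, d_value);

    free_multi_head_attention(&layer);
    return 0;
}
```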
195+

0 commit comments

Comments
 (0)