1 parent b42a816 commit 3064415
R/layers_attention.R renamed to R/layers-attention.R
@@ -1,8 +1,7 @@
-#' Applies Dropout to the input.
+#' Creates attention layer
 #'
-#' Dropout consists in randomly setting a fraction `rate` of input units to 0 at
-#' each update during training time, which helps prevent overfitting.
+#' Dot-product attention layer, a.k.a. Luong-style attention.
 #'
 #' @inheritParams layer_dense
 #' @param inputs a list of inputs first should be the query tensor, the second the value tensor
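For context, a minimal usage sketch of the documented layer, assuming the exported wrapper is layer_attention() and that it follows the inputs convention described above (first element the query tensor, second the value tensor); the input names and shapes below are illustrative only:

library(keras)

# Hypothetical query/value inputs: 10 timesteps of 64 features each
query <- layer_input(shape = c(10, 64))
value <- layer_input(shape = c(10, 64))

# Dot-product (Luong-style) attention over the query/value pair,
# passed as a list per the @param inputs documentation above
attended <- layer_attention(inputs = list(query, value))

model <- keras_model(inputs = list(query, value), outputs = attended)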