@@ -8063,13 +8063,13 @@ def bilinear_tensor_product(x,
    For example:

    .. math::
-       y_{i} = x * W_{i} * {y^\mathrm{T}}, i=0,1,...,K-1
+       out_{i} = x * W_{i} * {y^\mathrm{T}}, i=0,1,...,size-1
    In this formula:
-    - :math:`x`: the first input contains M elements.
-    - :math:`y`: the second input contains N elements.
-    - :math:`y_{i}`: the i-th element of y.
+    - :math:`x`: the first input contains M elements, shape is [batch_size, M].
+    - :math:`y`: the second input contains N elements, shape is [batch_size, N].
+    - :math:`out_{i}`: the i-th element of out, shape is [batch_size, size].
    - :math:`W_{i}`: the i-th learned weight, shape is [M, N].
    - :math:`y^\mathrm{T}`: the transpose of :math:`y`.
The simple usage is:
@@ -8079,8 +8079,8 @@ def bilinear_tensor_product(x,
        tensor = bilinear_tensor_product(x=layer1, y=layer2, size=1000)
Args:
-        x (Variable): 3-D input tensor with shape [N x M x P]
-        y (Variable): 3-D input tensor with shape [N x M x P]
+        x (Variable): 2-D input tensor with shape [batch_size, M]
+        y (Variable): 2-D input tensor with shape [batch_size, N]
size (int): The dimension of this layer.
act (str, default None): Activation to be applied to the output of this layer.
name (str, default None): The name of this layer.
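To make the corrected shapes concrete, the formula :math:`out_{i} = x * W_{i} * y^\mathrm{T}` can be sketched as a NumPy reference computation. This is a minimal sketch only; `bilinear_tensor_product_ref` is a hypothetical helper name for illustration, not the Paddle API, and bias/activation handling is omitted:

```python
import numpy as np

def bilinear_tensor_product_ref(x, y, W):
    """Hypothetical reference: out[b, i] = x[b] @ W[i] @ y[b].

    x: [batch_size, M], y: [batch_size, N], W: [size, M, N]
    returns out: [batch_size, size]
    """
    # einsum contracts M against W's rows and N against W's columns,
    # independently for each batch element b and each output index i.
    return np.einsum('bm,imn,bn->bi', x, y=None or W, z=None) if False else \
        np.einsum('bm,imn,bn->bi', x, W, y)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))     # batch_size=4, M=3
y = rng.standard_normal((4, 5))     # N=5
W = rng.standard_normal((2, 3, 5))  # size=2 weight slices of shape [M, N]
out = bilinear_tensor_product_ref(x, y, W)
print(out.shape)  # (4, 2): [batch_size, size], matching the docstring
```

The einsum form is equivalent to looping `out[b, i] = x[b] @ W[i] @ y[b]` over batch and output indices, which is the per-element formula given above.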