@@ -2722,16 +2723,30 @@ class BiDynamicRNNLayer(Layer):
         The arguments for the cell initializer.
     n_hidden : a int
         The number of hidden units in the layer.
-    n_steps : a int
-        The sequence length.
+    initializer : initializer
+        The initializer for initializing the parameters.
+    sequence_length : a tensor, array or None
+        The sequence length of each row of input data, see ``Advanced Ops for Dynamic RNN``.
+            - If None, it uses ``retrieve_seq_length_op`` to compute the sequence_length, i.e. when the features of the padding (on the right-hand side) are all zeros.
+            - If using word embedding, you may need to compute the sequence_length from the ID array (the integer features before word embedding) by using ``retrieve_seq_length_op2`` or ``retrieve_seq_length_op``.
+            - You can also input a numpy array.
+            - More details about TensorFlow dynamic_rnn in `Wild-ML Blog <http://www.wildml.com/2016/08/rnns-in-tensorflow-a-practical-guide-and-undocumented-features/>`_.
+    fw_initial_state : None or forward RNN State
+        If None, initial_state is zero_state.
+    bw_initial_state : None or backward RNN State
+        If None, initial_state is zero_state.
+    dropout : `tuple` of `float`: (input_keep_prob, output_keep_prob).
+        The input and output keep probability.
+    n_layer : an int, default is 1.
+        The number of RNN layers.
     return_last : boolean
         If True, return the last output, "Sequence input and single output"\n
         If False, return all outputs, "Synced sequence input and output"\n
         In other word, if you want to apply one or more RNN(s) on this layer, set to False.
     return_seq_2d : boolean
-        When return_last = False\n
-        if True, return 2D Tensor [n_example, n_hidden], for stacking DenseLayer after it.
-        if False, return 3D Tensor [n_example/n_steps, n_steps, n_hidden], for stacking multiple RNN after it.
+        - When return_last = False
+        - If True, return 2D Tensor [n_example, 2 * n_hidden], for stacking DenseLayer or computing cost after it.
+        - If False, return 3D Tensor [n_example/n_steps(max), n_steps(max), 2 * n_hidden], for stacking multiple RNN after it.
     name : a string or None
         An optional name to attach to this layer.
 
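The hunk above documents that when `sequence_length` is None, the layer falls back to ``retrieve_seq_length_op``, which infers each row's length from right-hand zero padding. A minimal numpy sketch of that idea, under the stated padding assumption (the helper name `infer_seq_lengths` is illustrative, not the library's actual implementation):

```python
import numpy as np

def infer_seq_lengths(batch):
    """Count non-padding steps per row, assuming right-hand zero padding.

    batch : float array of shape [batch_size, n_steps, n_features].
    A step counts as real data if any of its features is non-zero.
    """
    nonzero = np.any(batch != 0, axis=2)   # [batch_size, n_steps] bool mask
    return nonzero.sum(axis=1)             # [batch_size] lengths

# Two sequences padded to 4 steps: true lengths are 3 and 2.
x = np.array([[[1., 2.], [3., 0.], [0., 5.], [0., 0.]],
              [[1., 1.], [2., 2.], [0., 0.], [0., 0.]]])
print(infer_seq_lengths(x))  # -> [3 2]
```

For integer ID arrays before word embedding (the ``retrieve_seq_length_op2`` case mentioned above), the same any-nonzero counting applies over a 2D [batch_size, n_steps] array of token IDs, assuming ID 0 is the pad token.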
@@ -2740,20 +2755,23 @@ class BiDynamicRNNLayer(Layer):
     outputs : a tensor
         The output of this RNN.
         return_last = False, outputs = all cell_output, which is the hidden state.
-            cell_output.get_shape() = (?, n_hidden)
+            cell_output.get_shape() = (?, 2 * n_hidden)
 
-    final_state : a tensor or StateTuple
+    fw(bw)_final_state : a tensor or StateTuple
         When state_is_tuple = False,
         it is the final hidden and cell states, states.get_shape() = [?, 2 * n_hidden].\n
         When state_is_tuple = True, it stores two elements: (c, h), in that order.
         You can get the final state after each iteration during training, then
         feed it to the initial state of next iteration.
 
-    initial_state : a tensor or StateTuple
+    fw(bw)_initial_state : a tensor or StateTuple
         It is the initial state of this RNN layer, you can use it to initialize
         your state at the begining of each epoch or iteration according to your
         training procedure.
 
+    sequence_length : a tensor or array, shape = [batch_size]
+        The sequence lengths computed by Advanced Ops or the given sequence lengths.
+
     Notes
     -----
     Input dimension should be rank 3 : [batch_size, n_steps(max), n_features], if no, please see :class:`ReshapeLayer`.
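The output-shape conventions in the diff above (2 * n_hidden from concatenating forward and backward states, the return_seq_2d flattening, and return_last) can be sketched with plain numpy, shapes only, no actual RNN computation:

```python
import numpy as np

batch_size, n_steps, n_hidden = 4, 5, 8

# Bidirectional outputs: forward and backward hidden states are
# concatenated on the feature axis, giving 2 * n_hidden per step.
fw = np.zeros((batch_size, n_steps, n_hidden))
bw = np.zeros((batch_size, n_steps, n_hidden))
outputs = np.concatenate([fw, bw], axis=-1)
print(outputs.shape)     # (4, 5, 16) = [batch_size, n_steps, 2 * n_hidden]

# return_seq_2d = True: flatten time into the batch axis so a DenseLayer
# (or a per-step cost) can be stacked directly on top.
outputs_2d = outputs.reshape(-1, 2 * n_hidden)
print(outputs_2d.shape)  # (20, 16) = [batch_size * n_steps, 2 * n_hidden]

# return_last = True: keep only the final step, shape (?, 2 * n_hidden).
last = outputs[:, -1, :]
print(last.shape)        # (4, 16)
```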