Commit 8fe7bf3

Merge pull request #4823 from luotao1/doc
remove duplicated doc/tutorials, and rename tutorials to v1_api_tutorials
2 parents: 5c5250e + 8fd07c9


67 files changed: +10 −3106 lines

doc/howto/deep_model/rnn/rnn_config_cn.rst (2 additions, 2 deletions)
@@ -21,7 +21,7 @@ The wmt14 data is provided in `python/paddle/v2/dataset/wmt14.py <https://github
 
 A recurrent neural network processes a sequence sequentially, one time step at a time. An example LSTM architecture is shown below.
 
-.. image:: ../../../tutorials/sentiment_analysis/bi_lstm.jpg
+.. image:: src/bi_lstm.jpg
    :align: center
 
 Generally speaking, a recurrent network performs the following operations from :math:`t=1` to :math:`t=T`, or reversely from :math:`t=T` to :math:`t=1`.
@@ -96,7 +96,7 @@ Sequence to Sequence Model with Attention
 We will use the sequence to sequence model with attention
 as an example to demonstrate how to configure a complex recurrent neural network model. The model is illustrated in the figure below.
 
-.. image:: ../../../tutorials/text_generation/encoder-decoder-attention-model.png
+.. image:: src/encoder-decoder-attention-model.png
    :align: center
 
 In this model, the source sequence :math:`S = \{s_1, \dots, s_T\}`

doc/howto/deep_model/rnn/rnn_config_en.rst (2 additions, 2 deletions)
@@ -19,7 +19,7 @@ Simple Gated Recurrent Neural Network
 
 A recurrent neural network processes a sequence sequentially, one time step at a time. An example of the LSTM architecture is listed below.
 
-.. image:: ../../../tutorials/sentiment_analysis/src/bi_lstm.jpg
+.. image:: src/bi_lstm.jpg
    :align: center
 
 Generally speaking, a recurrent network performs the following operations from :math:`t=1` to :math:`t=T`, or reversely from :math:`t=T` to :math:`t=1`.
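
The context line above states that a recurrent network performs its operations from :math:`t=1` to :math:`t=T`, or reversely from :math:`t=T` to :math:`t=1`. As a rough illustration of that statement only, here is a minimal NumPy sketch of a vanilla recurrent step run in either direction; it is not the LSTM/GRU layer the tutorial actually configures, and every name in it is hypothetical:

import numpy as np

def simple_rnn(xs, W_x, W_h, b, reverse=False):
    """Apply h_t = tanh(W_x @ x_t + W_h @ h_{t-1} + b) over a sequence.

    xs: list of input vectors x_1..x_T; reverse=True walks t=T down to t=1.
    """
    T, hidden = len(xs), W_h.shape[0]
    h = np.zeros(hidden)                      # initial state (h_0, or h_{T+1} when reversed)
    hs = [None] * T
    order = range(T - 1, -1, -1) if reverse else range(T)
    for t in order:
        h = np.tanh(W_x @ xs[t] + W_h @ h + b)
        hs[t] = h                             # keep states aligned with the input order
    return hs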
@@ -78,7 +78,7 @@ Sequence to Sequence Model with Attention
 -----------------------------------------
 We will use the sequence to sequence model with attention as an example to demonstrate how you can configure complex recurrent neural network models. An illustration of the sequence to sequence model with attention is shown in the following figure.
 
-.. image:: ../../../tutorials/text_generation/encoder-decoder-attention-model.png
+.. image:: src/encoder-decoder-attention-model.png
    :align: center
 
 In this model, the source sequence :math:`S = \{s_1, \dots, s_T\}` is encoded with a bidirectional gated recurrent neural network. The hidden states of the bidirectional gated recurrent neural network, :math:`H_S = \{H_1, \dots, H_T\}`, are called the *encoder vector*. The decoder is a gated recurrent neural network. When decoding each token :math:`y_t`, the gated recurrent neural network generates a set of weights :math:`W_S^t = \{W_1^t, \dots, W_T^t\}`, which are used to compute a weighted sum of the encoder vector. This weighted sum is used to condition the generation of the token :math:`y_t`.
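
The context line above compresses the whole attention step: when decoding token :math:`y_t`, the weights :math:`W_1^t, \dots, W_T^t` over the encoder states :math:`H_1, \dots, H_T` form a weighted sum that conditions generation. A minimal NumPy sketch of just that weighted-sum step follows; the dot-product scoring is an assumption on my part (the tutorial's network learns its alignment weights), and all names are hypothetical:

import numpy as np

def attention_context(encoder_states, decoder_state):
    """Weighted sum of encoder states H_1..H_T for one decoding step t.

    encoder_states: array of shape (T, hidden); decoder_state: shape (hidden,).
    """
    scores = encoder_states @ decoder_state   # one alignment score per H_i (assumed dot product)
    weights = np.exp(scores - scores.max())   # softmax -> W_1^t .. W_T^t
    weights /= weights.sum()
    return weights @ encoder_states           # context vector that conditions y_t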
Two binary files changed (−456 KB); binary contents not shown.

doc/tutorials/image_classification/index_cn.md (0 additions, 205 deletions)

This file was deleted.
