Refine documentation of hierarchical-rnn.rst #508
Conversation
I'd like to keep this part: 1) comparing against the single-layer version lets users confirm their model configuration is correct; 2) two-level RNNs have many usage patterns, e.g. variable-length inputs, various kinds of memory, and beam search (not yet written up), and one simple network can't cover them all; 3) it helps future development, since this part can also be seen as documentation for the unit tests. As for where exactly to put it, take another look; it could also be pulled out as a standalone document. I agree with all your other points. |
I think the unit-test cases can stay, but forcing the documentation to be built around these unit tests is confusing for users. So the documentation could switch to a different demo-style writeup. |
Let's keep the unit-test-case documentation too; you can move it somewhere else. The documentation can switch to a demo-style writeup. |
@luotao1 OK, I'll polish the wording of this document first. |
Coverage decreased (-1.08%) to 61.728% when pulling a93c01a41ce0cb4aaf95414750fb046ec47a76e6 on reyoung:feature/refine_doc_drnn into d0a908d on PaddlePaddle:develop. |
Force-pushed a93c01a to b3dd2d1.
Coverage decreased (-1.2%) to 61.72% when pulling 6917b503d6a68b14b7387969610e0074e9346b69 on reyoung:feature/refine_doc_drnn into 167c397 on PaddlePaddle:develop. |
Force-pushed 6917b50 to a146fcf.
Changes Unknown when pulling a146fcf on reyoung:feature/refine_doc_drnn into PaddlePaddle:develop. |
Changes Unknown when pulling 4fcf01a on reyoung:feature/refine_doc_drnn into PaddlePaddle:develop. |
@luotao1 @qingqing01 Please review this PR, it is basically done. Thanks. |
Comparison of single-layer and two-level RNN APIs
#################################################

This tutorial mainly introduces the API of the \ :ref:`glossary_双层RNN`\ . Taking PaddlePaddle's \ :ref:`glossary_双层RNN`\ unit tests as examples, it explains how to use a \ :ref:`glossary_双层RNN`\ through several pairs of models that produce exactly the same results but are configured with a single-layer and a two-level RNN respectively. All examples in this article only demonstrate the \ :ref:`glossary_双层RNN`\ API; they do not use a \ :ref:`glossary_双层RNN`\ to solve real problems. For how a \ :ref:`glossary_双层RNN`\ is used on concrete problems, see \ :ref:`algo_hrnn_demo`\ . The unit-test file used by the examples in this article is \ `test_RecurrentGradientMachine.cpp <https://github.com/reyoung/Paddle/blob/develop/paddle/gserver/tests/test_RecurrentGradientMachine.cpp>`_\ .
- Suggested: '本文以PaddlePaddle的\ :ref:`glossary_双层RNN`\ 单元测试为示例,用多对效果完全相同的、分别使用单双层RNN作为网络配置的模型...' (drop the extra 中的).
- '文章中示例所使用的单元测试' → '本文示例所使用的单元测试': since this paragraph uses 本文 throughout, keep it consistent.
done.
The classic case for a \ :ref:`glossary_双层RNN`\ is to run a sequence operation separately over each inner \ :ref:`glossary_sequence`\ , with the inner sequence operations independent of one another, i.e. no \ :ref:`glossary_Memory`\ is needed.
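The independent per-subsequence processing described above can be sketched in plain Python. This is a toy illustration, not the PaddlePaddle API: `encode` is a hypothetical stand-in for the LSTM encoder, and the point is only that each subsequence is processed with no state shared between them.

```python
def encode(subseq):
    # toy "encoder": the sum stands in for the last LSTM state of a subsequence
    return sum(subseq)

def outer_step(nested_sample):
    # apply the inner sequence operation to every subsequence independently;
    # no memory is carried from one subsequence to the next
    return [encode(s) for s in nested_sample]

print(outer_step([[4, 5, 2], [0, 9], [8, 1, 4]]))  # [11, 9, 13]
```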
In this problem, both the single-layer \ :ref:`glossary_RNN`\ and the \ :ref:`glossary_双层RNN`\ network configs compress each word-segmented sentence into a vector, using an LSTM as the encoder. The difference is that the \ :ref:`glossary_双层RNN`\ uses a two-level sequence model, treating several sentences as a whole and encoding them at the same time; the two are semantically identical. The pair of semantically equivalent example configs is as follows
'在本问题中' (in this problem) → '本示例中' (in this example)
done.
First, the raw data used in this example is as follows:
- The raw data in this example has 10 samples in total. Each sample consists of two parts: a label (always 2 here) and an already word-segmented sentence. This data is also used directly by the single-layer \ :ref:`glossary_RNN`\ network.
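The sample format described above (a label plus a word-segmented sentence) can be illustrated with a tiny parser. This is a hypothetical sketch: the tab separator and the layout are assumptions, and the real test data file may differ.

```python
def parse_line(line):
    # assumed layout: "<label>\t<space-separated words>"
    label, sentence = line.split("\t")
    return int(label), sentence.split(" ")

label, words = parse_line("2\t这 是 一个 例子")
print(label, words)
```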
Typo: '本里中' → '本例中' (in this example)
done.
return encoder, last
_, sentence_last_state1 = inner_step(ipt=x1)
encoder2, _ = inner_step(ipt=x2)
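A minimal sketch of the pattern in the snippet above: a step function that returns both the full encoded sequence and its last state, so callers can unpack whichever part they need. This is a toy stand-in, not the actual network config; the per-element "encoding" is invented for illustration.

```python
def inner_step(ipt):
    # toy per-element "encoding"; a real config would run an LSTM here
    encoder = [x + 1 for x in ipt]
    last = encoder[-1]  # the last state of the toy encoder
    return encoder, last

# callers keep only what they need, as in the quoted snippet
_, sentence_last_state1 = inner_step(ipt=[4, 5, 2])
encoder2, _ = inner_step(ipt=[0, 9])
print(sentence_last_state1, encoder2)  # 3 [1, 10]
```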
Could the `_` in `_, sentence_last_state1` and `encoder2, _` be given real names, for readability?
done.
Let's do this in a separate PR. |
Comparison of single-layer and two-level RNN APIs
#################################################

This tutorial mainly introduces the API of the \ :ref:`glossary_双层RNN`\ . This article takes PaddlePaddle's \ :ref:`glossary_双层RNN`\ unit tests as examples and explains how to use a \ :ref:`glossary_双层RNN`\ through several pairs of models that produce exactly the same results but are configured with single-layer and two-level RNNs respectively. All examples in this article only demonstrate the \ :ref:`glossary_双层RNN`\ API; they do not use a \ :ref:`glossary_双层RNN`\ to solve real problems. For how a \ :ref:`glossary_双层RNN`\ is used on concrete problems, see \ :ref:`algo_hrnn_demo`\ . The unit-test file used by the examples in this article is \ `test_RecurrentGradientMachine.cpp <https://github.com/reyoung/Paddle/blob/develop/paddle/gserver/tests/test_RecurrentGradientMachine.cpp>`_\ .
- \ :ref:`glossary_双层RNN`\ only needs to appear once at the beginning; after that just write '双层RNN' as plain text. Same below.
- The test_RecurrentGradientMachine.cpp link currently points to the develop branch, but the docs are not pinned to develop. Other branches may differ here, so can we leave the link out?
- The opening sentence '这篇教程主要介绍了双层RNN的API接口。' can be removed.
1. No. The document may be read non-linearly, so a link should be given every time this keyword appears.
2. No. Linking to code on the develop branch should not cause any serious problem, but a link must be given; otherwise it is a hassle for users to search for the file themselves.
3. What harm does keeping that extra sentence do?
- If a link is required every time a keyword appears, there will be far too many keywords; no other document does this. For non-linear reading, one link per subsection is enough. If everything needs a link, then LSTM and encoder would need links too, which is too many links.
- Since that sentence does not connect smoothly with what follows, if it is not removed it should at least be reworded.
Example 1: two-level RNN, no Memory between subsequences
========================================================
The classic case for a \ :ref:`glossary_双层RNN`\ is to run a sequence operation separately over each inner \ :ref:`glossary_sequence`\ , with the inner sequence operations independent of one another, i.e. no \ :ref:`glossary_Memory`\ is needed.
Suggested: 'The classic case of a two-level RNN is: run a sequence operation on each inner \ :ref:`glossary_sequence`\ separately; the inner sequence operations are independent with no dependencies, i.e. no \ :ref:`glossary_Memory`\ is needed.'
In this example, both the single-layer \ :ref:`glossary_RNN`\ and the \ :ref:`glossary_双层RNN`\ network configs compress each word-segmented sentence into a vector, using an LSTM as the encoder. The difference is that the \ :ref:`glossary_双层RNN`\ uses a two-level sequence model, treating several sentences as a whole and encoding them at the same time; the two are semantically identical. The pair of semantically equivalent example configs is as follows
- \ :ref:`glossary_RNN`\ likewise only needs to appear once at the beginning; after that just write '单层RNN' as plain text. Same below.
- Suggested: '区别是双层RNN使用两层序列模型,将多句话看成一个整体。这组语义相同的示例配置如下:' (the difference is that the two-level RNN uses a two-level sequence model, treating several sentences as a whole; the semantically identical example configs follow).
1. No.
2. What would 'treating them as a whole' be for, then? Keeping 'encoding them with the encoder at the same time' should do no harm.
- The original already said 'treat several sentences as a whole' (将多句话看成一个整体).
:language: text
- The two-level sequence data has 4 samples in total. Samples are separated by blank lines, and the data as a whole is exactly the same as the raw data. For the two-level-sequence LSTM, the first sample encodes two pieces of data into two vectors at the same time. The numbers of sentences the four samples process together are \ :code:`[2, 3, 2, 3]`\ .
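The relationship between the raw and nested data described above can be checked in plain Python: the 10 single-layer samples regroup into 4 two-level samples whose per-sample sentence counts are [2, 3, 2, 3] (2 + 3 + 2 + 3 = 10), while the overall data is unchanged. The `group` helper and the placeholder sentences are hypothetical, for illustration only.

```python
flat_sentences = [f"sentence_{i}" for i in range(10)]  # stand-ins for real text

def group(flat, counts):
    # regroup a flat list of sentences into nested samples of the given sizes
    nested, pos = [], 0
    for n in counts:
        nested.append(flat[pos:pos + n])
        pos += n
    return nested

nested = group(flat_sentences, [2, 3, 2, 3])
assert len(nested) == 4
assert [len(s) for s in nested] == [2, 3, 2, 3]
assert sum(nested, []) == flat_sentences  # the overall data is unchanged
```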
- Should this read '对于双层序列的LSTM来说,第一个样本同时encode两条数据成两个向量' ('但于' → '对于', '第一条数据' → '第一个样本')?
- What does 'the sentences these four samples process together are \ :code:`[2, 3, 2, 3]`\ ' mean?
.. literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg.nest
   :language: text
Next, for the two different input data types, the corresponding \ :ref:`glossary_DataProvider`\ implementations are compared below (`sequenceGen.py <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/gserver/tests/sequenceGen.py>`_):
The link currently points to the develop branch, but the docs are not necessarily pinned to develop. Other branches may differ here, so can we leave the link out?
No.
:emphasize-lines: 9-15
Next, let's look at the semantically equivalent \ :ref:`glossary_双层RNN`\ network config.
Line 73: change the period at the end to a colon.
* Many layers in PaddlePaddle do not care whether their input is a \ :ref:`glossary_sequence`\ , for example \ :code:`embedding_layer`\ . In these layers, every operation is applied per \ :ref:`glossary_timestep`\ .
Suggested: 'Many layers in PaddlePaddle do not care whether their input is a time sequence, e.g. `embedding_layer` etc.; these layers operate uniformly on all time steps.' Also, the `embedding_layer` style renders correctly in markdown, while the \ :code:`embedding_layer`\ style does not; consider switching. Same below.
No need to switch. We are using rst, and the inline-code syntax rst supports is :code:
* In this config, lines 7-26 first transform the two-level \ :ref:`glossary_sequence`\ data into single-level \ :ref:`glossary_sequence`\ data, then process each single-level \ :ref:`glossary_sequence`\ .
Suggested: 'The highlighted part of this config first transforms the two-level time-sequence data into a single level, then processes each single-level time sequence.'
* In this example, each group of the raw data is unpacked via \ :code:`recurrent_group`\ , and each unpacked sentence is then passed through an LSTM network. This is equivalent to the single-layer \ :ref:`glossary_RNN`\ config.
* As in the single-layer \ :ref:`glossary_RNN`\ config, we only need the last vector produced by the LSTM encoder, so we apply \ :code:`last_seq`\ to the \ :code:`recurrent_group`\ . Unlike the single-layer \ :ref:`glossary_RNN`\ , however, we take the last element of each subsequence, so we set \ :code:`agg_level=AggregateLevel.EACH_SEQUENCE`\ .
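The aggregation difference described above can be sketched in plain Python. This is not the PaddlePaddle `last_seq` implementation, only an illustration of its semantics: on a flat sequence it keeps the final element, while with `agg_level=AggregateLevel.EACH_SEQUENCE` it keeps the final element of every subsequence.

```python
def last_seq_flat(seq):
    # single-layer case: keep the final element of the whole sequence
    return seq[-1]

def last_seq_each_sequence(nested):
    # two-level case: keep the final element of every subsequence
    return [sub[-1] for sub in nested]

print(last_seq_flat([4, 5, 2, 0, 9]))                       # 9
print(last_seq_each_sequence([[4, 5, 2], [0, 9], [8, 1, 4]]))  # [2, 9, 4]
```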
Suggested: 'We only need the last vector the LSTM compresses to, so we apply \ :code:`last_seq`\ to the \ :code:`recurrent_group`\ ; but unlike the single-layer RNN, we take the last element of every subsequence, so we set \ :code:`agg_level=AggregateLevel.EACH_SEQUENCE`\ .'
* At this point, \ :code:`lstm_last`\ has the same result as the \ :code:`lstm_last`\ in the single-layer \ :ref:`glossary_RNN`\ config.
Suggested: 'At this point, `lstm_last` has the same result as the `lstm_last` in the single-layer RNN config.'
Example 2: \ :ref:`glossary_双层RNN`\ , with \ :ref:`glossary_Memory`\ between subsequences
==========================================================================================
In this example, we intend to implement two exactly equivalent fully connected \ :ref:`glossary_RNN`\ s, with a single-layer \ :ref:`glossary_RNN`\ and a \ :ref:`glossary_双层RNN`\ . For the single-layer \ :ref:`glossary_RNN`\ , the input data is a complete \ :ref:`glossary_sequence`\ , e.g. \ :code:`[4, 5, 2, 0, 9, 8, 1, 4]`\ . For the \ :ref:`glossary_双层RNN`\ , the input data is the single-layer data arbitrarily regrouped into a two-level \ :ref:`glossary_sequence`\ , e.g. \ :code:`[[4, 5, 2], [0, 9], [8, 1, 4]]`\ .
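The claimed equivalence can be sanity-checked with a toy recurrent step in plain Python (a hypothetical stand-in for the fully connected RNN, not the real network): running it over the flat sequence, or over the nested subsequences while carrying the state across subsequence boundaries (the role of the Memory), gives the same final state.

```python
def step(state, x):
    # stand-in for a fully connected recurrent step
    return 0.5 * state + x

def run_flat(seq, state=0.0):
    for x in seq:
        state = step(state, x)
    return state

def run_nested(nested, state=0.0):
    for sub in nested:      # outer loop over subsequences
        for x in sub:       # inner loop; `state` is the memory carried across
            state = step(state, x)
    return state

flat = [4, 5, 2, 0, 9, 8, 1, 4]
nested = [[4, 5, 2], [0, 9], [8, 1, 4]]
assert run_flat(flat) == run_nested(nested)  # identical final state
```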
Example 2 implements two exactly equivalent fully connected RNNs, using a single-layer RNN and a two-level RNN respectively.
- For the single-layer RNN, the input is a complete time sequence, e.g. [4, 5, 2, 0, 9, 8, 1, 4].
- For the two-level RNN, the single-layer data is regrouped, in order, into a two-level time sequence of arbitrary lengths, e.g. [[4, 5, 2], [0, 9], [8, 1, 4]].
Model configuration in \ :ref:`glossary_trainer_config`\ 
\ :ref:`glossary_trainer_config`\ : this entry is not in the glossary.
It is in use_concept.rst.
We pick out the differing parts of the single- and two-level sequence configs to analyze why the two are semantically identical.
- Single-layer sequence: it goes through a very simple recurrent_group. At each time step, the current input y and the previous time step's output rnn_state pass through a fully connected layer.
Suggested (with the typo fix '全链接' → '全连接', fully connected): 'Single-layer RNN: it goes through a very simple recurrent_group. Within each time step, the current input y and the previous time step's output rnn_state pass through a fully connected layer.'
:lines: 39-66
.. warning::
   PaddlePaddle currently only supports the case where, within each time step, the Memory's sequence lengths are all identical.
Suggested: 'PaddlePaddle currently only supports the case where, in every time step, the memory's sequence lengths are all identical.'
<style> .red {color:red} </style>
**Unequal-length inputs** means that the multiple input sequences of a recurrent_group may have subsequences of different lengths at each \ :ref:`glossary_timestep`\ . However, the \ :ref:`glossary_双层RNN`\ currently requires the overall output to be consistent with the sequence information of one particular input. \ :red:`targetInlink`\ can be used to specify which input the output sequence information follows.
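A plain-Python sketch of the idea described above, under the assumption that the output's sequence structure follows one designated input; `output_structure` and the `target_inlink` parameter are hypothetical names for illustration, not the PaddlePaddle API.

```python
def output_structure(inputs, target_inlink=0):
    # the output carries the subsequence lengths of the designated input;
    # by default the first input is the target inlink
    return [len(sub) for sub in inputs[target_inlink]]

in1 = [[1, 2, 4, 5, 2], [5, 4, 1, 3, 1]]   # subsequence lengths [5, 5]
in2 = [[0, 2, 2], [1, 5, 4, 2]]            # subsequence lengths [3, 4]
print(output_structure([in1, in2]))                   # [5, 5]
print(output_structure([in1, in2], target_inlink=1))  # [3, 4]
```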
Suggested: 'When outputting a sequence, you need to specify which input's sequence information it is consistent with. The \ :red:`targetInlink`\ keyword specifies which input; it defaults to the first input.' Also, since single-layer inputs can be of unequal length too, the sentence 'the \ :ref:`glossary_双层RNN`\ currently requires the overall output to be consistent with one input's sequence information' does not cover the single-layer case.
The reference configs for this example are the \ `single-layer unequal-length RNN <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.conf>`_\ and the \ `two-level unequal-length RNN <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.conf>`_\ .
Say 'the reference configs for Example 3 are …'.
In this example the data is exactly the same for the single-layer \ :ref:`glossary_RNN`\ and the \ :ref:`glossary_双层RNN`\ . The single-layer \ :ref:`glossary_RNN`\ data has two samples in total, namely \ :code:`[1, 2, 4, 5, 2], [5, 4, 1, 3, 1]`\ and \ :code:`[0, 2, 2, 5, 0, 1, 2], [1, 5, 4, 2, 3, 6, 1]`\ ; each single-layer sample has two features. On top of the single-layer data, the \ :ref:`glossary_双层RNN`\ data inserts some arbitrary breaks, e.g. turning the first piece of data into \ :code:`[[0, 2], [2, 5], [0, 1, 2]],[[1, 5], [4], [2, 3, 6, 1]]`\ . Note, however, that Paddle currently only supports multi-input \ :ref:`glossary_双层RNN`\ s whose inputs have the same number of subsequences: here both features have three subsequences each. The subsequences may differ in length, but their number must match.
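The constraint described above (same number of subsequences per input, with lengths free to differ) can be sketched as a small validation helper. The helper is hypothetical, for illustration only, not part of any PaddlePaddle API.

```python
def check_multi_input(*nested_inputs):
    # every input of a multi-input two-level RNN must have the same number
    # of subsequences; the subsequence lengths themselves may differ
    counts = [len(inp) for inp in nested_inputs]
    if len(set(counts)) != 1:
        raise ValueError(f"subsequence counts differ: {counts}")
    return counts[0]

feat_a = [[0, 2], [2, 5], [0, 1, 2]]    # 3 subsequences
feat_b = [[1, 5], [4], [2, 3, 6, 1]]    # 3 subsequences, different lengths
print(check_multi_input(feat_a, feat_b))  # 3
```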
In Example 3 the data is exactly the same for the single-layer RNN and the two-level RNN:
- The single-layer RNN data has two samples, [1, 2, 4, 5, 2], [5, 4, 1, 3, 1] and [0, 2, 2, 5, 0, 1, 2], [1, 5, 4, 2, 3, 6, 1], i.e. each sample has two features.
- The two-level RNN data is the single-layer data with some arbitrary breaks inserted, e.g. the second sample becomes [[0, 2], [2, 5], [0, 1, 2]],[[1, 5], [4], [2, 3, 6, 1]], i.e. each feature contains three subsequences.
- Note: across the different inputs of a multi-input two-level RNN, the subsequence lengths may differ, but the number of subsequences must match.
Model configuration in \ :ref:`glossary_trainer_config`\ 
---------------------------------------------------------
The config in this example uses a single-layer \ :ref:`glossary_RNN`\ and a \ :ref:`glossary_双层RNN`\ , each with one \ :code:`recurrent_group`\ , to run the two sequences through a fully connected \ :ref:`glossary_RNN`\ at the same time. The single-layer \ :ref:`glossary_RNN`\ code is as follows.
Like Example 2, Example 3 also implements two exactly equivalent fully connected RNNs, using a single-layer RNN and a two-level RNN respectively:
- Single-layer RNN: ….
- Two-level RNN: ….
Use the same list format as in Example 2, and split line 154 across the two lists. Also, a space is needed after \ :code:`emb2`\ ; it does not render correctly.
:language: python
:lines: 36-48
- Two-level sequence, where the outer memory is a single element:
Say 'two-level RNN' (双层RNN).
TBD
Glossary
The glossary can wait for the next PR; we'll revise it all in one go then.
@luotao1 Thank you for so many comments. I will update this PR soon, but it could be delayed because many other things need to be done. When I finish updating this PR, I will notify you with another comment. |
@luotao1 Followed the comments; all the folded review items have been fixed. |
* On top of the single-layer data, the \ :ref:`glossary_双层RNN`\ data inserts some arbitrary breaks, e.g. turning the first piece of data into \ :code:`[[0, 2], [2, 5], [0, 1, 2]],[[1, 5], [4], [2, 3, 6, 1]]`\ .
* Note that Paddle currently only supports multi-input \ :ref:`glossary_双层RNN`\ s whose inputs have the same number of subsequences. For example, both features in this example have three subsequences each. The subsequences may differ in length, but their number must match.
Use 'PaddlePaddle' (not 'Paddle'). Typo: '本里' → '本例'.
------------------------------------------
Like the config in Example 2, the Example 3 config uses a single-layer \ :ref:`glossary_RNN`\ and a \ :ref:`glossary_双层RNN`\ to implement two exactly equivalent fully connected \ :ref:`glossary_RNN`\ s.
Typo: '累死' → '类似' (similar).
@luotao1 Everything is done. Please review this PR. |
Comparison of single-layer and two-level RNN APIs
#################################################

This article takes PaddlePaddle's two-level RNN unit tests as examples and uses several pairs of models, which produce exactly the same results but are configured with single-layer and two-level RNNs respectively, to explain how to use a two-level RNN. All examples in this article only demonstrate the two-level RNN API; they do not use a two-level RNN to solve real problems. For how a two-level RNN is used on concrete problems, see \ :ref:`algo_hrnn_demo`\ . The unit-test file used by the examples in this article is \ `test_RecurrentGradientMachine.cpp <https://github.com/reyoung/Paddle/blob/develop/paddle/gserver/tests/test_RecurrentGradientMachine.cpp>`_\ .
This is a cross-document link, not a glossary entry.
* `Recurrent Group tutorial <algorithm/rnn/rnn-tutorial.html>`_
* `Single-layer RNN example <../doc/algorithm/rnn/rnn.html>`_
* `Two-level RNN example <algorithm/rnn/hierarchical-rnn.html>`_
* :ref:`algo_hrnn_rnn_api_compare`
This is a cross-document link, not a glossary entry.
Write an overall explanation introducing the general idea of the two-level RNN, or point to an introductory document (per point 3 we decided to create a new document for this, so it will be done then). Also explain in detail what each demo is doing (likewise deferred to the new document).