4 changes: 3 additions & 1 deletion doc/api/data_provider/pydataprovider2_en.rst
@@ -1,4 +1,4 @@
-.. _api_pydataprovider:
+.. _api_pydataprovider2_en:

PyDataProvider2
===============
@@ -104,6 +104,8 @@ And PaddlePaddle will do all of the rest things\:

Is this cool?

+.. _api_pydataprovider2_en_sequential_model:

DataProvider for the sequential model
-------------------------------------
A sequence model takes sequences as its input. A sequence is made up of several
4 changes: 2 additions & 2 deletions doc/api/predict/swig_py_paddle_en.rst
@@ -23,7 +23,7 @@ python's :code:`help()` function. Let's walk through the above python script:

* At the beginning, use :code:`swig_paddle.initPaddle()` to initialize
PaddlePaddle with command line arguments; for more about command line arguments,
-see `Command Line Arguments <../cmd_argument/detail_introduction.html>`_.
+see :ref:`cmd_detail_introduction_en` .
* Parse the configuration file that is used in training with :code:`parse_config()`.
Because the data to predict usually has no label, and the output of prediction
is normally the output layer rather than the cost layer, you should modify
Expand All @@ -36,7 +36,7 @@ python's :code:`help()` function. Let's walk through the above python script:
- Note: As swig_paddle can only accept C++ matrices, we offer a utility
class DataProviderConverter that accepts the same input data as
PyDataProvider2; for more information, please refer to the document
-of `PyDataProvider2 <../data_provider/pydataprovider2.html>`_.
+of :ref:`api_pydataprovider2_en` .
* Do the prediction with :code:`forwardTest()`, which takes the converted
input data and outputs the activations of the output layer.
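
Taken together, the steps above form a short script. The following is a minimal sketch of that workflow, with hedged assumptions: the config path, parameter path, and the 784-dimensional dense input slot are placeholders, not the demo's exact values.

```python
from py_paddle import swig_paddle, DataProviderConverter
from paddle.trainer.PyDataProvider2 import dense_vector
from paddle.trainer.config_parser import parse_config

# Initialize PaddlePaddle with command line arguments (CPU mode here).
swig_paddle.initPaddle("--use_gpu=0")

# Parse the training config; is_predict=1 is assumed to switch the network
# to prediction mode (output layer instead of cost layer).
conf = parse_config("trainer_config.py", "is_predict=1")
network = swig_paddle.GradientMachine.createFromConfigProto(conf.model_config)
network.loadParameters("./output/pass-00000/")

# Convert Python input data to C++ matrices, mirroring PyDataProvider2.
converter = DataProviderConverter([dense_vector(784)])
in_args = converter([[[0.0] * 784]])  # one sample with a single dense slot

# Forward pass; returns the activations of the output layer.
outputs = network.forwardTest(in_args)
print(outputs[0]["value"])
```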

2 changes: 2 additions & 0 deletions doc/api/trainer_config_helpers/layers.rst
@@ -1,3 +1,5 @@
+.. _api_trainer_config_helpers_layers:

======
Layers
======
8 changes: 0 additions & 8 deletions doc/getstarted/basic_usage/index_en.rst
@@ -99,11 +99,3 @@ In PaddlePaddle, training is just to get a collection of model parameters, which
Although it starts from a random guess, you can see that the value of ``w`` quickly moves towards 2 and ``b`` towards 0.3. In the end, the predicted line is almost identical to the real answer.

Thus, you have recovered the underlying pattern between ``X`` and ``Y`` from the observed data alone.
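
As a quick numeric check of that claim (using the approximate convergence values named above):

```python
# With the learned parameters w ~ 2 and b ~ 0.3, predictions trace the
# underlying line Y = 2X + 0.3 that generated the observations.
w, b = 2.0, 0.3
for x in (0.0, 0.5, 1.0):
    print(x, w * x + b)
```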


-5. Where to Go from Here
--------------------------
-
-- `Install and Build <../build_and_install/index.html>`_
-- `Tutorials <../demo/quick_start/index_en.html>`_
-- `Example and Demo <../demo/index.html>`_
4 changes: 4 additions & 0 deletions doc/howto/cmd_parameter/detail_introduction_en.md
@@ -1,3 +1,7 @@
+```eval_rst
+.. _cmd_detail_introduction_en:
+```

# Detail Description

## Common
6 changes: 3 additions & 3 deletions doc/howto/deep_model/rnn/rnn_en.rst
@@ -30,7 +30,7 @@ Then at the :code:`process` function, each :code:`yield` function will return th
yield src_ids, trg_ids, trg_ids_next


-For a more detailed description of how to write a data provider, please refer to `PyDataProvider2 <../../ui/data_provider/index.html>`_. The full data provider file is located at :code:`demo/seqToseq/dataprovider.py`.
+For a more detailed description of how to write a data provider, please refer to :ref:`api_pydataprovider2_en` . The full data provider file is located at :code:`demo/seqToseq/dataprovider.py`.
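
For orientation, here is a minimal sketch of a sequence-to-sequence data provider of the shape described above. It is an assumption-laden outline, not :code:`demo/seqToseq/dataprovider.py` itself; in particular, the :code:`hook` signature, the dictionary arguments, and the :code:`parse_line` helper are hypothetical.

```python
from paddle.trainer.PyDataProvider2 import provider, integer_value_sequence

def hook(settings, src_dict, trg_dict, **kwargs):
    # Declare three integer sequences: source words, target words,
    # and the target words shifted by one (the next-word labels).
    settings.input_types = [
        integer_value_sequence(len(src_dict)),   # src_ids
        integer_value_sequence(len(trg_dict)),   # trg_ids
        integer_value_sequence(len(trg_dict)),   # trg_ids_next
    ]

@provider(init_hook=hook)
def process(settings, file_name):
    with open(file_name) as f:
        for line in f:
            # parse_line is a hypothetical helper mapping a tab-separated
            # sentence pair to word-id lists.
            src_ids, trg_ids, trg_ids_next = parse_line(line)
            yield src_ids, trg_ids, trg_ids_next
```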

===============================================
Configure Recurrent Neural Network Architecture
@@ -106,7 +106,7 @@ We will use the sequence to sequence model with attention as an example to demon

In this model, the source sequence :math:`S = \{s_1, \dots, s_T\}` is encoded with a bidirectional gated recurrent neural network. The hidden states of the bidirectional gated recurrent neural network :math:`H_S = \{H_1, \dots, H_T\}` are called the *encoder vector*. The decoder is a gated recurrent neural network. When decoding each token :math:`y_t`, it generates a set of weights :math:`W_S^t = \{W_1^t, \dots, W_T^t\}`, which are used to compute a weighted sum of the encoder vector. This weighted sum is then used to condition the generation of the token :math:`y_t`.

-The encoder part of the model is listed below. It calls :code:`grumemory` to represent the gated recurrent neural network. This is the recommended way of using a recurrent neural network if the network architecture is simple, because it is faster than :code:`recurrent_group`. We have implemented most of the commonly used recurrent neural network architectures; you can refer to `Layers <../../ui/api/trainer_config_helpers/layers_index.html>`_ for more details.
+The encoder part of the model is listed below. It calls :code:`grumemory` to represent the gated recurrent neural network. This is the recommended way of using a recurrent neural network if the network architecture is simple, because it is faster than :code:`recurrent_group`. We have implemented most of the commonly used recurrent neural network architectures; you can refer to :ref:`api_trainer_config_helpers_layers` for more details.
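
The encoder block itself is collapsed in this view. As a rough, hedged sketch of a :code:`grumemory`-based bidirectional encoder (all sizes and layer names below are placeholders; :code:`simple_gru` is a convenience wrapper around :code:`grumemory`):

```python
from paddle.trainer_config_helpers import *

source_dict_dim = 30000   # placeholder vocabulary size
word_vector_dim = 512     # placeholder embedding size
encoder_size = 512        # placeholder hidden size

src_word_id = data_layer(name='source_language_word', size=source_dict_dim)
src_embedding = embedding_layer(input=src_word_id, size=word_vector_dim)

# The reversed copy provides the backward half of the bidirectional encoder.
src_forward = simple_gru(input=src_embedding, size=encoder_size)
src_backward = simple_gru(input=src_embedding, size=encoder_size, reverse=True)
encoded_vector = concat_layer(input=[src_forward, src_backward])
```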

We also project the encoder vector to :code:`decoder_size` dimensional space, get the first instance of the backward recurrent network, and project it to :code:`decoder_size` dimensional space:

@@ -246,6 +246,6 @@ The code is listed below:
outputs(beam_gen)
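
The collapsed block above ends with :code:`outputs(beam_gen)`. A hedged sketch of the beam-search wiring behind it follows; the step function name, the input list, and all sizes are assumptions modeled on the demo's conventions, not the file's exact contents.

```python
# Sketch only: generate target tokens with beam search, feeding back the
# embedding of the previously generated word at each step.
trg_embedding = GeneratedInput(
    size=target_dict_dim,                        # placeholder vocab size
    embedding_name='_target_language_embedding',
    embedding_size=word_vector_dim)

beam_gen = beam_search(
    name='decoder',
    step=gru_decoder_with_attention,             # the decoder step function
    input=[StaticInput(input=encoded_vector, is_seq=True), trg_embedding],
    bos_id=0,                                    # begin-of-sequence token id
    eos_id=1,                                    # end-of-sequence token id
    beam_size=3,
    max_length=250)

outputs(beam_gen)
```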


-Notice that this generation technique is only useful for a decoder-like generation process. If you are working on sequence tagging tasks, please refer to `Semantic Role Labeling Demo <../../demo/semantic_role_labeling/index.html>`_ for more details.
+Notice that this generation technique is only useful for a decoder-like generation process. If you are working on sequence tagging tasks, please refer to :ref:`semantic_role_labeling_en` for more details.

The full configuration file is located at :code:`demo/seqToseq/seqToseq_net.py`.
6 changes: 3 additions & 3 deletions doc/howto/optimization/gpu_profiling_en.rst
@@ -51,7 +51,7 @@ In this tutorial, we will focus on nvprof and nvvp.
:code:`test_GpuProfiler` from the :code:`paddle/math/tests` directory will be used to evaluate
the profilers above.

-.. literalinclude:: ../../paddle/math/tests/test_GpuProfiler.cpp
+.. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp
:language: c++
:lines: 111-124
:linenos:
@@ -77,7 +77,7 @@ As a simple example, consider the following:

1. Add :code:`REGISTER_TIMER_INFO` and :code:`printAllStatus` functions (see the emphasize-lines).

-.. literalinclude:: ../../paddle/math/tests/test_GpuProfiler.cpp
+.. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp
:language: c++
:lines: 111-124
:emphasize-lines: 8-10,13
@@ -124,7 +124,7 @@ To use this command line profiler **nvprof**, you can simply issue the following

1. Add :code:`REGISTER_GPU_PROFILER` function (see the emphasize-lines).

-.. literalinclude:: ../../paddle/math/tests/test_GpuProfiler.cpp
+.. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp
:language: c++
:lines: 111-124
:emphasize-lines: 6-7
2 changes: 1 addition & 1 deletion doc/tutorials/embedding_model/index_en.md
@@ -93,7 +93,7 @@ where `train.sh` is almost the same as `demo/seqToseq/translation/train.sh`, the
- `--init_model_path`: path of the initialization model, here `data/paraphrase_model`
- `--load_missing_parameter_strategy`: what to do when a model file is missing; here a normal distribution is used to initialize all parameters except for the embedding layer

-For users who want to understand the dataset format, model architecture and training procedure in detail, please refer to [Text generation Tutorial](../text_generation/text_generation.md).
+For users who want to understand the dataset format, model architecture and training procedure in detail, please refer to [Text generation Tutorial](../text_generation/index_en.md).

## Optional Function ##
### Embedding Parameters Observation
2 changes: 1 addition & 1 deletion doc/tutorials/rec/ml_regression_en.rst
@@ -264,7 +264,7 @@ In this :code:`dataprovider.py`, we should set\:
* use_seq\: Whether this :code:`dataprovider.py` is in sequence mode or not.
* process\: Returns each sample of data to :code:`paddle`.

-For details of the data provider, see :ref:`api_pydataprovider`.
+For details of the data provider, see :ref:`api_pydataprovider2_en`.
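
As a hedged illustration of such a :code:`dataprovider.py` in non-sequence mode (the slot sizes and the comma-separated file layout are assumptions, not the demo's actual code):

```python
from paddle.trainer.PyDataProvider2 import provider, dense_vector

# A single dense feature slot and a one-dimensional regression target;
# the real demo builds several id/sequence slots from movie metadata.
@provider(input_types=[dense_vector(7), dense_vector(1)])
def process(settings, filename):
    with open(filename) as f:
        for line in f:
            fields = line.rstrip('\n').split(',')
            features = [float(v) for v in fields[:-1]]
            rating = [float(fields[-1])]
            yield features, rating
```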

Train
`````
4 changes: 4 additions & 0 deletions doc/tutorials/semantic_role_labeling/index_en.md
@@ -1,3 +1,7 @@
+```eval_rst
+.. _semantic_role_labeling_en:
+```

# Semantic Role Labeling Tutorial #

Semantic role labeling (SRL) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence. SRL is useful as an intermediate step in a wide range of natural language processing tasks, such as information extraction, automatic document categorization, and question answering. An example is as follows [1]:
201 changes: 0 additions & 201 deletions doc/tutorials/semantic_role_labeling/semantic_role_labeling_cn.md

This file was deleted.

3 changes: 1 addition & 2 deletions python/paddle/trainer_config_helpers/data_sources.py
@@ -186,8 +186,7 @@ def define_py_data_sources2(train_list, test_list, module, obj, args=None):
obj="process",
args={"dictionary": dict_name})

-For the related data provider, refer to
-`here <../../data_provider/pydataprovider2.html#dataprovider-for-the-sequential-model>`__.
+For the related data provider, refer to :ref:`api_pydataprovider2_en_sequential_model` .

:param train_list: Train list name.
:type train_list: basestring
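
For reference, a hedged reconstruction of the kind of call whose tail appears in the docstring snippet above; the list file names and the :code:`dict_name` variable are assumptions.

```python
# Sketch of a typical define_py_data_sources2 call; "dataprovider" is the
# module containing the @provider-decorated process function.
define_py_data_sources2(
    train_list="train.list",
    test_list="test.list",
    module="dataprovider",
    obj="process",
    args={"dictionary": dict_name})
```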