
Commit e0a85db

fix dead links, reduce image size
1 parent e0a81dc commit e0a85db

5 files changed: +9, -21 lines


doc/howto/deep_model/rnn/rnn_config_cn.rst

Lines changed: 3 additions & 9 deletions
@@ -33,8 +33,7 @@ PaddlePaddle
        yield src_ids, trg_ids, trg_ids_next

-For a more detailed description of how to write a data provider, please refer to
-`PyDataProvider2 <../../ui/data_provider/index.html>`__\ . The complete data provider file is at
+For a more detailed description of how to write a data provider, please refer to :ref:`api_pydataprovider2` . The complete data provider file is at
``demo/seqToseq/dataprovider.py``

Configuring the Recurrent Neural Network Architecture
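The new :ref:`api_pydataprovider2` reference points at the PyDataProvider2 API that the ``yield src_ids, trg_ids, trg_ids_next`` line above comes from. A minimal sketch of such a provider is shown below; the token-id conventions (0 = `<s>`, 1 = `<e>`, 2 = `<unk>`) and the hook arguments are assumptions modeled on ``demo/seqToseq/dataprovider.py``, not its exact code:

```python
# Hypothetical sketch of a PyDataProvider2 provider that yields the three
# sequences used above. Assumed token ids: 0 = <s>, 1 = <e>, 2 = <unk>.
from paddle.trainer.PyDataProvider2 import provider, integer_value_sequence


def hook(settings, src_dict, trg_dict, **kwargs):
    # src_dict / trg_dict map words to integer ids; their sizes define the inputs.
    settings.src_dict = src_dict
    settings.trg_dict = trg_dict
    settings.input_types = [
        integer_value_sequence(len(src_dict)),  # source word ids
        integer_value_sequence(len(trg_dict)),  # target word ids, shifted right
        integer_value_sequence(len(trg_dict)),  # next target word ids (labels)
    ]


@provider(init_hook=hook, pool_size=50000)
def process(settings, file_name):
    with open(file_name) as f:
        for line in f:
            src, trg = line.strip().split('\t')
            src_ids = [settings.src_dict.get(w, 2) for w in src.split()]
            trg_words = [settings.trg_dict.get(w, 2) for w in trg.split()]
            trg_ids = [0] + trg_words       # decoder input starts with <s>
            trg_ids_next = trg_words + [1]  # labels end with <e>
            yield src_ids, trg_ids, trg_ids_next
```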
@@ -132,9 +131,7 @@ Sequence to Sequence Model with Attention

The encoder part of the model is shown below. It is called ``grumemory`` to denote a gated recurrent neural network. If the network architecture is simple, this recurrent neural network approach is recommended, because compared with
``recurrent_group``
-it is faster. We have implemented most of the commonly used recurrent neural network architectures; refer to
-`Layers <../../ui/api/trainer_config_helpers/layers_index.html>`__
-for more details.
+it is faster. We have implemented most of the commonly used recurrent neural network architectures; refer to :ref:`api_trainer_config_helpers_layers` for more details.

We also project the encoding vector into a space of ``decoder_size``
dimensions. This is done by getting the first instance of the backward recurrent network and projecting it onto
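The projection described in the last two context lines can be written roughly as follows in trainer_config_helpers style; the layer names and all sizes here are hypothetical placeholders rather than the exact contents of ``demo/seqToseq/seqToseq_net.py``:

```python
# Illustrative only: build a reversed GRU over the source sentence, then project
# its first instance into a decoder_size-dimensional space (decoder boot state).
# All sizes below are assumed placeholder values.
from paddle.trainer_config_helpers import *

source_dict_dim = 30000   # assumed source vocabulary size
word_vector_dim = 512     # assumed embedding size
encoder_size = 512        # assumed GRU state size
decoder_size = 512        # assumed decoder space size

src_word_id = data_layer(name='source_language_word', size=source_dict_dim)
src_embedding = embedding_layer(input=src_word_id, size=word_vector_dim)

# Backward (reversed) gated recurrent network over the source sequence.
src_backward = simple_gru(input=src_embedding, size=encoder_size, reverse=True)

# First instance of the backward sequence, projected into the decoder space.
backward_first = first_seq(input=src_backward)
decoder_boot = mixed_layer(
    size=decoder_size,
    act=TanhActivation(),
    input=full_matrix_projection(backward_first))
```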
@@ -276,9 +273,6 @@ attention, the gated recurrent unit step function, and the output function:
        result_file=gen_trans_file)
    outputs(beam_gen)

-Note that this generation technique can only be used for decoder-like generation processes. If you are working on a sequence tagging task, please refer to the
-`Semantic Role Labeling
-Demo <../../demo/semantic_role_labeling/index.html>`__
-for more details.
+Note that this generation technique can only be used for decoder-like generation processes. If you are working on a sequence tagging task, please refer to :ref:`semantic_role_labeling` for more details.

The complete configuration file is at ``demo/seqToseq/seqToseq_net.py``
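For context, ``result_file=gen_trans_file)`` and ``outputs(beam_gen)`` are the tail of a beam-search generation block. A rough sketch of how such a block is typically assembled follows; ``gru_decoder_with_attention``, ``group_inputs``, ``trg_dict_path``, and ``gen_trans_file`` are assumed to be defined earlier in the configuration, and the parameter values are illustrative only:

```python
# Rough sketch of the surrounding generation block (illustrative, not verbatim).
# gru_decoder_with_attention, group_inputs, trg_dict_path and gen_trans_file are
# assumed to be defined earlier in the configuration file.
beam_gen = beam_search(
    name="decoder_group",
    step=gru_decoder_with_attention,  # single-step decoder function
    input=group_inputs,               # encoder projections + generated word embedding
    bos_id=0,                         # assumed start-of-sentence token id
    eos_id=1,                         # assumed end-of-sentence token id
    beam_size=3,
    max_length=250)

# Write the generated translations to gen_trans_file during the generation pass.
seqtext_printer_evaluator(
    input=beam_gen,
    id_input=data_layer(name="sent_id", size=1),
    dict_file=trg_dict_path,
    result_file=gen_trans_file)
outputs(beam_gen)
```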

doc/howto/usage/k8s/k8s_aws_en.md

Lines changed: 3 additions & 3 deletions
@@ -331,15 +331,15 @@ For sharing the training data across all the Kubernetes nodes, we use EFS (Elast
1. Make sure you added AmazonElasticFileSystemFullAccess policy in your group.

1. Create the Elastic File System in AWS console, and attach the new VPC with it.
-<img src="src/create_efs.png" width="800">
+<center>![](src/create_efs.png)</center>

1. Modify the Kubernetes security group under ec2/Security Groups, add additional inbound policy "All TCP TCP 0 - 65535 0.0.0.0/0" for Kubernetes default VPC security group.
-<img src="src/add_security_group.png" width="800">
+<center>![](src/add_security_group.png)</center>

1. Follow the EC2 mount instruction to mount the disk onto all the Kubernetes nodes, we recommend to mount EFS disk onto ~/efs.
-<img src="src/efs_mount.png" width="800">
+<center>![](src/efs_mount.png)</center>

Before starting the training, you should place your user config and divided training data onto EFS. When the training start, each task will copy related files from EFS into container, and it will also write the training results back onto EFS, we will show you how to place the data later in this article.

doc/tutorials/gan/gan.png

-15.1 KB

doc/tutorials/gan/index_en.md

Lines changed: 3 additions & 9 deletions
@@ -4,9 +4,7 @@ This demo implements GAN training described in the original [GAN paper](https://

The high-level structure of GAN is shown in Figure. 1 below. It is composed of two major parts: a generator and a discriminator, both of which are based on neural networks. The generator takes in some kind of noise with a known distribution and transforms it into an image. The discriminator takes in an image and determines whether it is artificially generated by the generator or a real image. So the generator and the discriminator are in a competitive game in which generator is trying to generate image to look as real as possible to fool the discriminator, while the discriminator is trying to distinguish between real and fake images.

-<p align="center">
-    <img src="./gan.png" width="500" height="300">
-</p>
+<center>![](./gan.png)</center>
<p align="center">
    Figure 1. GAN-Model-Structure
    <a href="https://ishmaelbelghazi.github.io/ALI/">figure credit</a>
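The competitive game described above corresponds to the minimax objective from the cited GAN paper, with generator G, discriminator D, data distribution p_data, and noise distribution p_z (stated here only as background for the paragraph above):

```latex
\min_G \max_D \; V(D, G) =
    \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_{z}(z)}\left[\log\left(1 - D(G(z))\right)\right]
```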
@@ -111,9 +109,7 @@ $python gan_trainer.py -d uniform --useGpu 1
```
The generated samples can be found in ./uniform_samples/ and one example is shown below as Figure 2. One can see that it roughly recovers the 2D uniform distribution.

-<p align="center">
-    <img src="./uniform_sample.png" width="300" height="300">
-</p>
+<center>![](./uniform_sample.png)</center>
<p align="center">
    Figure 2. Uniform Sample
</p>
@@ -135,9 +131,7 @@ To train the GAN model on mnist data, one can use the following command:
$python gan_trainer.py -d mnist --useGpu 1
```
The generated sample images can be found at ./mnist_samples/ and one example is shown below as Figure 3.
-<p align="center">
-    <img src="./mnist_sample.png" width="300" height="300">
-</p>
+<center>![](./mnist_sample.png)</center>
<p align="center">
    Figure 3. MNIST Sample
</p>

doc/tutorials/gan/uniform_sample.png

4.17 KB
