6 changes: 0 additions & 6 deletions _typos.toml
@@ -60,13 +60,7 @@ cantains = "cantains"
classfy = "classfy"
cliping = "cliping"
colunms = "colunms"
-commmit = "commmit"
-complie = "complie"
-condtional = "condtional"
-conjuction = "conjuction"
containg = "containg"
-contruct = "contruct"
-contructed = "contructed"
contruction = "contruction"
contxt = "contxt"
convertion = "convertion"
2 changes: 1 addition & 1 deletion docs/design/concurrent/go_op.md
@@ -3,7 +3,7 @@
## Introduction

The **go_op** allows user's of PaddlePaddle to run program blocks on a detached
-thread. It works in conjuction with CSP operators (channel_send,
+thread. It works in conjunction with CSP operators (channel_send,
channel_receive, channel_open, channel_close, and select) to allow users to
concurrently process data and communicate easily between different threads.

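The channel-based workflow that the go_op section above describes can be sketched with plain Python threads and queues. This is an analogy only: `go_op`, `channel_open`, `channel_send`, and `channel_receive` are Paddle operators, and the names below are ordinary Python stand-ins, not the Paddle API.

```python
# Illustrative analogy only: go_op and the channel_* operators are mimicked
# here with Python threads and a queue; this is NOT the actual Paddle API.
import threading
import queue

ch = queue.Queue(maxsize=1)        # rough analogue of channel_open

def worker():
    # runs on a detached thread, like a block launched by go_op
    ch.put(42)                     # rough analogue of channel_send

t = threading.Thread(target=worker)
t.start()
value = ch.get()                   # rough analogue of channel_receive
t.join()
print(value)  # 42
```

The queue gives the same rendezvous-style communication that CSP channels provide: the main thread blocks on `get` until the detached thread has sent a value.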
2 changes: 1 addition & 1 deletion docs/design/memory/memory_optimization.md
@@ -99,7 +99,7 @@ At last, we take basic strategy and liveness analysis techniques learning from c
In-place is a built-in attribute of an operator. Since we treat in-place and other operators differently, we have to add an in-place attribute for every operator.


-#### contruct control flow graph
+#### construct control flow graph

Following is the ProgramDesc protobuf of [machine translation](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/tests/book/test_machine_translation.py) example.

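The liveness analysis that the memory-optimization section above refers to can be sketched on a toy straight-line op list (a hypothetical example, not the real ProgramDesc): a variable that is no longer live after an op is a candidate for buffer reuse.

```python
# Toy backward liveness pass: live_after[i] is the set of variables that are
# still needed after op i runs; anything defined earlier but absent from
# live_after[i] is dead and its memory could be reused.
def liveness(ops, program_outputs):
    live = set(program_outputs)
    live_after = [set()] * len(ops)
    for i in range(len(ops) - 1, -1, -1):
        live_after[i] = set(live)
        _, inputs, outputs = ops[i]
        live -= set(outputs)   # defined here, so not live before this op
        live |= set(inputs)    # read here, so live before this op
    return live_after

ops = [("mul",  ["x", "w"],  ["a"]),   # (op type, inputs, outputs)
       ("relu", ["a"],       ["b"]),
       ("mul",  ["b", "w2"], ["c"])]
la = liveness(ops, ["c"])
# "x" and "w" are dead after op 0, so their buffers are reuse candidates.
print(la)  # [{'a', 'w2'}, {'b', 'w2'}, {'c'}] (set ordering may vary)
```

A real pass would run the same fixed-point computation over the full control flow graph rather than a single basic block.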
2 changes: 1 addition & 1 deletion docs/design/others/gan_api.md
@@ -42,7 +42,7 @@ build the whole GAN model, define training loss for both generator and discrimat
To be more detailed, we introduce our design of DCGAN as following:

### Class member Function: Initializer
-- Set up hyper-parameters, including condtional dimension, noise dimension, batch size and so forth.
+- Set up hyper-parameters, including conditional dimension, noise dimension, batch size and so forth.
- Declare and define all the model variables. All the discriminator parameters are included in the list self.theta_D and all the generator parameters are included in the list self.theta_G.
```python
class DCGAN:
```
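The initializer described in the bullets above can be sketched as follows. Everything except `theta_D` and `theta_G` uses illustrative names and placeholder values, not Paddle's actual GAN API.

```python
# Hypothetical sketch of the DCGAN initializer described above; parameter
# names and placeholder lists are illustrative, not real Paddle tensors.
class DCGAN:
    def __init__(self, y_dim=10, z_dim=100, batch_size=64):
        # hyper-parameters
        self.y_dim = y_dim            # conditional dimension
        self.z_dim = z_dim            # noise dimension
        self.batch_size = batch_size
        # model variables (plain placeholders; real code would create tensors)
        self.D_W, self.D_b = [[0.0] * y_dim], [0.0]
        self.G_W, self.G_b = [[0.0] * z_dim], [0.0]
        # all discriminator parameters in theta_D, all generator ones in theta_G
        self.theta_D = [self.D_W, self.D_b]
        self.theta_G = [self.G_W, self.G_b]

gan = DCGAN(batch_size=32)
print(len(gan.theta_D), gan.batch_size)  # 2 32
```

Collecting the parameters into `theta_D` and `theta_G` lets each optimizer update only its own network during alternating GAN training.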
2 changes: 1 addition & 1 deletion docs/dev_guides/git_guides/local_dev_guide_cn.md
@@ -116,7 +116,7 @@ clang-format.......................................(no files to check)Skipped
create mode 100644 233
```

-As you can see, some additional information is printed after running `git commit`. This is the result of the code style check performed by `pre-commmit`. For questions about using the code style check, see the [Code Style Check Guide](./codestyle_check_guide_cn.html).
+As you can see, some additional information is printed after running `git commit`. This is the result of the code style check performed by `pre-commit`. For questions about using the code style check, see the [Code Style Check Guide](./codestyle_check_guide_cn.html).

## Keep your local repository up to date

4 changes: 2 additions & 2 deletions docs/dev_guides/sugon/index_cn.rst
@@ -4,14 +4,14 @@

The following are the development guides for adapting Paddle to Sugon:

-- `Sugon Intelligent Computing Platform - Compiling Paddle from Source and Running Unit Tests <./complie_and_test_cn.html>`_ : how to compile Paddle from source and run the unit tests on the Sugon intelligent computing platform.
+- `Sugon Intelligent Computing Platform - Compiling Paddle from Source and Running Unit Tests <./compile_and_test_cn.html>`_ : how to compile Paddle from source and run the unit tests on the Sugon intelligent computing platform.
- `Adapting Paddle to the C86 Accelerator in Detail <./paddle_c86_cn.html>`_ : a detailed explanation of how Paddle is adapted to the C86 accelerator.
- `Guide to Fixing ROCm (HIP) Operator Unit Tests in the Paddle Framework <./paddle_c86_fix_guides_cn.html>`_ : guidance on fixing ROCm (HIP) operator unit tests in the Paddle framework.


.. toctree::
:hidden:

-complie_and_test_cn.md
+compile_and_test_cn.md
paddle_c86_cn.md
paddle_c86_fix_guides_cn.md
2 changes: 1 addition & 1 deletion docs/guides/advanced/layer_and_model_en.md
@@ -27,7 +27,7 @@ class Model(paddle.nn.Layer):
return y
```

-Here we contructed a ``Model`` which inherited from ``paddle.nn.Layer``. This model only holds a single layer of ``paddle.nn.Flatten``, which flattens the input variables **inputs** upon execution.
+Here we constructed a ``Model`` which inherited from ``paddle.nn.Layer``. This model only holds a single layer of ``paddle.nn.Flatten``, which flattens the input variables **inputs** upon execution.

## Sublayers

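What the ``Model`` in the layer_and_model_en.md excerpt above computes can be sketched in plain Python (so it runs without paddle installed); the helper name is illustrative, not part of the Paddle API.

```python
# Pure-Python sketch of the ``Model`` above: flatten each sample in a batch
# to 1-D while keeping the batch axis, as paddle.nn.Flatten does by default.
def flatten_batch(batch):
    return [[v for row in sample for v in row] for sample in batch]

batch = [[[1, 2], [3, 4]],    # two 2x2 "images"
         [[5, 6], [7, 8]]]
print(flatten_batch(batch))   # [[1, 2, 3, 4], [5, 6, 7, 8]]
```

Note that the batch dimension survives: two samples in, two flattened samples out.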