
Commit a9e26c1

committed: typos fix 2
1 parent 7582035 commit a9e26c1


19 files changed: +103 -117 lines changed


_typos.toml

Lines changed: 0 additions & 14 deletions
@@ -107,20 +107,6 @@ correspoinding = "correspoinding"
 corss = "corss"
 creatation = "creatation"
 creats = "creats"
-dafault = "dafault"
-datas = "datas"
-decribe = "decribe"
-decribes = "decribes"
-deocder = "deocder"
-desgin = "desgin"
-desginated = "desginated"
-desigin = "desigin"
-determinated = "determinated"
-diffcult = "diffcult"
-dimention = "dimention"
-dimentions = "dimentions"
-dirrectories = "dirrectories"
-disucssion = "disucssion"
 egde = "egde"
 enviornment = "enviornment"
 erros = "erros"
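For context on what these deletions mean: in a config for the `typos` checker, an entry that maps a word to itself whitelists that spelling, so deleting the entries above re-enables checking for those words once the docs no longer contain them. A minimal sketch of such a config (the `[default.extend-words]` table name follows the crate-ci/typos convention; the sample entries are illustrative, not taken from this repository's file):

```toml
# Identity mapping: accept the word as spelled (suppress the warning).
# A non-identity mapping would instead define an auto-correction.
[default.extend-words]
datas = "datas"   # whitelisted: the checker leaves "datas" alone
teh = "the"       # corrected: "teh" is rewritten to "the"
```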

docs/design/concepts/tensor.md

Lines changed: 2 additions & 2 deletions
@@ -116,12 +116,12 @@ Before writing code, please make sure you already look through Majel Source Code
 
 
 ### Memory Management
-`Allocation` manages a block of memory in device(CPU/GPU). We use `Place` to decribe memory location. The details of memory allocation and deallocation are implememted in `Allocator` and `DeAllocator`. Related low-level API such as `hl_malloc_device()` and `hl_malloc_host()` are provided by Paddle.
+`Allocation` manages a block of memory in device(CPU/GPU). We use `Place` to describe memory location. The details of memory allocation and deallocation are implememted in `Allocator` and `DeAllocator`. Related low-level API such as `hl_malloc_device()` and `hl_malloc_host()` are provided by Paddle.
 
 ### Dim and Array
 #### Dim
 
-`Dim` decribes the dimension information of an array.
+`Dim` describes the dimension information of an array.
 
 `DDimVar` is an alias of a specializd class of boost.variant class template.
 
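The `Dim` idea touched by this hunk can be sketched in plain Python (a toy analogue for illustration, not Paddle's C++ `Dim`/`DDimVar`; the class and method names here are made up):

```python
from functools import reduce
import operator

class Dim:
    """Toy analogue of a Dim: an ordered tuple of extents for an array."""
    def __init__(self, *sizes):
        self.sizes = tuple(sizes)

    def product(self):
        # Total number of elements described by these dimensions.
        return reduce(operator.mul, self.sizes, 1)

d = Dim(2, 3, 4)   # describes a 2 x 3 x 4 array
```

In the C++ design, `DDimVar` would then be, roughly, a variant over fixed-rank `Dim<N>` specializations, so the rank is chosen at runtime while each alternative remains a fixed-size type.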
docs/design/data_type/float16.md

Lines changed: 1 addition & 1 deletion
@@ -130,7 +130,7 @@ fp16_tensor.set(tensor.astype(numpy.float16).view(numpy.uint16), GPUPlace)
 ```
 
 ### Consistent API requirement
-The basic inference in float16 mode requires users to feed input and obtain output both of float16 data type. However, in this way, the inference APIs are not consistent between float16 mode and float mode, and users may find it confusing and diffcult to use float16 inference since they need to do extra steps to provide float16 input data and convert float16 output data back to float. To have consistent API for different inference modes, we need to transpile the program desc in some way so that we can run float16 inference by feeding and fetching variables of float data type.
+The basic inference in float16 mode requires users to feed input and obtain output both of float16 data type. However, in this way, the inference APIs are not consistent between float16 mode and float mode, and users may find it confusing and difficult to use float16 inference since they need to do extra steps to provide float16 input data and convert float16 output data back to float. To have consistent API for different inference modes, we need to transpile the program desc in some way so that we can run float16 inference by feeding and fetching variables of float data type.
 
 This problem can be solved by introducing a type-casting operator which takes an input variable of certain data type, cast it to another specified data type, and put the casted data into the output variable. Insert cast operator where needed can make a program internally run in float16 mode.
 
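The type-casting idea in this hunk can be sketched with the half-precision codec in Python's standard `struct` module (format code `"e"` is IEEE 754 binary16); the function names below are hypothetical, not Paddle APIs:

```python
import struct

def cast_to_fp16(x: float) -> bytes:
    # Round to IEEE 754 half precision; keep the raw 2-byte representation.
    return struct.pack("<e", x)

def cast_to_fp32(h: bytes) -> float:
    # Widen the half-precision value back to an ordinary Python float.
    return struct.unpack("<e", h)[0]

# The caller feeds and fetches plain floats; the cast pair hides the
# float16 representation used "inside the program".
y = cast_to_fp32(cast_to_fp16(0.1))   # close to, but not exactly, 0.1
```

This is exactly why the transpiled program stays API-compatible: the precision loss happens between the two casts, invisible to the feeding/fetching code.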
docs/design/motivation/api.md

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ Some essential concepts that our API have to provide include:
 
 As a summarization
 of
-[our disucssion](https://github.com/PaddlePaddle/Paddle/issues/1315),
+[our discussion](https://github.com/PaddlePaddle/Paddle/issues/1315),
 let us present two examples here:
 
 
docs/design/others/graph.md

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ For each parameter, like W and b created by `layer.fc`, marked as double circles
 
 ## Block and Graph
 
-The word block and graph are interchangable in the desgin of PaddlePaddle. A [Block](https://github.com/PaddlePaddle/Paddle/pull/3708) is a metaphore of the code and local variables in a pair of curly braces in programming languages, where operators are like statements or instructions. A graph of operators and variables is a representation of the block.
+The word block and graph are interchangable in the design of PaddlePaddle. A [Block](https://github.com/PaddlePaddle/Paddle/pull/3708) is a metaphore of the code and local variables in a pair of curly braces in programming languages, where operators are like statements or instructions. A graph of operators and variables is a representation of the block.
 
 A Block keeps operators in an array `BlockDesc::ops`
 
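The block-as-code metaphor in this hunk can be sketched as follows (hypothetical Python stand-ins for the C++ descriptors; only the `ops` array mentioned in the text is modeled):

```python
class OpDesc:
    """Stand-in for an operator: behaves like one statement in a block."""
    def __init__(self, op_type):
        self.op_type = op_type

class BlockDesc:
    """Stand-in for a Block: operators kept in execution order,
    like statements inside a pair of curly braces."""
    def __init__(self):
        self.ops = []

    def append_op(self, op_type):
        op = OpDesc(op_type)
        self.ops.append(op)
        return op

block = BlockDesc()
block.append_op("mul")
block.append_op("add")
```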
docs/design/quantization/fixed_point_quantization.md

Lines changed: 1 addition & 1 deletion
@@ -79,7 +79,7 @@ From these formulas, dequantization also can be moved before GEMM, do dequantiza
 Figure 2. Equivalent forward in training with simulated quantization.
 </p>
 
-We use this equivalent workflow in the training. In our desigin, there is a quantization transpiler to insert the quantization operator and the de-quantization operator in the Fluid `ProgramDesc`. Since the outputs of quantization and de-quantization operator are still in floating point, they are called faked quantization and de-quantization operator. And the training framework is called simulated quantization.
+We use this equivalent workflow in the training. In our design, there is a quantization transpiler to insert the quantization operator and the de-quantization operator in the Fluid `ProgramDesc`. Since the outputs of quantization and de-quantization operator are still in floating point, they are called faked quantization and de-quantization operator. And the training framework is called simulated quantization.
 
 #### Backward pass
 
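The faked quantize/de-quantize pair described in this hunk can be sketched for a single value (a toy scalar version with a caller-supplied scale, for illustration only, not the Fluid transpiler):

```python
def fake_quantize(x, scale, num_bits=8):
    """Quantize-dequantize in floating point: the result is still a float,
    but restricted to the values a num_bits integer grid can represent."""
    qmax = (1 << (num_bits - 1)) - 1      # 127 for 8 bits
    q = round(x / scale * qmax)           # quantize to an integer step
    q = max(-qmax, min(qmax, q))          # saturate to the representable range
    return q * scale / qmax               # de-quantize back to float

out = fake_quantize(0.5, scale=1.0)      # a float near 0.5, on the int8 grid
```

Because the output stays floating point, the surrounding ops run unchanged during training, which is what makes the workflow "simulated" quantization.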
docs/dev_guides/custom_device_docs/custom_device_example_cn.md

Lines changed: 1 addition & 1 deletion
@@ -285,7 +285,7 @@ add_custom_command(TARGET ${PLUGIN_NAME} POST_BUILD
 COMMAND ${CMAKE_COMMAND} -E make_directory ${CMAKE_CURRENT_BINARY_DIR}/python/
 COMMAND ${CMAKE_COMMAND} -E make_directory ${CMAKE_CURRENT_BINARY_DIR}/python/paddle-plugins/
 COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_CURRENT_BINARY_DIR}/lib${PLUGIN_NAME}.so ${CMAKE_CURRENT_BINARY_DIR}/python/paddle-plugins/
-COMMENT "Creating plugin dirrectories------>>>"
+COMMENT "Creating plugin directories------>>>"
 )
 
 add_custom_command(OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/python/.timestamp

docs/dev_guides/custom_device_docs/custom_device_example_en.md

Lines changed: 1 addition & 1 deletion
@@ -281,7 +281,7 @@ add_custom_command(TARGET ${PLUGIN_NAME} POST_BUILD
 COMMAND ${CMAKE_COMMAND} -E make_directory ${CMAKE_CURRENT_BINARY_DIR}/python/
 COMMAND ${CMAKE_COMMAND} -E make_directory ${CMAKE_CURRENT_BINARY_DIR}/python/paddle-plugins/
 COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_CURRENT_BINARY_DIR}/lib${PLUGIN_NAME}.so ${CMAKE_CURRENT_BINARY_DIR}/python/paddle-plugins/
-COMMENT "Creating plugin dirrectories------>>>"
+COMMENT "Creating plugin directories------>>>"
 )
 
 add_custom_command(OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/python/.timestamp

docs/dev_guides/custom_device_docs/custom_runtime_en.rst

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ Device APIs
 +---------------------------+----------------------------------------------------------+----------+
 | get_device | To get the current device | Y |
 +---------------------------+----------------------------------------------------------+----------+
-| synchronize_device | To synchronize the desginated device | Y |
+| synchronize_device | To synchronize the designated device | Y |
 +---------------------------+----------------------------------------------------------+----------+
 | get_device_count | To count available devices | Y |
 +---------------------------+----------------------------------------------------------+----------+

docs/dev_guides/style_guide_and_references/error_message_writing_specification_cn.md

Lines changed: 1 addition & 1 deletion
@@ -266,7 +266,7 @@ PADDLE_ENFORCE_EQ(
 
 ```c++
 PADDLE_ENFORCE(
-    tmp == *data_type || *data_type == dafault_data_type,
+    tmp == *data_type || *data_type == default_data_type,
     phi::errors::InvalidArgument(
         "The DataType of %s Op's duplicable Variable %s must be "
         "consistent. The current variable type is (%s), but the "
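The `PADDLE_ENFORCE` call in this hunk checks a condition and raises an `InvalidArgument` error with a formatted message when the check fails. A minimal Python stand-in of that pattern (the helper names and the `op`/`var` parameters are made up for illustration):

```python
def enforce(condition, message):
    # Minimal stand-in for PADDLE_ENFORCE: raise when the check fails.
    if not condition:
        raise ValueError(message)

def check_var_dtype(tmp, data_type, default_data_type, op="sum", var="X"):
    # Mirrors the condition from the hunk: the variable's type must match
    # either the inferred type or the default type.
    enforce(
        tmp == data_type or data_type == default_data_type,
        "The DataType of %s Op's duplicable Variable %s must be "
        "consistent. The current variable type is (%s)." % (op, var, tmp),
    )

check_var_dtype("float32", "float32", "float32")   # passes silently
```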
