Commit ead188c
test-commit
1 parent bea67d8 commit ead188c

7 files changed: 6 additions & 12 deletions

_typos.toml

Lines changed: 0 additions & 6 deletions
@@ -38,11 +38,6 @@ Similarily = "Similarily"
 Simle = "Simle"
 Sovler = "Sovler"
 Successed = "Successed"
-desgin = "desgin"
-desginated = "desginated"
-desigin = "desigin"
-determinated = "determinated"
-diffcult = "diffcult"
 dimention = "dimention"
 dimentions = "dimentions"
 dirrectories = "dirrectories"
@@ -95,7 +90,6 @@ outpu = "outpu"
 outpus = "outpus"
 overrided = "overrided"
 overwrited = "overwrited"
-porcess = "porcess"
 processer = "processer"
 sacle = "sacle"
 samle = "samle"
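
For context on why removing these lines re-enables the corrections: assuming this `_typos.toml` follows the crate-ci/typos convention, an entry that maps a word to itself tells the checker to accept that spelling, so deleting the entry lets `typos` flag the word again once the docs are fixed. A minimal sketch of that convention (section name is an assumption):

```toml
# crate-ci/typos config: mapping a detected word to a replacement defines
# a correction; mapping it to itself accepts the spelling as-is.
[default.extend-words]
teh = "the"         # correct "teh" to "the"
desgin = "desgin"   # accept "desgin"; removing this line re-enables the check
```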

docs/design/data_type/float16.md

Lines changed: 1 addition & 1 deletion
@@ -130,7 +130,7 @@ fp16_tensor.set(tensor.astype(numpy.float16).view(numpy.uint16), GPUPlace)
 ```
 
 ### Consistent API requirement
-The basic inference in float16 mode requires users to feed input and obtain output both of float16 data type. However, in this way, the inference APIs are not consistent between float16 mode and float mode, and users may find it confusing and diffcult to use float16 inference since they need to do extra steps to provide float16 input data and convert float16 output data back to float. To have consistent API for different inference modes, we need to transpile the program desc in some way so that we can run float16 inference by feeding and fetching variables of float data type.
+The basic inference in float16 mode requires users to feed input and obtain output both of float16 data type. However, in this way, the inference APIs are not consistent between float16 mode and float mode, and users may find it confusing and difficult to use float16 inference since they need to do extra steps to provide float16 input data and convert float16 output data back to float. To have consistent API for different inference modes, we need to transpile the program desc in some way so that we can run float16 inference by feeding and fetching variables of float data type.
 
 This problem can be solved by introducing a type-casting operator which takes an input variable of certain data type, cast it to another specified data type, and put the casted data into the output variable. Insert cast operator where needed can make a program internally run in float16 mode.
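
The paragraph above motivates the type-casting operator. As a rough sketch of the idea at the Python level, using today's `paddle.cast` rather than the transpiler the doc describes, feeding float data around a float16 region might look like:

```python
import paddle

x = paddle.rand([2, 3], dtype="float32")  # user feeds float data
x_fp16 = paddle.cast(x, "float16")        # inserted cast op: float32 -> float16
# ... float16 operators would run here (fp16 kernels generally need a GPU) ...
y = paddle.cast(x_fp16, "float32")        # inserted cast op: float16 -> float32
print(y.dtype)                            # paddle.float32, so the API stays consistent
```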

docs/design/others/graph.md

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ For each parameter, like W and b created by `layer.fc`, marked as double circles
 
 ## Block and Graph
 
-The word block and graph are interchangable in the desgin of PaddlePaddle. A [Block](https://github.com/PaddlePaddle/Paddle/pull/3708) is a metaphore of the code and local variables in a pair of curly braces in programming languages, where operators are like statements or instructions. A graph of operators and variables is a representation of the block.
+The word block and graph are interchangable in the design of PaddlePaddle. A [Block](https://github.com/PaddlePaddle/Paddle/pull/3708) is a metaphore of the code and local variables in a pair of curly braces in programming languages, where operators are like statements or instructions. A graph of operators and variables is a representation of the block.
 
 A Block keeps operators in an array `BlockDesc::ops`
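
The context line above notes that a Block stores its operators in `BlockDesc::ops`. A purely illustrative Python stand-in for that structure (hypothetical classes, not Paddle's actual C++ `BlockDesc`):

```python
class OpDesc:
    """Hypothetical stand-in for an operator: a typed node with named I/O."""
    def __init__(self, op_type, inputs, outputs):
        self.op_type = op_type   # e.g. "mul", "add"
        self.inputs = inputs     # names of variables read
        self.outputs = outputs   # names of variables written

class BlockDesc:
    """Hypothetical stand-in: a block is an ordered array of operators."""
    def __init__(self):
        self.ops = []            # mirrors BlockDesc::ops
        self.vars = {}           # local variables of the block

block = BlockDesc()
block.ops.append(OpDesc("mul", ["X", "W"], ["XW"]))
block.ops.append(OpDesc("add", ["XW", "b"], ["Out"]))
# Viewing ops and vars as nodes, with edges given by the I/O names,
# yields the graph representation of the block.
```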

docs/design/quantization/fixed_point_quantization.md

Lines changed: 1 addition & 1 deletion
@@ -79,7 +79,7 @@ From these formulas, dequantization also can be moved before GEMM, do dequantiza
 Figure 2. Equivalent forward in training with simulated quantization.
 </p>
 
-We use this equivalent workflow in the training. In our desigin, there is a quantization transpiler to insert the quantization operator and the de-quantization operator in the Fluid `ProgramDesc`. Since the outputs of quantization and de-quantization operator are still in floating point, they are called faked quantization and de-quantization operator. And the training framework is called simulated quantization.
+We use this equivalent workflow in the training. In our design, there is a quantization transpiler to insert the quantization operator and the de-quantization operator in the Fluid `ProgramDesc`. Since the outputs of quantization and de-quantization operator are still in floating point, they are called faked quantization and de-quantization operator. And the training framework is called simulated quantization.
 
 #### Backward pass
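
A minimal numpy sketch of the faked quantize/de-quantize pair described above: values are rounded onto an integer grid but kept in floating point, so downstream operators run unchanged. The max-abs scaling here is one common choice and an assumption, not necessarily the transpiler's exact formula:

```python
import numpy as np

def fake_quant_dequant(x, num_bits=8):
    """Quantize to a signed num_bits grid, then de-quantize back to float."""
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 127 for 8 bits
    scale = max(np.abs(x).max(), 1e-8) / qmax      # max-abs scale; guard all-zero input
    q = np.clip(np.round(x / scale), -qmax, qmax)  # "quantization" op (still float dtype)
    return q * scale                               # "de-quantization" op

x = np.random.randn(4, 4).astype(np.float32)
print(np.abs(x - fake_quant_dequant(x)).max())     # small rounding error
```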

docs/dev_guides/custom_device_docs/custom_runtime_en.rst

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ Device APIs
 +---------------------------+----------------------------------------------------------+----------+
 | get_device                | To get the current device                                | Y        |
 +---------------------------+----------------------------------------------------------+----------+
-| synchronize_device        | To synchronize the desginated device                     | Y        |
+| synchronize_device        | To synchronize the designated device                     | Y        |
 +---------------------------+----------------------------------------------------------+----------+
 | get_device_count          | To count available devices                               | Y        |
 +---------------------------+----------------------------------------------------------+----------+
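
For orientation, some of the table's entries have rough Python-level counterparts that can be exercised directly; the mapping below to the C runtime hooks is an assumption, and the CUDA calls apply only when a GPU backend is present:

```python
import paddle

print(paddle.device.get_device())          # ~ get_device ("cpu", "gpu:0", ...)
n = paddle.device.cuda.device_count()      # ~ get_device_count (CUDA backend)
if paddle.is_compiled_with_cuda() and n > 0:
    paddle.device.cuda.synchronize()       # ~ synchronize_device
```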
File renamed without changes.

docs/guides/paddle_v3_features/sot_cn.md

Lines changed: 2 additions & 2 deletions
@@ -130,7 +130,7 @@ import paddle
 import numpy as np
 import random
 
-# set seed for determinated output
+# set seed for determined output
 paddle.seed(2025)
 np.random.seed(2025)
 random.seed(2025)
@@ -172,7 +172,7 @@ import paddle
 import numpy as np
 import random
 
-# set seed for determinated output
+# set seed for determined output
 paddle.seed(2025)
 np.random.seed(2025)
 random.seed(2025)
