
Commit d77e6c4

[CodeStyle][Typos][T-[1-5]] Fix typo('Tenosr','Tthe','Traning','Transfomed','Tansformer') #7574 (#7577)
* Fix-c-1-5
* fix-t1-t5
* fix_HOT_1
* del_typos
* test
* .
* debug
* world-debug
1 parent 4ae6d13 commit d77e6c4

9 files changed: +16 additions, −23 deletions

_typos.toml

Lines changed: 0 additions & 7 deletions

@@ -58,11 +58,6 @@ Similarily = "Similarily"
 Simle = "Simle"
 Sovler = "Sovler"
 Successed = "Successed"
-Tansformer = "Tansformer"
-Tenosr = "Tenosr"
-Traning = "Traning"
-Transfomed = "Transfomed"
-Tthe = "Tthe"
 Ture = "Ture"
 accordding = "accordding"
 accoustic = "accoustic"
@@ -214,12 +209,10 @@ sucessor = "sucessor"
 sucessors = "sucessors"
 szie = "szie"
 tempory = "tempory"
-tenosr = "tenosr"
 thier = "thier"
 traget = "traget"
 traing = "traing"
 trainning = "trainning"
-traning = "traning"
 transfered = "transfered"
 trasformed = "trasformed"
 treshold = "treshold"

docs/api/paddle/distribution/Overview_cn.rst

Lines changed: 1 addition & 1 deletion

@@ -35,7 +35,7 @@ paddle.distribution 目录下包含飞桨框架支持的随机变量的概率分
 " :ref:`MultivariateNormal <cn_api_paddle_distribution_MultivariateNormal>` ", "MultivariateNormal 概率分布类"
 " :ref:`Multinomial <cn_api_paddle_distribution_Multinomial>` ", "Multinomial 概率分布类"
 " :ref:`Independent <cn_api_paddle_distribution_Independent>` ", "Independent 概率分布类"
-" :ref:`TransfomedDistribution <cn_api_paddle_distribution_TransformedDistribution>` ", "TransformedDistribution 概率分布类"
+" :ref:`TransformedDistribution <cn_api_paddle_distribution_TransformedDistribution>` ", "TransformedDistribution 概率分布类"
 " :ref:`Laplace <cn_api_paddle_distribution_Laplace>`", "Laplace 概率分布类"
 " :ref:`LKJCholesky <cn_api_paddle_distribution_LKJCholesky>`", "LKJCholesky 概率分布类"
 " :ref:`LogNormal <cn_api_paddle_distribution_LogNormal>` ", "LogNormal 概率分布类"

docs/api/paddle/nn/BeamSearchDecoder_cn.rst

Lines changed: 4 additions & 4 deletions

@@ -44,7 +44,7 @@ tile_beam_merge_with_batch(x, beam_size)
 
 **参数**
 
-- **x** (Variable) - 形状为 :math:`[batch\_size, ...]` 的 tenosr。数据类型应为 float32,float64,int32,int64 或 bool。
+- **x** (Variable) - 形状为 :math:`[batch\_size, ...]` 的 tensor。数据类型应为 float32,float64,int32,int64 或 bool。
 - **beam_size** (int) - 在 beam search 中使用的 beam 宽度。
 
 **返回**
@@ -59,7 +59,7 @@ _split_batch_beams(x)
 
 **参数**
 
-- **x** (Variable) - 形状为 :math:`[batch\_size * beam\_size, ...]` 的 tenosr。数据类型应为 float32,float64,int32,int64 或 bool。
+- **x** (Variable) - 形状为 :math:`[batch\_size * beam\_size, ...]` 的 tensor。数据类型应为 float32,float64,int32,int64 或 bool。
 
 **返回**
 
@@ -72,7 +72,7 @@ _merge_batch_beams(x)
 
 **参数**
 
-- **x** (Variable) - 形状为 :math:`[batch\_size, beam_size,...]` 的 Tenosr。数据类型应为 float32,float64,int32,int64 或 bool。
+- **x** (Variable) - 形状为 :math:`[batch\_size, beam_size,...]` 的 Tensor。数据类型应为 float32,float64,int32,int64 或 bool。
 
 **返回**
 
@@ -85,7 +85,7 @@ _expand_to_beam_size(x)
 
 **参数**
 
-- **x** (Variable) - 形状为 :math:`[batch\_size, ...]` 的 tenosr。数据类型应为 float32,float64,int32,int64 或 bool。
+- **x** (Variable) - 形状为 :math:`[batch\_size, ...]` 的 tensor。数据类型应为 float32,float64,int32,int64 或 bool。
 
 **返回**

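The helpers documented in this file only move data between the `[batch_size, ...]` and `[batch_size * beam_size, ...]` layouts. A minimal numpy sketch of those shape transformations (not the Paddle implementation itself):

```python
import numpy as np

batch_size, beam_size = 4, 3
x = np.random.rand(batch_size, 8).astype("float32")        # [batch_size, ...]

# expand to beam_size copies per sample: [batch_size, beam_size, ...]
expanded = np.tile(x[:, np.newaxis, :], (1, beam_size, 1))
# merge batch and beam dims: [batch_size * beam_size, ...]
merged = expanded.reshape(batch_size * beam_size, -1)
# split them back apart: [batch_size, beam_size, ...]
split = merged.reshape(batch_size, beam_size, -1)

print(expanded.shape, merged.shape, split.shape)            # (4, 3, 8) (12, 8) (4, 3, 8)
```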
docs/api/paddle/static/nn/sequence_pool_cn.rst

Lines changed: 1 addition & 1 deletion

@@ -67,7 +67,7 @@ sequence_pool
 :::::::::
 - **input** (Tensor) - 类型为 Tensor 的输入序列,仅支持 lod_level 不超过 2 的 Tensor,数据类型为 float32。
 - **pool_type** (str) - 池化类型,支持 average,sum,sqrt,max,last 和 first 池化操作。
-- **is_test** (bool,可选) - 仅在 pool_type 取值为 max 时生效。当 is_test 为 False 时,则在池化操作过程中会创建 maxIndex 临时 Tenosr,以记录最大特征值对应的索引信息,用于训练阶段的反向梯度计算。默认为 False。
+- **is_test** (bool,可选) - 仅在 pool_type 取值为 max 时生效。当 is_test 为 False 时,则在池化操作过程中会创建 maxIndex 临时 Tensor,以记录最大特征值对应的索引信息,用于训练阶段的反向梯度计算。默认为 False。
 - **pad_value** (float,可选) - 用于填充输入序列为空时的池化结果,默认为 0.0。
 
 返回

docs/design/quantization/fixed_point_quantization.md

Lines changed: 1 addition & 1 deletion

@@ -96,7 +96,7 @@ So the quantization transipler will change some inputs of the corresponding back
 
 There are two strategies to calculate quantization scale, we call them dynamic and static strategy. The dynamic strategy calculates the quantization scale value each iteration. The static strategy keeps the quantization scale for different inputs.
 
-For weights, we apply the dynamic strategy in the training, that is to say, the quantization scale will be recalculated during each iteration until the traning is finished.
+For weights, we apply the dynamic strategy in the training, that is to say, the quantization scale will be recalculated during each iteration until the training is finished.
 
 For activations, the quantization scales are estimated during training, then used in inference. There are several different ways to estimate them:
 
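A minimal numpy sketch of the two strategies this hunk describes, not Paddle's implementation; symmetric per-tensor int8 scaling is assumed. The dynamic strategy recomputes the scale from the current tensor every step, while the static strategy keeps a running estimate that is frozen for inference.

```python
import numpy as np

def dynamic_scale(w):
    # Dynamic strategy: recompute the scale from the current weight tensor
    # at every training iteration.
    return np.abs(w).max() / 127.0

class StaticScale:
    # Static strategy: keep a moving-average estimate of the maximum absolute
    # activation during training, then reuse the frozen scale at inference.
    def __init__(self, momentum=0.9):
        self.momentum = momentum
        self.max_abs = 0.0

    def update(self, x):
        self.max_abs = self.momentum * self.max_abs + (1 - self.momentum) * np.abs(x).max()
        return self.max_abs / 127.0
```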
docs/dev_guides/git_guides/submit_pr_guide_en.md

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 # Guide of submitting PR to GitHub
 
-## Tthe submit of Pull Request
+## The submit of Pull Request
 
 - Please note the number of commit:
 
docs/eval/evaluation_of_docs_system.md

Lines changed: 6 additions & 6 deletions
Large diffs are not rendered by default.

docs/faq/save_cn.md

Lines changed: 1 addition & 1 deletion

@@ -61,7 +61,7 @@ adam.set_state_dict(opti_state_dict)
 + 答复:
 1. 对于``state_dict``保存方式与 paddle2.0 完全相同,我们将``Tensor``转化为``numpy.ndarray``保存。
 
-2. 对于其他形式的包含``Tensor``的对象(``Layer``对象,单个``Tensor``以及包含``Tensor``的嵌套``list````tuple````dict``),在动态图中,将``Tensor``转化为``tuple(Tensor.name, Tensor.numpy())``;在静态图中,将``Tensor``直接转化为``numpy.ndarray``。之所以这样做,是因为当在静态图中使用动态保存的模型时,有时需要``Tensor``的名字因此将名字保存下来,同时,在``load``时区分这个``numpy.ndarray``是由 Tenosr 转化而来还是本来就是``numpy.ndarray``;保存静态图的``Tensor``时,通常通过``Variable.get_value``得到``Tensor``再使用``paddle.save``保存``Tensor``,此时,``Variable``是有名字的,这个``Tensor``是没有名字的,因此将静态图``Tensor``直接转化为``numpy.ndarray``保存。
+2. 对于其他形式的包含``Tensor``的对象(``Layer``对象,单个``Tensor``以及包含``Tensor``的嵌套``list````tuple````dict``),在动态图中,将``Tensor``转化为``tuple(Tensor.name, Tensor.numpy())``;在静态图中,将``Tensor``直接转化为``numpy.ndarray``。之所以这样做,是因为当在静态图中使用动态保存的模型时,有时需要``Tensor``的名字因此将名字保存下来,同时,在``load``时区分这个``numpy.ndarray``是由 Tensor 转化而来还是本来就是``numpy.ndarray``;保存静态图的``Tensor``时,通常通过``Variable.get_value``得到``Tensor``再使用``paddle.save``保存``Tensor``,此时,``Variable``是有名字的,这个``Tensor``是没有名字的,因此将静态图``Tensor``直接转化为``numpy.ndarray``保存。
 > 此处动态图 Tensor 和静态图 Tensor 是不相同的,动态图 Tensor 有 name、stop_gradient 等属性;而静态图的 Tensor 是比动态图 Tensor 轻量级的,只包含 place 等基本信息,不包含名字等。
 
 ##### 问题:将 Tensor 转换为 numpy.ndarray 或者 tuple(Tensor.name, Tensor.numpy())不是惟一可译编码,为什么还要做这样的转换呢?

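A minimal dygraph sketch of the behaviour described in this answer, assuming paddle >= 2.0 (the file name is illustrative): a dict that nests a Tensor round-trips through paddle.save / paddle.load, with the Tensor data carried as numpy under the hood.

```python
import paddle

state = {"step": 10, "weight": paddle.randn([2, 3])}
paddle.save(state, "extra_state.pdt")               # illustrative file name
restored = paddle.load("extra_state.pdt")
print(type(restored["weight"]), restored["step"])   # nested Tensor is restored in dygraph mode
```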
docs/guides/custom_op/new_python_op_cn.md

Lines changed: 1 addition & 1 deletion

@@ -269,7 +269,7 @@ def tanh(x):
     # 可以直接将 Tensor 作为 np.tanh 的输入参数
     return np.tanh(x)
 
-# 前向函数 2:将两个 2-D Tenosr 相加,输入多个 Tensor 以 list[Tensor]或 tuple(Tensor)形式
+# 前向函数 2:将两个 2-D Tensor 相加,输入多个 Tensor 以 list[Tensor]或 tuple(Tensor)形式
 def element_wise_add(x, y):
     # 必须先手动将 Tensor 转换为 numpy 数组,否则无法支持 numpy 的 shape 操作
     x = np.array(x)

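The hunk above stops partway through element_wise_add because the diff context window ends at line 275. A self-contained completion sketch of the forward function, following the pattern the guide describes; the shape check is an assumption, not quoted from the file.

```python
import numpy as np

def element_wise_add(x, y):
    # Inputs arrive as Tensor-like objects; convert to numpy arrays first so
    # numpy shape operations work.
    x = np.array(x)
    y = np.array(y)
    if x.shape != y.shape:
        # assumed guard: both 2-D inputs must have the same shape
        raise AssertionError("the shape of inputs must be the same!")
    return x + y
```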