Merged
Changes from 15 commits
8 changes: 0 additions & 8 deletions _typos.toml
@@ -58,11 +58,6 @@ Similarily = "Similarily"
Simle = "Simle"
Sovler = "Sovler"
Successed = "Successed"
Tansformer = "Tansformer"
Tenosr = "Tenosr"
Traning = "Traning"
Transfomed = "Transfomed"
Tthe = "Tthe"
Ture = "Ture"
accordding = "accordding"
accoustic = "accoustic"
@@ -214,12 +209,9 @@ sucessor = "sucessor"
sucessors = "sucessors"
szie = "szie"
tempory = "tempory"
tenosr = "tenosr"
thier = "thier"
traget = "traget"
traing = "traing"
trainning = "trainning"
traning = "traning"
transfered = "transfered"
trasformed = "trasformed"
treshold = "treshold"
2 changes: 1 addition & 1 deletion docs/api/paddle/distribution/Overview_cn.rst
@@ -35,7 +35,7 @@ The paddle.distribution directory contains the probability distribution classes for random variables supported by the PaddlePaddle framework
" :ref:`MultivariateNormal <cn_api_paddle_distribution_MultivariateNormal>` ", "MultivariateNormal 概率分布类"
" :ref:`Multinomial <cn_api_paddle_distribution_Multinomial>` ", "Multinomial 概率分布类"
" :ref:`Independent <cn_api_paddle_distribution_Independent>` ", "Independent 概率分布类"
" :ref:`TransfomedDistribution <cn_api_paddle_distribution_TransformedDistribution>` ", "TransformedDistribution 概率分布类"
" :ref:`TransformedDistribution <cn_api_paddle_distribution_TransformedDistribution>` ", "TransformedDistribution 概率分布类"
" :ref:`Laplace <cn_api_paddle_distribution_Laplace>`", "Laplace 概率分布类"
" :ref:`LKJCholesky <cn_api_paddle_distribution_LKJCholesky>`", "LKJCholesky 概率分布类"
" :ref:`LogNormal <cn_api_paddle_distribution_LogNormal>` ", "LogNormal 概率分布类"
8 changes: 4 additions & 4 deletions docs/api/paddle/nn/BeamSearchDecoder_cn.rst
@@ -44,7 +44,7 @@ tile_beam_merge_with_batch(x, beam_size)

**Parameters**

- **x** (Variable) - A tenosr with shape :math:`[batch\_size, ...]`. The data type should be float32, float64, int32, int64, or bool.
- **x** (Variable) - A tensor with shape :math:`[batch\_size, ...]`. The data type should be float32, float64, int32, int64, or bool.
- **beam_size** (int) - The beam width used in beam search.

**Returns**
@@ -59,7 +59,7 @@ _split_batch_beams(x)

**Parameters**

- **x** (Variable) - A tenosr with shape :math:`[batch\_size * beam\_size, ...]`. The data type should be float32, float64, int32, int64, or bool.
- **x** (Variable) - A tensor with shape :math:`[batch\_size * beam\_size, ...]`. The data type should be float32, float64, int32, int64, or bool.

**Returns**

@@ -72,7 +72,7 @@ _merge_batch_beams(x)

**Parameters**

- **x** (Variable) - A Tenosr with shape :math:`[batch\_size, beam_size,...]`. The data type should be float32, float64, int32, int64, or bool.
- **x** (Variable) - A Tensor with shape :math:`[batch\_size, beam_size,...]`. The data type should be float32, float64, int32, int64, or bool.

**Returns**

@@ -85,7 +85,7 @@ _expand_to_beam_size(x)

**Parameters**

- **x** (Variable) - A tenosr with shape :math:`[batch\_size, ...]`. The data type should be float32, float64, int32, int64, or bool.
- **x** (Variable) - A tensor with shape :math:`[batch\_size, ...]`. The data type should be float32, float64, int32, int64, or bool.

**Returns**

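The helper methods whose docs are fixed above are, at heart, plain shape transformations between `[batch_size, beam_size, ...]` and `[batch_size * beam_size, ...]`. A minimal NumPy sketch of the idea (the function names mirror the methods above but are illustrative only, not Paddle's implementation):

```python
import numpy as np

def merge_batch_beams(x):
    # [batch_size, beam_size, ...] -> [batch_size * beam_size, ...]
    return x.reshape((-1,) + x.shape[2:])

def split_batch_beams(x, beam_size):
    # [batch_size * beam_size, ...] -> [batch_size, beam_size, ...]
    return x.reshape((-1, beam_size) + x.shape[1:])

def expand_to_beam_size(x, beam_size):
    # [batch_size, ...] -> [batch_size, beam_size, ...] by tiling
    x = np.expand_dims(x, 1)
    return np.tile(x, (1, beam_size) + (1,) * (x.ndim - 2))

x = np.arange(12).reshape(2, 2, 3)       # batch_size=2, beam_size=2
merged = merge_batch_beams(x)            # shape (4, 3)
restored = split_batch_beams(merged, 2)  # shape (2, 2, 3), equal to x
```

Merging lets a decoder treat every beam as an independent batch item; splitting recovers the per-beam view for scoring.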
2 changes: 1 addition & 1 deletion docs/api/paddle/static/nn/sequence_pool_cn.rst
@@ -67,7 +67,7 @@ sequence_pool
:::::::::
- **input** (Tensor) - The input sequence of type Tensor; only Tensors with lod_level no greater than 2 are supported. The data type is float32.
- **pool_type** (str) - The pooling type; supports the average, sum, sqrt, max, last, and first pooling operations.
- **is_test** (bool, optional) - Takes effect only when pool_type is max. When is_test is False, a temporary maxIndex Tenosr is created during pooling to record the indices of the maximum feature values, which are used for backward gradient computation during training. Defaults to False.
- **is_test** (bool, optional) - Takes effect only when pool_type is max. When is_test is False, a temporary maxIndex Tensor is created during pooling to record the indices of the maximum feature values, which are used for backward gradient computation during training. Defaults to False.
- **pad_value** (float, optional) - Used to pad the pooling result when the input sequence is empty. Defaults to 0.0.

Returns
2 changes: 1 addition & 1 deletion docs/design/quantization/fixed_point_quantization.md
@@ -96,7 +96,7 @@ So the quantization transipler will change some inputs of the corresponding back

There are two strategies to calculate quantization scale, we call them dynamic and static strategy. The dynamic strategy calculates the quantization scale value each iteration. The static strategy keeps the quantization scale for different inputs.

For weights, we apply the dynamic strategy in the training, that is to say, the quantization scale will be recalculated during each iteration until the traning is finished.
For weights, we apply the dynamic strategy in the training, that is to say, the quantization scale will be recalculated during each iteration until the training is finished.

For activations, the quantization scales are estimated during training, then used in inference. There are several different ways to estimate them:

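The dynamic strategy described in this file can be sketched in a few lines: each training iteration recomputes a max-abs quantization scale from the current weights, while a static strategy would freeze a scale estimated beforehand. A hedged NumPy illustration (the function names and the fake-quantization formulation are ours, not Paddle's transpiler code):

```python
import numpy as np

def dynamic_scale(weights, num_bits=8):
    # Dynamic strategy: recomputed from the current weight values
    # every iteration until training finishes.
    return np.max(np.abs(weights)) / (2 ** (num_bits - 1) - 1)

def fake_quantize(x, scale, num_bits=8):
    # Simulated quantization: round onto the integer grid, clip, dequantize.
    qmax = 2 ** (num_bits - 1) - 1
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale

w = np.array([-0.8, 0.1, 0.5])
s = dynamic_scale(w)        # 0.8 / 127 for int8
w_q = fake_quantize(w, s)   # close to w; error bounded by s / 2
```

A static strategy for activations would instead keep a running estimate of `s` during training (e.g. an average of per-batch max-abs values) and reuse that fixed value at inference.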
2 changes: 1 addition & 1 deletion docs/dev_guides/git_guides/submit_pr_guide_en.md
@@ -1,6 +1,6 @@
# Guide of submitting PR to GitHub

## Tthe submit of Pull Request
## The submit of Pull Request

- Please note the number of commit:

12 changes: 6 additions & 6 deletions docs/eval/evaluation_of_docs_system.md

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/faq/save_cn.md
@@ -61,7 +61,7 @@ adam.set_state_dict(opti_state_dict)
+ Answer:
1. For ``state_dict``, the saving approach is exactly the same as in paddle2.0: we convert the ``Tensor`` to ``numpy.ndarray`` and save it.

2. For other objects containing ``Tensor`` (``Layer`` objects, single ``Tensor``s, and nested ``list``, ``tuple``, or ``dict`` containing ``Tensor``): in dynamic graph mode, a ``Tensor`` is converted to ``tuple(Tensor.name, Tensor.numpy())``; in static graph mode, a ``Tensor`` is converted directly to ``numpy.ndarray``. This is done because when a dynamically saved model is used in a static graph, the ``Tensor``'s name is sometimes needed, so the name is saved; it also lets ``load`` distinguish whether a ``numpy.ndarray`` was converted from a Tenosr or was originally a ``numpy.ndarray``. When saving a static-graph ``Tensor``, one usually obtains the ``Tensor`` through ``Variable.get_value`` and then saves it with ``paddle.save``; in this case the ``Variable`` has a name while the ``Tensor`` does not, so a static-graph ``Tensor`` is converted directly to ``numpy.ndarray`` for saving.
2. For other objects containing ``Tensor`` (``Layer`` objects, single ``Tensor``s, and nested ``list``, ``tuple``, or ``dict`` containing ``Tensor``): in dynamic graph mode, a ``Tensor`` is converted to ``tuple(Tensor.name, Tensor.numpy())``; in static graph mode, a ``Tensor`` is converted directly to ``numpy.ndarray``. This is done because when a dynamically saved model is used in a static graph, the ``Tensor``'s name is sometimes needed, so the name is saved; it also lets ``load`` distinguish whether a ``numpy.ndarray`` was converted from a Tensor or was originally a ``numpy.ndarray``. When saving a static-graph ``Tensor``, one usually obtains the ``Tensor`` through ``Variable.get_value`` and then saves it with ``paddle.save``; in this case the ``Variable`` has a name while the ``Tensor`` does not, so a static-graph ``Tensor`` is converted directly to ``numpy.ndarray`` for saving.
> Note that the dynamic-graph Tensor and the static-graph Tensor here are not the same: a dynamic-graph Tensor has attributes such as name and stop_gradient, while a static-graph Tensor is more lightweight, containing only basic information such as place and no name.

##### Question: Converting a Tensor to numpy.ndarray or tuple(Tensor.name, Tensor.numpy()) is not a uniquely decodable encoding, so why perform such a conversion?
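The conversion rule in the answer above can be sketched as follows; `FakeTensor`, `to_saveable`, and `from_saveable` are illustrative names of ours, not Paddle's actual implementation:

```python
import numpy as np

class FakeTensor:
    # Illustrative stand-in for a Tensor: has a name and a .numpy() method.
    def __init__(self, name, data):
        self.name = name
        self._data = np.asarray(data)

    def numpy(self):
        return self._data

def to_saveable(t, dynamic=True):
    if dynamic:
        # Dynamic graph: keep the name so it can be looked up later.
        return (t.name, t.numpy())
    # Static graph: the Tensor carries no name, so save a bare ndarray.
    return t.numpy()

def from_saveable(obj):
    # A (name, ndarray) tuple must have come from a dynamic-graph Tensor;
    # a bare ndarray was a static-graph Tensor or already an ndarray.
    if isinstance(obj, tuple):
        return obj
    return (None, obj)

saved = to_saveable(FakeTensor("fc_0.w_0", [1.0, 2.0]))
name, arr = from_saveable(saved)
```

The tuple form is exactly what makes the two cases distinguishable at ``load`` time: a bare ndarray is ambiguous on its own, a (name, ndarray) pair is not.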
2 changes: 1 addition & 1 deletion docs/guides/custom_op/new_python_op_cn.md
@@ -269,7 +269,7 @@ def tanh(x):
# The Tensor can be passed directly as an input argument to np.tanh
return np.tanh(x)

# Forward function 2: add two 2-D Tenosrs; multiple input Tensors are passed as list[Tensor] or tuple(Tensor)
# Forward function 2: add two 2-D Tensors; multiple input Tensors are passed as list[Tensor] or tuple(Tensor)
def element_wise_add(x, y):
# The Tensor must first be converted to a numpy array manually; otherwise numpy's shape operations are not supported
x = np.array(x)
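The diff above truncates `element_wise_add` after the first conversion line. A self-contained completion under our own assumptions (the shape check and the return statement beyond that point are not shown in the diff) could look like this:

```python
import numpy as np

def tanh(x):
    # The Tensor can be passed directly as input to np.tanh
    return np.tanh(np.array(x))

def element_wise_add(x, y):
    # The Tensors must first be converted to numpy arrays manually;
    # otherwise numpy's shape operations are not supported
    x = np.array(x)
    y = np.array(y)
    if x.shape != y.shape:
        raise AssertionError("the shape of inputs must be the same")
    return x + y

out = element_wise_add([[1, 2], [3, 4]], [[10, 20], [30, 40]])
```

Both functions follow the pattern the doc describes: a single Tensor can be consumed directly, while multi-input forward functions receive their Tensors as a list or tuple and convert them explicitly.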