Commit 9f0fbae

fix O
1 parent 6f40997 commit 9f0fbae

13 files changed: +17 additions, -29 deletions

_typos.toml

Lines changed: 0 additions & 12 deletions
@@ -29,25 +29,13 @@ feeded = "feeded"
 
 # These words need to be fixed
 Learing = "Learing"
-Operaton = "Operaton"
-Optimizaing = "Optimizaing"
-Optimzier = "Optimzier"
 Setment = "Setment"
 Simle = "Simle"
 Sovler = "Sovler"
 libary = "libary"
 matrics = "matrics"
 metrices = "metrices"
 mutbale = "mutbale"
-occurence = "occurence"
-opeartor = "opeartor"
-opeartors = "opeartors"
-operaters = "operaters"
-optmization = "optmization"
-outpu = "outpu"
-outpus = "outpus"
-overrided = "overrided"
-overwrited = "overwrited"
 samle = "samle"
 schedual = "schedual"
 secenarios = "secenarios"

docs/api/gen_doc.py

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@
 # "short_name":"", # without module name
 # "module_name":"", # the module of the real api belongs to
 # "display":True/Flase, # consider the not_display_doc_list and the display_doc_list
-# "has_overwrited_doc":True/False #
+# "has_overwritten_doc":True/False #
 # "doc_filename" # document filename without suffix
 # "suggested_name":"", # the shortest name in all_names
 # }
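For orientation, the comment block above describes the per-API info record that gen_doc.py builds. A hypothetical instance, with every field value invented purely for illustration, might look like the sketch below.

```python
# Hypothetical api-info record matching the fields listed in the comment above;
# all values here are invented for illustration only.
api_info = {
    "short_name": "abs",              # without module name
    "module_name": "paddle.tensor",   # the module the real api belongs to
    "display": True,                  # honors not_display_doc_list / display_doc_list
    "has_overwritten_doc": False,
    "doc_filename": "abs",            # document filename without suffix
    "suggested_name": "paddle.abs",   # the shortest name in all_names
}
```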

docs/design/concurrent/parallel_do.md

Lines changed: 2 additions & 2 deletions
@@ -15,7 +15,7 @@ AddOutput(kOutputs, "Outputs needed to be merged from different devices").AsDupl
 AddOutput(kParallelScopes,
 "Scopes for all local variables in forward pass. One scope for each device");
 AddAttr<framework::BlockDesc *>(kParallelBlock,
-"List of operaters to be executed in parallel");
+"List of operators to be executed in parallel");
 ```
 
 A vanilla implementation of parallel_do can be shown as the following (`|` means single thread and
@@ -94,7 +94,7 @@ There are serial places we can make this parallel_do faster.
 
 ### forward: split input onto different devices
 
-If the input of the parallel_do is independent from any prior opeartors, we can avoid this step by
+If the input of the parallel_do is independent from any prior operators, we can avoid this step by
 prefetching the input onto different devices in a separate background thread. And the python code
 looks like this.
 ```python

docs/design/data_type/float16.md

Lines changed: 1 addition & 1 deletion
@@ -101,7 +101,7 @@ In Fluid, a neural network is represented as a protobuf message called [ProgramD
 ### Operator level requirement
 Each operator has many kernels for different data types, devices, and library types. The operator will select the appropriate kernel to run based on, among other things, the data type of the input variables. By default, every Fluid operator has a float data type kernel that takes float variables as input and generates float output.
 
-This means that if we provide float input to the first operator in a program, then each opeartor will use float kernel to compute float output and send it as input to the next operator to trigger the float kernel. Overall, the program will run in float mode and give us a final output of float data type.
+This means that if we provide float input to the first operator in a program, then each operator will use float kernel to compute float output and send it as input to the next operator to trigger the float kernel. Overall, the program will run in float mode and give us a final output of float data type.
 
 The same principle applies if we want a program to run in float16 mode. We provide input variable of float16 data type to the first operator, and then one by one, each operator in the program will run the float16 kernel (provided that each operator in this program has float16 kernels registered) until we finally obtain a float16 output variable.
 
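The corrected paragraph describes how the input dtype selects a kernel and then propagates through the whole program. A minimal, framework-free sketch of that dispatch rule (a toy registry, not Fluid's actual kernel-registration machinery) is:

```python
import numpy as np

# Toy registry: one kernel per (op_type, input dtype). It only illustrates
# why the first input's dtype decides the dtype of the whole program.
KERNELS = {
    ("relu", np.float32): lambda x: np.maximum(x, np.float32(0)),
    ("relu", np.float16): lambda x: np.maximum(x, np.float16(0)),
}

def run_op(op_type, x):
    # The output keeps the input dtype, so the next operator picks the
    # same-dtype kernel again.
    return KERNELS[(op_type, x.dtype.type)](x)

x16 = np.arange(-2, 3, dtype=np.float16)
print(run_op("relu", run_op("relu", x16)).dtype)  # float16 end to end
```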

docs/design/memory/memory_optimization.md

Lines changed: 1 addition & 1 deletion
@@ -79,7 +79,7 @@ In former control flow graph, the out-edges of node 5 are 5 --> 6 and 5 --> 2, a
 
 - Uses and Defs
 
-An assignmemt to a variable or temporary defines that variable. An occurence of a variable on the right-hand side of an assignment(or in other expressions) uses the variable. We can define the *def* of a variable as the set of graph nodes that define it; or the *def* of a graph node as the set of variables that it defines; and the similarly for the *use* of a variable or graph node. In former control flow graph, *def(3)* = {c}, *use(3)* = {b, c}.
+An assignmemt to a variable or temporary defines that variable. An occurrence of a variable on the right-hand side of an assignment(or in other expressions) uses the variable. We can define the *def* of a variable as the set of graph nodes that define it; or the *def* of a graph node as the set of variables that it defines; and the similarly for the *use* of a variable or graph node. In former control flow graph, *def(3)* = {c}, *use(3)* = {b, c}.
 
 - Liveness
 
docs/design/mkldnn/int8/QAT/README.md

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ Notes:
 ```... → input1 → conv2d → output1 → batch_norm → output2 → relu → output3 → ...```
 and we want to quantize the `conv2d` op, then after applying FP32 optimizations the sequence will become
 ```... → input1 → conv2d → output3 → ...```
-and the quantization scales have to be collected for the `input1` and `outpu3` tensors in the Quant model.
+and the quantization scales have to be collected for the `input1` and `output3` tensors in the Quant model.
 2. Quantization of the following operators is supported: `conv2d`, `depthwise_conv2d`, `mul`, `fc`, `matmul`, `pool2d`, `reshape2`, `transpose2`, `concat`.
 3. The longest sequence of consecutive quantizable operators in the model, the biggest performance boost can be achieved through quantization:
 ```... → conv2d → conv2d → pool2d → conv2d → conv2d → ...```
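The note above says quantization scales must be collected for the tensors that survive the FP32 fuses (`input1`, `output3`). One common way such a per-tensor scale is derived is the abs-max rule sketched below; this is a generic illustration, not the exact pass implemented in Paddle's Quant/oneDNN flow.

```python
import numpy as np

def abs_max_scale(tensor, num_bits=8):
    """Map the observed dynamic range of a tensor onto the signed int range."""
    qmax = 2 ** (num_bits - 1) - 1          # 127 for int8
    return float(np.abs(tensor).max()) / qmax

def quantize(tensor, scale, num_bits=8):
    qmax = 2 ** (num_bits - 1) - 1
    return np.clip(np.round(tensor / scale), -qmax - 1, qmax).astype(np.int8)

# e.g. a scale collected for an `input1`-like tensor while running FP32 data
input1 = np.random.randn(1, 3, 8, 8).astype(np.float32)
s = abs_max_scale(input1)
q_input1 = quantize(input1, s)
```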

docs/design/modules/optimizer.md

Lines changed: 1 addition & 1 deletion
@@ -72,7 +72,7 @@ class Optimizer:
 parameters_and_grads: a list of (variable, gradient) pair to update.
 
 Returns:
-optmization_op_list: a list of optimization operator that will update parameter using gradient.
+optimization_op_list: a list of optimization operator that will update parameter using gradient.
 """
 return None
 
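The docstring above promises an `optimization_op_list`: one update operator per (parameter, gradient) pair. A framework-agnostic sketch of that contract (class and method names and the returned tuples are illustrative, not Paddle's real operator objects) could be:

```python
class SGDOptimizer:
    """Illustrative optimizer skeleton; not Paddle's actual implementation."""

    def __init__(self, learning_rate=0.01):
        self.learning_rate = learning_rate

    def _append_optimize_op(self, param, grad):
        # A real framework would append an operator to the program here;
        # this sketch just returns a description of the update.
        return ("sgd", param, grad, self.learning_rate)

    def create_optimization_pass(self, parameters_and_grads):
        """Returns optimization_op_list: one update op per (param, grad) pair."""
        return [self._append_optimize_op(p, g)
                for p, g in parameters_and_grads
                if g is not None]
```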

docs/dev_guides/amp_precision/amp_test_dev_guide_cn.md

Lines changed: 2 additions & 2 deletions
@@ -73,7 +73,7 @@
 
 首先需要对输入数据进行计算,对于复杂一些的计算,可能会使得 setUp 函数过分冗长,可以写成额外的函数, **如代码 1-1 的第 13 行**
 
-outpus 部分需要传入由 numpy 计算出的参考结果。
+outputs 部分需要传入由 numpy 计算出的参考结果。
 
 **代码 1-1**
 
@@ -283,7 +283,7 @@ BF16 在传入输入和输入参考值时需要调用**convert_float_to_uint16**
 
 3. 设置 self.outputs。**如代码 2-1 的第 15 行所示。**
 
-outpus 部分需要传入 Uint16 格式的参考结果。可使用**convert_float_to_uint16**完成转换。
+outputs 部分需要传入 Uint16 格式的参考结果。可使用**convert_float_to_uint16**完成转换。
 
 **代码 2-1**
 
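Both corrected lines say the reference results placed in `self.outputs` must be converted to uint16 (the bfloat16 bit pattern) for BF16 tests. The snippet below is only a rough numpy imitation of what a helper like `convert_float_to_uint16` does (it truncates instead of rounding); the real utility lives in Paddle's op-test framework, and the `"Out"` key is just an example.

```python
import numpy as np

def convert_float_to_uint16(x):
    # bfloat16 keeps the top 16 bits of a float32; this sketch simply
    # truncates (the real helper also handles rounding).
    x = np.asarray(x, dtype=np.float32)
    return (x.view(np.uint32) >> 16).astype(np.uint16)

ref_out = np.ones((2, 3), np.float32) + np.ones((2, 3), np.float32)
outputs = {"Out": convert_float_to_uint16(ref_out)}  # reference in uint16/BF16
```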

docs/eval/evaluation_of_docs_system.md

Lines changed: 2 additions & 2 deletions
@@ -204,7 +204,7 @@ TensorFlow 的文档规划,比较直接地匹配了本文所介绍的分类标
 - Adversarial Example Generation
 - DCGAN Tutorial
 - Spatial Transformer Networks Tutorial
-- Optimizaing Vision Transformer Model for Deployment
+- Optimizing Vision Transformer Model for Deployment
 - Audio
 - Audio I/O
 - Audio Resampling
@@ -554,7 +554,7 @@ MindSpore 的有自己独立的文档分类标准和风格,所以硬套本文
 | 自动微分 | Automatic differentiation Advanced autodiff | 2 | Automatic Differentiation with torch.autograd The Fundamentals of Autograd | 2 | 自动微分 | 1 | 自动微分 | 1 |
 | 动态图与静态图 | Graphs and functions | 1 | (torchscript 其实是静态图,不过归类到部署中了) | 0 | 动态图与静态图 | 1 | 使用样例 转换原理 支持语法 案例解析 报错调试 动态图 使用动转静完成以图搜图 | 7 |
 | 部署相关 | https://www.tensorflow.org/tfx/tutorials 下的 21 篇文章 https://www.tensorflow.org/tfx/guide 下的 30+文章 | 50+ | Deploying PyTorch in Python via a REST API with Flask Introduction to TorchScript Loading a TorchScript Model in C++ (optional) Exporting a Model from PyTorch to ONNX and Running it using ONNX Runtime Real Time Inference on Raspberry Pi 4 | 6 | 推理与部署 模型推理总览 GPU/CPU 推理 Ascend 910 AI 处理器上推理 Ascend 310 AI 处理器上使用 MindIR 模型进行推理 Ascend 310 AI 处理器上使用 AIR 模型进行推理 | 7 | 服务器部署 移动端/嵌入式部署 模型压缩 https://www.paddlepaddle.org.cn/lite/v2.10/guide/introduction.html 下 50+ 篇文章 | 50+ |
-| CV 领域相关 | Basic image classification Convolutional Neural Network Image classification Transfer learning and fine-tuning Transfer learning with TF Hub Data Augmentaion Image segmentation Object detection with TF Hub Neural style transfer DeepDream DCGAN Pix2Pix CycleGAN Adversarial FGSM Intro to Autoencoders Variational Autoencoder | 16 | TorchVision Object Detection Finetuning Tutorial Transfer Learning for Computer Vision Tutorial Adversarial Example Generation DCGAN Tutorial Spatial Transformer Networks Tutorial Optimizaing Vision Transformer Model for Deployment Quantized Transfer Learning for Computer Vision Tutorial | 7 | ResNet50 网络进行图像分类 图像分类迁移学习 模型对抗攻击 生成式对抗网络 | 4 | 使用 LeNet 在 MNIST 数据集实现图像分类 使用卷积神经网络进行图像分类 基于图片相似度的图片搜索 基于 U-Net 卷积神经网络实现宠物图像分割 通过 OCR 实现验证码识别 通过 Sub-Pixel 实现图像超分辨率 人脸关键点检测 点云处理:实现 PointNet 点云分类 | 7 |
+| CV 领域相关 | Basic image classification Convolutional Neural Network Image classification Transfer learning and fine-tuning Transfer learning with TF Hub Data Augmentaion Image segmentation Object detection with TF Hub Neural style transfer DeepDream DCGAN Pix2Pix CycleGAN Adversarial FGSM Intro to Autoencoders Variational Autoencoder | 16 | TorchVision Object Detection Finetuning Tutorial Transfer Learning for Computer Vision Tutorial Adversarial Example Generation DCGAN Tutorial Spatial Transformer Networks Tutorial Optimizing Vision Transformer Model for Deployment Quantized Transfer Learning for Computer Vision Tutorial | 7 | ResNet50 网络进行图像分类 图像分类迁移学习 模型对抗攻击 生成式对抗网络 | 4 | 使用 LeNet 在 MNIST 数据集实现图像分类 使用卷积神经网络进行图像分类 基于图片相似度的图片搜索 基于 U-Net 卷积神经网络实现宠物图像分割 通过 OCR 实现验证码识别 通过 Sub-Pixel 实现图像超分辨率 人脸关键点检测 点云处理:实现 PointNet 点云分类 | 7 |
 | NLP 领域相关 | Basic text classification Text classification with TF Hub Word embeddings Word2Vec Text classification with an RNN classify Text with BERT Solve GLUE tasks using BERT on TPU Neural machine translation with attention Image captioning | 9 | Language Modeling with nn.Transformer and TorchText NLP From Scratch: Classifying Names with a Character-Level RNN NLP From Scratch: Generating Names with a Character-Level RNN NLP From Scratch: Translation with a Sequence to Sequence Network and Attention Text classification with the torchtext library Language Translation with nn.Transformer and torchtext Dynamic Quantization on an LSTM Word Language Model Dynamic Quantization on BERT | 8 | 使用 RNN 实现情感分类 LSTM+CRF 实现序列标注 | 2 | 用 N-Gram 模型在莎士比亚文集中训练 word embedding IMDB 数据集使用 BOW 网络的文本分类 使用预训练的词向量完成文本分类任务 使用注意力机制的 LSTM 的机器翻译 使用序列到序列模型完成数字加法 | 5 |
 | 语音领域相关 | | | Audio I/O Audio Resampling Audio Data Augmentation Audio Feature Extractions Audio Feature Augmentation Audio Datasets Speech Recognition with Wav2Vec2 Speech Command Classification with torchaudio Text-to-speech with torchaudio Forced Alignment with Wav2Vec2 | 10 | | 0 | | 0 |
 | 推荐领域相关 | Recommenders | 1 | Introduction to TorchRec | 1 | | 0 | 使用协同过滤实现电影推荐 | 1 |

docs/guides/beginner/model_save_load_cn.ipynb

Lines changed: 1 addition & 1 deletion
@@ -169,7 +169,7 @@
 "source": [
 "#### 2.2.1 保存动态图模型\n",
 "\n",
-"参数保存时,先获取目标对象(Layer 或者 Optimzier)的 state_dict,然后将 state_dict 保存至磁盘,同时也可以保存模型训练 checkpoint 的信息,保存的 checkpoint 的对象已在上文示例代码中进行了设置,保存代码如下(接上文示例代码):"
+"参数保存时,先获取目标对象(Layer 或者 Optimizer)的 state_dict,然后将 state_dict 保存至磁盘,同时也可以保存模型训练 checkpoint 的信息,保存的 checkpoint 的对象已在上文示例代码中进行了设置,保存代码如下(接上文示例代码):"
 ]
 },
 {
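The corrected notebook cell describes fetching the state_dict of a Layer or an Optimizer and writing it to disk. A condensed sketch of that flow, assuming the public paddle.save API and with example file names, is:

```python
import paddle

# Minimal sketch of the save step the cell describes; file names are examples.
layer = paddle.nn.Linear(10, 10)
opt = paddle.optimizer.Adam(learning_rate=0.001, parameters=layer.parameters())

paddle.save(layer.state_dict(), "linear_net.pdparams")  # model parameters
paddle.save(opt.state_dict(), "adam.pdopt")             # optimizer / checkpoint state
```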
