
Commit 195dd29

[CodeStyle][Typos][W-[1-8]] Fix typo(startswith W) (#7570)
* fix typos with W
* del
1 parent 6ce82ae commit 195dd29

File tree

9 files changed (+13 / -21 lines)

_typos.toml

Lines changed: 0 additions & 8 deletions
@@ -68,7 +68,6 @@ Traning = "Traning"
 Transfomed = "Transfomed"
 Tthe = "Tthe"
 Ture = "Ture"
-Wether = "Wether"
 accordding = "accordding"
 accoustic = "accoustic"
 accpetance = "accpetance"
@@ -235,10 +234,3 @@ transfered = "transfered"
 trasformed = "trasformed"
 treshold = "treshold"
 trian = "trian"
-warpped = "warpped"
-wether = "wether"
-wiht = "wiht"
-wirte = "wirte"
-workign = "workign"
-wraper = "wraper"
-writter = "writter"

docs/design/ir/overview.md

Lines changed: 1 addition & 1 deletion
@@ -57,7 +57,7 @@ each other via inputs and outputs.
 TODO: Better definitions for the graph.
 
 `Graph` can also contain `Attribute`s. `Attribute`s
-can be `any` thing. For example, it can be a list of "wraper"
+can be `any` thing. For example, it can be a list of "wrapper"
 nodes. The `wrapper` nodes compose `Node`s and provide
 helper method for execution or transformation. `Attribute`
 can also contain other things that describe some properties of

docs/design/mkldnn/caching/scripts/cache.dot

Lines changed: 1 addition & 1 deletion
@@ -34,6 +34,6 @@ digraph Q {
 
 }
 
-// For DefaultSessionID Key is having TID inside, for anything else eg. clearing mode , named session ID. no TID in key. ParallelExecutor is workign in default mode
+// For DefaultSessionID Key is having TID inside, for anything else eg. clearing mode , named session ID. no TID in key. ParallelExecutor is working in default mode
 //
 //

docs/design/modules/batch_norm_op.md

Lines changed: 1 addition & 1 deletion
@@ -72,7 +72,7 @@ cudnn provides APIs to finish the whole series of computation, we can use them i
 
 ### Python
 
-`batch_norm_op` is warpped as a layer in Python:
+`batch_norm_op` is wrapped as a layer in Python:
 
 ```python
 def batch_norm_layer(net,
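For readers skimming the commit, the corrected sentence refers to the common pattern of exposing the C++ `batch_norm_op` through a thin Python layer function; the design doc's `batch_norm_layer` continues past the single line visible in this hunk. A purely hypothetical sketch of that wrapping pattern (the `ToyNet` class and its `append_op` method below are illustrative stand-ins, not Paddle APIs):

```python
class ToyNet:
    """Minimal stand-in for a network builder (illustrative only, not a Paddle class)."""

    def __init__(self):
        self.ops = []

    def append_op(self, op_type, inputs, outputs, attrs):
        # Record the op configuration; a real framework would build a graph node here.
        self.ops.append({"type": op_type, "inputs": inputs,
                         "outputs": outputs, "attrs": attrs})


def batch_norm_layer(net, input_name, is_test=False):
    """Wrap a 'batch_norm' op as a layer: register the op and return its output name."""
    output_name = input_name + "_bn"
    net.append_op("batch_norm",
                  inputs={"X": input_name},
                  outputs={"Y": output_name},
                  attrs={"is_test": is_test})
    return output_name


net = ToyNet()
out = batch_norm_layer(net, "conv1_out")  # -> "conv1_out_bn"
```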

docs/design/others/graph_survey.md

Lines changed: 1 addition & 1 deletion
@@ -227,6 +227,6 @@ digraph G {
 
 Actually, Symbol/Tensor/Expression in Mxnet/TensorFlow/Dynet are the same level concepts. We use a unified name Expression here, this level concept has following features:
 
-- Users wirte topoloy with symbolic API, and all return value is Expression, including input data and parameter.
+- Users write topoloy with symbolic API, and all return value is Expression, including input data and parameter.
 - Expression corresponds with a global Graph, and Expression can also be composed.
 - Expression tracks all dependency and can be taken as a run target

docs/design/quantization/fixed_point_quantization.md

Lines changed: 1 addition & 1 deletion
@@ -24,7 +24,7 @@ $$ q = \left \lfloor \frac{x}{M} * (n - 1) \right \rceil $$
 where, $x$ is the float value to be quantized, $M$ is maximum absolute value. $\left \lfloor \right \rceil$ denotes rounding to the nearest integer. For 8 bit quantization, $n=2^{8}=256$. $q$ is the quantized integer.
 
 
-Wether the *min-max* quantization or *max-abs* quantization, they also can be represent:
+Whether the *min-max* quantization or *max-abs* quantization, they also can be represent:
 
 $q = scale * r + b$
 
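As background for the line being fixed: the surrounding passage defines *max-abs* quantization as $q = \left \lfloor \frac{x}{M} * (n - 1) \right \rceil$, with $M$ the maximum absolute value and $n = 2^{8} = 256$ for 8-bit quantization. A minimal NumPy sketch of that formula as written (illustrative only, not code from the Paddle docs; the helper name `max_abs_quantize` is assumed):

```python
import numpy as np

def max_abs_quantize(x, n=256):
    """Max-abs quantization per the quoted passage: q = round(x / M * (n - 1))."""
    M = np.max(np.abs(x))           # maximum absolute value of the tensor
    q = np.rint(x / M * (n - 1))    # round to the nearest integer
    return q.astype(np.int64), M

# Example: quantize a small float tensor, then dequantize to see the rounding error.
x = np.array([0.9, -0.35, 0.002], dtype=np.float32)
q, M = max_abs_quantize(x)
r = q * M / 255.0                   # approximate reconstruction of x
```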

docs/dev_guides/custom_device_docs/custom_kernel_docs/tensor_api_en.md

Lines changed: 1 addition & 1 deletion
@@ -107,7 +107,7 @@ All element data of `DenseTensor` are stored in contiguous memory, and you can r
 // Return:bool categorical variable
 bool valid() const noexcept override;
 
-// Check wether the tensor is initialized
+// Check whether the tensor is initialized
 // Parameter:None
 // Return:bool categorical variable
 bool initialized() const override;

docs/eval/evaluation_of_docs_system.md

Lines changed: 3 additions & 3 deletions
@@ -128,7 +128,7 @@ TensorFlow 的文档规划,比较直接地匹配了本文所介绍的分类标
 - Customize what happens in Model.fit
 - Writing a training loop from scratch
 - Recurrent Neural Networks(RNN) with Keras
-- Masking and padding wiht Keras
+- Masking and padding with Keras
 - Writing your own callbacks
 - Transfer learning and fine-tuning
 - Training Keras models with TensorFlow Cloud
@@ -191,7 +191,7 @@ TensorFlow 的文档规划,比较直接地匹配了本文所介绍的分类标
 - The Fundamentals of Autograd
 - Building Models with PyTorch
 - PyTorch TensorBoard Support
-- Traning wiht PyTorch
+- Traning with PyTorch
 - Model Understanding with Captum
 - Learning PyTorch
 - Deep Learning with PyTorch: A 60 Minute Blitz
@@ -548,7 +548,7 @@ MindSpore 的有自己独立的文档分类标准和风格,所以硬套本文
 | 基本数据(Tensor)和基本算子 | Tensors Variables Tensor slicing Ragged tensor Sparse tensor DTensor concepts | 6 | Tensors Transforms Introduction to PyTorch Tensors | 3 | 张量 Tensor | 1 | Tensor 概念介绍 | 1 |
 | 数据加载与预处理 | Images CSV Numpy pandas.DataFrame TFRecord and tf.Example Additional formats with tf.io Text More text loading Classifying structured data with preprocessing layers Classfication on imbalanced data Time series forecasting Decision forest models | 13 | Datasets & Dataloaders | 1 | 数据处理 数据处理(进阶) 自动数据增强 轻量化数据处理 单节点数据缓存 优化数据处理 | 6 | 数据集的定义和加载 数据预处理 | 2 |
 | 如何组网 | Modules, layers, and models | 1 | Build the Neural Network Building Models with PyTorch What is torch.nn really? Learing PyTorch with Examples | 4 | 创建网络 网络构建 | 2 | 模型组网 飞桨高层 API 使用指南 层与模型 | 3 |
-| 如何训练 | Training loops NumPy API Checkpoint SavedModel | 4 | Optimization Model Parameters Traning wiht PyTorch | 2 | 模型训练 训练与评估 | 2 | 训练与预测验证 自定义指标 | 2 |
+| 如何训练 | Training loops NumPy API Checkpoint SavedModel | 4 | Optimization Model Parameters Traning with PyTorch | 2 | 模型训练 训练与评估 | 2 | 训练与预测验证 自定义指标 | 2 |
 | 保存与加载模型 | Save and load Save and load(Distributed Training) | 2 | Save and Load the Model | 1 | 保存与加载 | 1 | 模型保存与载入 模型保存及加载(应用实践) | 2 |
 | 可视化、调优技巧 | Overfit and underfit Tune hyperprameters with Keras Tuner Better performance with tf.function Profile TensorFlow performance Graph optimizaition Optimize GPU Performance Mixed precision | 7 | PyTorch TensorBoard Support Model Understanding with Captum Visualizing Models, Data, and Training with TensorBoard Profiling your PyTorch Module PyTorch Profiler with TensorBoard Hyperparameter tuning with Ray Tune Optimizing Vision Transformer Model for Deployment Parametrization Tutorial Pruning Tutorial Grokking PyTorch Intel CPU performance from first principles | 11 | 查看中间文件 Dump 功能调试 自定义调试信息 调用自定义类 算子增量编译 算子调优工具 自动数据加速 固定随机性以复现脚本运行结果 | 8 | VisualDL 工具简介 VisualDL 使用指南 飞桨模型量化 | 3 |
 | 自动微分 | Automatic differentiation Advanced autodiff | 2 | Automatic Differentiation with torch.autograd The Fundamentals of Autograd | 2 | 自动微分 | 1 | 自动微分 | 1 |

docs/guides/advanced/visualdl_usage_en.md

Lines changed: 4 additions & 4 deletions
@@ -341,10 +341,10 @@ Demo 6. text demo program [GitHub](https://github.com/PaddlePaddle/VisualDL/blob
 from visualdl import LogWriter
 
 # create a LogWriter instance
-log_writter = LogWriter("./log", sync_cycle=10)
+log_writer = LogWriter("./log", sync_cycle=10)
 
 # Create a TextWriter instance
-with log_writter.mode("train") as logger:
+with log_writer.mode("train") as logger:
     vdl_text_comp = logger.text(tag="test")
 
 # Use member function add_record() to add data
@@ -443,11 +443,11 @@ def read_audio_data(audio_path):
 
 
 # Create a LogWriter instance
-log_writter = LogWriter("./log", sync_cycle=10)
+log_writer = LogWriter("./log", sync_cycle=10)
 
 # Create an AudioWriter instance
 ns = 2
-with log_writter.mode("train") as logger:
+with log_writer.mode("train") as logger:
     input_audio = logger.audio(tag="test", num_samples=ns)
 
 # The variable sample_num is used to record the number of audio data that have been sampled
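Both renamed variables come from the VisualDL usage guide; the corrected text-logging flow reads as follows when stitched together (a sketch based only on the calls visible in this diff — the `add_record` arguments are an assumption about the old VisualDL SDK and may differ):

```python
from visualdl import LogWriter

# Create a LogWriter instance; records under ./log are synced every 10 writes.
log_writer = LogWriter("./log", sync_cycle=10)

# Create a text component under the "train" mode.
with log_writer.mode("train") as logger:
    vdl_text_comp = logger.text(tag="test")

# Use add_record() to add data (signature assumed: a step index plus the text payload).
for step in range(5):
    vdl_text_comp.add_record(step, "text log at step {}".format(step))
```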
