diff --git a/_typos.toml b/_typos.toml index 4c4cc9b1c66..8b281b8f886 100644 --- a/_typos.toml +++ b/_typos.toml @@ -68,7 +68,6 @@ Traning = "Traning" Transfomed = "Transfomed" Tthe = "Tthe" Ture = "Ture" -Wether = "Wether" accordding = "accordding" accoustic = "accoustic" accpetance = "accpetance" @@ -235,10 +234,3 @@ transfered = "transfered" trasformed = "trasformed" treshold = "treshold" trian = "trian" -warpped = "warpped" -wether = "wether" -wiht = "wiht" -wirte = "wirte" -workign = "workign" -wraper = "wraper" -writter = "writter" diff --git a/docs/design/ir/overview.md b/docs/design/ir/overview.md index 83ef97c99ef..bbbd7a7c2fa 100644 --- a/docs/design/ir/overview.md +++ b/docs/design/ir/overview.md @@ -57,7 +57,7 @@ each other via inputs and outputs. TODO: Better definitions for the graph. `Graph` can also contain `Attribute`s. `Attribute`s -can be `any` thing. For example, it can be a list of "wraper" +can be `any` thing. For example, it can be a list of "wrapper" nodes. The `wrapper` nodes compose `Node`s and provide helper method for execution or transformation. `Attribute` can also contain other things that describe some properties of diff --git a/docs/design/mkldnn/caching/scripts/cache.dot b/docs/design/mkldnn/caching/scripts/cache.dot index ed235c9be0f..52f64904373 100644 --- a/docs/design/mkldnn/caching/scripts/cache.dot +++ b/docs/design/mkldnn/caching/scripts/cache.dot @@ -34,6 +34,6 @@ digraph Q { } -// For DefaultSessionID Key is having TID inside, for anything else eg. clearing mode , named session ID. no TID in key. ParallelExecutor is workign in default mode +// For DefaultSessionID the key contains the TID; for anything else (e.g. clearing mode, named session ID) there is no TID in the key. 
ParallelExecutor is working in default mode // // diff --git a/docs/design/modules/batch_norm_op.md b/docs/design/modules/batch_norm_op.md index a1bb7f709c9..7dc75cbd520 100644 --- a/docs/design/modules/batch_norm_op.md +++ b/docs/design/modules/batch_norm_op.md @@ -72,7 +72,7 @@ cudnn provides APIs to finish the whole series of computation, we can use them i ### Python -`batch_norm_op` is warpped as a layer in Python: +`batch_norm_op` is wrapped as a layer in Python: ```python def batch_norm_layer(net, diff --git a/docs/design/others/graph_survey.md b/docs/design/others/graph_survey.md index e789f5cb2f0..b4b824a2893 100644 --- a/docs/design/others/graph_survey.md +++ b/docs/design/others/graph_survey.md @@ -227,6 +227,6 @@ digraph G { Actually, Symbol/Tensor/Expression in Mxnet/TensorFlow/Dynet are the same level concepts. We use a unified name Expression here, this level concept has following features: -- Users wirte topoloy with symbolic API, and all return value is Expression, including input data and parameter. +- Users write topology with the symbolic API, and all return values are Expressions, including input data and parameters. - Expression corresponds with a global Graph, and Expression can also be composed. - Expression tracks all dependency and can be taken as a run target diff --git a/docs/design/quantization/fixed_point_quantization.md b/docs/design/quantization/fixed_point_quantization.md index eba2db4a1c6..947ec3a0f7a 100644 --- a/docs/design/quantization/fixed_point_quantization.md +++ b/docs/design/quantization/fixed_point_quantization.md @@ -24,7 +24,7 @@ $$ q = \left \lfloor \frac{x}{M} * (n - 1) \right \rceil $$ where, $x$ is the float value to be quantized, $M$ is maximum absolute value. $\left \lfloor \right \rceil$ denotes rounding to the nearest integer. For 8 bit quantization, $n=2^{8}=256$. $q$ is the quantized integer. 
-Wether the *min-max* quantization or *max-abs* quantization, they also can be represent: +Whether *min-max* quantization or *max-abs* quantization is used, both can be represented as: $q = scale * r + b$ diff --git a/docs/dev_guides/custom_device_docs/custom_kernel_docs/tensor_api_en.md b/docs/dev_guides/custom_device_docs/custom_kernel_docs/tensor_api_en.md index 6c9033ab314..ac24fbaebef 100644 --- a/docs/dev_guides/custom_device_docs/custom_kernel_docs/tensor_api_en.md +++ b/docs/dev_guides/custom_device_docs/custom_kernel_docs/tensor_api_en.md @@ -107,7 +107,7 @@ All element data of `DenseTensor` are stored in contiguous memory, and you can r // Return:bool categorical variable bool valid() const noexcept override; - // Check wether the tensor is initialized + // Check whether the tensor is initialized // Parameter:None // Return:bool categorical variable bool initialized() const override; diff --git a/docs/eval/evaluation_of_docs_system.md b/docs/eval/evaluation_of_docs_system.md index 4dfaded535f..bc3c709a95e 100644 --- a/docs/eval/evaluation_of_docs_system.md +++ b/docs/eval/evaluation_of_docs_system.md @@ -128,7 +128,7 @@ TensorFlow 的文档规划,比较直接地匹配了本文所介绍的分类标 - Customize what happens in Model.fit - Writing a training loop from scratch - Recurrent Neural Networks(RNN) with Keras - - Masking and padding wiht Keras + - Masking and padding with Keras - Writing your own callbacks - Transfer learning and fine-tuning - Training Keras models with TensorFlow Cloud @@ -191,7 +191,7 @@ TensorFlow 的文档规划,比较直接地匹配了本文所介绍的分类标 - The Fundamentals of Autograd - Building Models with PyTorch - PyTorch TensorBoard Support - - Traning wiht PyTorch + - Training with PyTorch - Model Understanding with Captum - Learning PyTorch - Deep Learning with PyTorch: A 60 Minute Blitz @@ -548,7 +548,7 @@ MindSpore 的有自己独立的文档分类标准和风格,所以硬套本文 | 基本数据(Tensor)和基本算子 | Tensors Variables Tensor slicing Ragged tensor Sparse tensor DTensor concepts | 6 | Tensors Transforms Introduction to PyTorch Tensors | 3 | 张量 
Tensor | 1 | Tensor 概念介绍 | 1 | | 数据加载与预处理 | Images CSV Numpy pandas.DataFrame TFRecord and tf.Example Additional formats with tf.io Text More text loading Classifying structured data with preprocessing layers Classfication on imbalanced data Time series forecasting Decision forest models | 13 | Datasets & Dataloaders | 1 | 数据处理 数据处理(进阶) 自动数据增强 轻量化数据处理 单节点数据缓存 优化数据处理 | 6 | 数据集的定义和加载 数据预处理 | 2 | | 如何组网 | Modules, layers, and models | 1 | Build the Neural Network Building Models with PyTorch What is torch.nn really? Learing PyTorch with Examples | 4 | 创建网络 网络构建 | 2 | 模型组网 飞桨高层 API 使用指南 层与模型 | 3 | -| 如何训练 | Training loops NumPy API Checkpoint SavedModel | 4 | Optimization Model Parameters Traning wiht PyTorch | 2 | 模型训练 训练与评估 | 2 | 训练与预测验证 自定义指标 | 2 | +| 如何训练 | Training loops NumPy API Checkpoint SavedModel | 4 | Optimization Model Parameters Training with PyTorch | 2 | 模型训练 训练与评估 | 2 | 训练与预测验证 自定义指标 | 2 | | 保存与加载模型 | Save and load Save and load(Distributed Training) | 2 | Save and Load the Model | 1 | 保存与加载 | 1 | 模型保存与载入 模型保存及加载(应用实践) | 2 | | 可视化、调优技巧 | Overfit and underfit Tune hyperprameters with Keras Tuner Better performance with tf.function Profile TensorFlow performance Graph optimizaition Optimize GPU Performance Mixed precision | 7 | PyTorch TensorBoard Support Model Understanding with Captum Visualizing Models, Data, and Training with TensorBoard Profiling your PyTorch Module PyTorch Profiler with TensorBoard Hyperparameter tuning with Ray Tune Optimizing Vision Transformer Model for Deployment Parametrization Tutorial Pruning Tutorial Grokking PyTorch Intel CPU performance from first principles | 11 | 查看中间文件 Dump 功能调试 自定义调试信息 调用自定义类 算子增量编译 算子调优工具 自动数据加速 固定随机性以复现脚本运行结果 | 8 | VisualDL 工具简介 VisualDL 使用指南 飞桨模型量化 | 3 | | 自动微分 | Automatic differentiation Advanced autodiff | 2 | Automatic Differentiation with torch.autograd The Fundamentals of Autograd | 2 | 自动微分 | 1 | 自动微分 | 1 | diff --git a/docs/guides/advanced/visualdl_usage_en.md 
b/docs/guides/advanced/visualdl_usage_en.md index 8079505882c..fa8cc29a062 100755 --- a/docs/guides/advanced/visualdl_usage_en.md +++ b/docs/guides/advanced/visualdl_usage_en.md @@ -341,10 +341,10 @@ Demo 6. text demo program [GitHub](https://github.com/PaddlePaddle/VisualDL/blob from visualdl import LogWriter # create a LogWriter instance -log_writter = LogWriter("./log", sync_cycle=10) +log_writer = LogWriter("./log", sync_cycle=10) # Create a TextWriter instance -with log_writter.mode("train") as logger: +with log_writer.mode("train") as logger: vdl_text_comp = logger.text(tag="test") # Use member function add_record() to add data @@ -443,11 +443,11 @@ def read_audio_data(audio_path): # Create a LogWriter instance -log_writter = LogWriter("./log", sync_cycle=10) +log_writer = LogWriter("./log", sync_cycle=10) # Create an AudioWriter instance ns = 2 -with log_writter.mode("train") as logger: +with log_writer.mode("train") as logger: input_audio = logger.audio(tag="test", num_samples=ns) # The variable sample_num is used to record the number of audio data that have been sampled
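
Side note for reviewers: the sentence fixed in the `fixed_point_quantization.md` hunk sits next to the *max-abs* formula `q = round(x / M * (n - 1))`. A minimal plain-Python sketch of that formula is below for reference; the function names are illustrative only and are not part of Paddle's API (Python's `round` uses ties-to-even, a slight variant of the doc's round-to-nearest).

```python
# Illustrative sketch of max-abs quantization from fixed_point_quantization.md:
#   q = round(x / M * (n - 1)),  n = 2^bits (256 for 8-bit), M = max |x|.
# Names (max_abs_quantize / max_abs_dequantize) are hypothetical, not Paddle API.

def max_abs_quantize(x, max_abs, bits=8):
    """Map a float x in [-max_abs, max_abs] to a signed integer."""
    n = 2 ** bits                        # number of levels, n = 256 for 8 bits
    return round(x / max_abs * (n - 1))  # round to the nearest integer

def max_abs_dequantize(q, max_abs, bits=8):
    """Recover an approximate float from the quantized integer."""
    n = 2 ** bits
    return q * max_abs / (n - 1)

print(max_abs_quantize(1.0, 1.0))  # -> 255
```

Round-tripping a value through quantize/dequantize stays within half a quantization step, i.e. `0.5 * max_abs / (n - 1)`.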