14 changes: 0 additions & 14 deletions _typos.toml
@@ -37,14 +37,6 @@ Moible = "Moible"
Operaton = "Operaton"
Optimizaing = "Optimizaing"
Optimzier = "Optimzier"
Paremeter = "Paremeter"
Pipline = "Pipline"
Porgram = "Porgram"
Prallel = "Prallel"
Propegation = "Propegation"
Propogation = "Propogation"
Protocal = "Protocal"
Pyhton = "Pyhton"
REGISTE = "REGISTE"
Reivew = "Reivew"
Reuqest = "Reuqest"
@@ -147,14 +139,8 @@ outpu = "outpu"
outpus = "outpus"
overrided = "overrided"
overwrited = "overwrited"
palce = "palce"
parammeters = "parammeters"
poniter = "poniter"
porcess = "porcess"
processer = "processer"
promot = "promot"
propegation = "propegation"
provicded = "provicded"
recevied = "recevied"
recomment = "recomment"
registerd = "registerd"
14 changes: 7 additions & 7 deletions ci_scripts/check_api_parameters.py
@@ -107,7 +107,7 @@ def _check_params_in_description(rstfilename, paramstr):
)
else:
info = f"The number of params in title does not match the params in description: {len(params_in_title)} != {len(items)}."
print(f"check failed (parammeters description): {rstfilename}")
print(f"check failed (parameters description): {rstfilename}")
else:
for i in range(len(items)):
pname_in_title = params_in_title[i].split("=")[0].strip()
@@ -120,13 +120,13 @@ def _check_params_in_description(rstfilename, paramstr):
flag = False
info = f"the following param in title does not match the param in description: {pname_in_title} != {pname_indesc}."
print(
f"check failed (parammeters description): {rstfilename}, {pname_in_title} != {pname_indesc}"
f"check failed (parameters description): {rstfilename}, {pname_in_title} != {pname_indesc}"
)
else:
flag = False
info = f"param name '{pname_in_title}' not matched in description line{i + 1}, check it please."
print(
f"check failed (parammeters description): {rstfilename}, param name not found in {i} paragraph."
f"check failed (parameters description): {rstfilename}, param name not found in {i} paragraph."
)
else:
if params_in_title:
@@ -148,8 +148,8 @@ def _check_params_in_description_with_fullargspec(rstfilename, funcname):
params_inspec = funcspec.args
if len(items) != len(params_inspec):
flag = False
info = f"check_with_fullargspec failed (parammeters description): {rstfilename}"
print(f"check failed (parammeters description): {rstfilename}")
info = f"check_with_fullargspec failed (parameters description): {rstfilename}"
print(f"check failed (parameters description): {rstfilename}")
else:
for i in range(len(items)):
pname_in_title = params_inspec[i]
@@ -162,13 +162,13 @@ def _check_params_in_description_with_fullargspec(rstfilename, funcname):
flag = False
info = f"the following param in title does not match the param in description: {pname_in_title} != {pname_indesc}."
print(
f"check failed (parammeters description): {rstfilename}, {pname_in_title} != {pname_indesc}"
f"check failed (parameters description): {rstfilename}, {pname_in_title} != {pname_indesc}"
)
else:
flag = False
info = f"param name '{pname_in_title}' not matched in description line{i + 1}, check it please."
print(
f"check failed (parammeters description): {rstfilename}, param name not found in {i} paragraph."
f"check failed (parameters description): {rstfilename}, param name not found in {i} paragraph."
)
else:
if funcspec.args:
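The script changed above cross-checks the parameter names in an API doc's signature line against the names that open each entry of its parameter description. The following is a simplified, self-contained sketch of that comparison logic with hypothetical helper names, not the actual functions in `ci_scripts/check_api_parameters.py`:

```python
# Hypothetical, simplified version of the title-vs-description check;
# the real logic lives in ci_scripts/check_api_parameters.py.
def params_match(params_in_title: list[str], description_items: list[str]) -> bool:
    """Return True when every parameter in the signature line has a
    matching, same-order entry in the parameter description."""
    if len(params_in_title) != len(description_items):
        print(
            f"count mismatch: {len(params_in_title)} in title, "
            f"{len(description_items)} in description"
        )
        return False
    for title_param, item in zip(params_in_title, description_items):
        # Drop any default value, e.g. "axis=-1" -> "axis".
        pname_in_title = title_param.split("=")[0].strip()
        # Assume each description item starts with "**name** (type) - ...".
        pname_in_desc = item.split("(")[0].strip().strip("*").strip()
        if pname_in_title != pname_in_desc:
            print(f"name mismatch: {pname_in_title} != {pname_in_desc}")
            return False
    return True


# Example: the second description entry uses the wrong name, so the check fails.
print(params_match(["x", "axis=-1"], ["**x** (Tensor) - input", "**dim** (int) - axis"]))
```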
2 changes: 1 addition & 1 deletion docs/api/paddle/jit/TranslatedLayer_cn.rst
@@ -24,7 +24,7 @@ program(method_name='forward'):

**参数**

- **method_name** (string) - 要获取的 Porgram 对应的方法名。默认值为"forward"。
- **method_name** (string) - 要获取的 Program 对应的方法名。默认值为"forward"。

**返回**
Program
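For context on the documented method: `program(method_name='forward')` returns the Program recorded for the named method of a loaded `TranslatedLayer`. A hedged usage sketch — the save path is hypothetical and assumes a model was previously exported with `paddle.jit.save`:

```python
import paddle

# Hypothetical path: assumes an inference model was saved earlier via
# paddle.jit.save(layer, "./saved_infer/example_model", ...).
layer = paddle.jit.load("./saved_infer/example_model")  # returns a TranslatedLayer

# Fetch the Program associated with the default "forward" method, as documented above.
forward_program = layer.program("forward")
print(forward_program)
```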
4 changes: 2 additions & 2 deletions docs/design/concurrent/go_op.md
@@ -27,7 +27,7 @@ The go operator can be accessed by using the fluid.Go() control flow. This
will create a new sub block, where the user can add additional operators
to be ran on the thread.

**Note:** Since back propegation is currently not support in the go_op, users
**Note:** Since back propagation is currently not support in the go_op, users
should ensure that operators in the go block does not require gradient
calculations.

@@ -225,7 +225,7 @@ when spawning these threads. For the first version of CSP, we only support
OS threads.


#### Backward Propegation:
#### Backward Propagation:

go_op currently does not support backwards propagation. Please use go_op with
non training operators.
2 changes: 1 addition & 1 deletion docs/design/modules/net_op_design.md
@@ -95,7 +95,7 @@ class PlainNet : public Net {
virtual Error InferShape(Scope *scope) override;

// Run all the operators with the `scope`, if no scope is provided, default
// scope will be used instead. If no OpContext is provicded, default context will be used.
// scope will be used instead. If no OpContext is provided, default context will be used.
virtual Error Run(Scope *scope = nullptr, OpContext *context=nullptr, OpIndex begin = -1,
OpIndex end = -1) const override;

2 changes: 1 addition & 1 deletion docs/design/others/graph_survey.md
@@ -30,7 +30,7 @@ def get_symbol(num_classes=10, **kwargs):

Variable here is actually a Symbol. Every basic Symbol will correspond to one Node, and every Node has its own AnyAttr. There is a op field in AnyAttr class, when a Symbol represents Variable(often input data), the op field is null.

Symbol contains a data member, std::vector<NodeEntry> outputs, and NodeEntry cantains a poniter to Node. We can follow the Node pointer to get all the Graph.
Symbol contains a data member, std::vector<NodeEntry> outputs, and NodeEntry cantains a pointer to Node. We can follow the Node pointer to get all the Graph.

And Symbol can be saved to a JSON file.

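To make the Symbol → NodeEntry → Node relationship described above concrete, here is a rough Python rendering of those structures; it is a conceptual sketch, not MXNet's actual C++ implementation:

```python
from dataclasses import dataclass, field

# Conceptual stand-ins for the C++ types discussed above (not real MXNet code).
@dataclass
class Node:
    op: str | None            # None when the Symbol represents a Variable (input data)
    inputs: list["NodeEntry"] = field(default_factory=list)

@dataclass
class NodeEntry:
    node: Node                # the "pointer to Node" mentioned above
    index: int = 0

@dataclass
class Symbol:
    outputs: list[NodeEntry] = field(default_factory=list)

def collect_nodes(symbol: Symbol) -> list[Node]:
    """Follow NodeEntry -> Node pointers to recover the whole graph."""
    seen, stack = [], [e.node for e in symbol.outputs]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.append(node)
            stack.extend(e.node for e in node.inputs)
    return seen

# Tiny example: data -> fully-connected -> softmax.
data = Node(op=None)
fc = Node(op="FullyConnected", inputs=[NodeEntry(data)])
net = Symbol(outputs=[NodeEntry(Node(op="Softmax", inputs=[NodeEntry(fc)]))])
print([n.op for n in collect_nodes(net)])   # ['Softmax', 'FullyConnected', None]
```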
2 changes: 1 addition & 1 deletion docs/design/phi/design_en.md
@@ -867,7 +867,7 @@ For the management of the new form of Kernel, the current design is as follows:
Described as follows:

- `KernelFactory` is a global singleton data structure for managing Kernel. Similar to `OpKernelMap` of fluid, it is a two-level map. The first-level mapping finds the Kernel set according to the name, and the second-level mapping finds the specific Kernel according to the KernelKey.
- `KernelKey` is similar to the original `OpKernelType`, but the `palce` and `library_type` fields are combined into one and called `Backend`, because the original `LibraryType` is a limited enumeration class, which is strongly related to place, the splitting increases the cost of understanding instead.
- `KernelKey` is similar to the original `OpKernelType`, but the `place` and `library_type` fields are combined into one and called `Backend`, because the original `LibraryType` is a limited enumeration class, which is strongly related to place, the splitting increases the cost of understanding instead.
- `Kernel` holds more information than the original `OpKernel`. In addition to the Function during execution, it also holds information about specific parameters, namely `KernelArgsDef`. For Tensor type input and output, it saves Tensor type information, Device, data Type, data layout. For Attribute type input and output, it saves type information.


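A minimal Python sketch of the two-level lookup described above — kernel name first, then a `KernelKey`-style (backend, layout, dtype) key. The names are illustrative, not the actual phi C++ API:

```python
from collections import namedtuple
from typing import Callable

# Illustrative stand-in for phi's KernelKey: place/library are folded into `backend`.
KernelKey = namedtuple("KernelKey", ["backend", "layout", "dtype"])

# First level: kernel name -> second level: KernelKey -> kernel function.
kernel_factory: dict[str, dict[KernelKey, Callable]] = {}

def register_kernel(name, backend, layout, dtype, fn):
    kernel_factory.setdefault(name, {})[KernelKey(backend, layout, dtype)] = fn

def find_kernel(name, backend, layout, dtype):
    return kernel_factory[name][KernelKey(backend, layout, dtype)]

# Toy registration and lookup.
register_kernel("add", "CPU", "NCHW", "float32", lambda x, y: x + y)
add_cpu = find_kernel("add", "CPU", "NCHW", "float32")
print(add_cpu(1.0, 2.0))  # 3.0
```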
@@ -107,7 +107,7 @@ SpmdInfo ElementwiseBinaryInferSpmd(const DistMetaTensor& x,
std::string x_axes, y_axes, out_axes;
GetBinaryNotations(x_shape, y_shape, &x_axes, &y_axes, &out_axes);

// Step2: Sharding Propogation
// Step2: Sharding Propagation
// Step2.1: 合并输入的 dims mapping,得到每一维度对应的 dims mapping 值。
// 调用 ShardingMergeForTensors 可以对输入 dims mapping 进行合并,返回的 map 即为
// 每一维度对应的 dims mapping 值。
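As a rough illustration of what the merge in Step 2.1 computes: for each named axis, take the sharded dims-mapping value from whichever input shards it, with -1 meaning replicated. A simplified sketch, not the actual ShardingMergeForTensors implementation:

```python
def merge_dims_mapping(pairs):
    """pairs: list of (axes_string, dims_mapping) per input tensor,
    e.g. [("ij", [0, -1]), ("ij", [-1, -1])].
    Returns a dict axis -> merged dims-mapping value (-1 means replicated).
    Simplified illustration only, not ShardingMergeForTensors itself."""
    merged = {}
    for axes, dims_mapping in pairs:
        for axis, dim in zip(axes, dims_mapping):
            if merged.get(axis, -1) == -1:
                merged[axis] = dim
            elif dim != -1 and dim != merged[axis]:
                raise ValueError(f"conflicting shardings on axis {axis!r}")
    return merged

# x is sharded on its first axis across mesh dim 0, y is fully replicated:
print(merge_dims_mapping([("ij", [0, -1]), ("ij", [-1, -1])]))  # {'i': 0, 'j': -1}
```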
2 changes: 1 addition & 1 deletion docs/dev_guides/custom_device_docs/event_api_en.md
@@ -12,7 +12,7 @@ C_Status (*create_event)(const C_Device device, C_Event* event)

It creates an event, which is used to synchronize tasks of different streams within the framework. When the device does not support asynchronous execution, empty implementation of the API is required.

### Paremeter
### Parameter

device - the device to be used

@@ -256,7 +256,7 @@ def filter_user(user: list[User], type: UserType) -> list[User]: ...

### 参数应尽可能使用抽象类型,返回值应尽可能使用具体类型

对于函数输入参数,如果允许,我们应该尽可能使用 [Protocal](https://docs.python.org/3/library/typing.html#typing.Protocol),如 [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)、[Mapping](https://docs.python.org/3/library/collections.abc.html#collections.abc.Mapping) 、[Iterable](https://docs.python.org/3/library/collections.abc.html#collections.abc.Iterable) 等抽象类型,以提高函数的通用性。而对于函数返回值,我们应该尽可能使用具体类型,以确保下游使用时能得到更好的提示效果。
对于函数输入参数,如果允许,我们应该尽可能使用 [Protocol](https://docs.python.org/3/library/typing.html#typing.Protocol),如 [Sequence](https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence)、[Mapping](https://docs.python.org/3/library/collections.abc.html#collections.abc.Mapping) 、[Iterable](https://docs.python.org/3/library/collections.abc.html#collections.abc.Iterable) 等抽象类型,以提高函数的通用性。而对于函数返回值,我们应该尽可能使用具体类型,以确保下游使用时能得到更好的提示效果。

比如相比于如下写法:

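The example that the hunk's closing line ("比如相比于如下写法:") introduces is collapsed in this view. As a separate editorial illustration of the guideline — abstract types for parameters, a concrete type for the return value — consider:

```python
from collections.abc import Iterable, Sequence

# Parameters use abstract types, so callers may pass a list, tuple, generator, etc.;
# the return type is the concrete `list` the function actually builds.
def normalize_names(names: Iterable[str], prefixes: Sequence[str]) -> list[str]:
    result: list[str] = []
    for name in names:
        for prefix in prefixes:
            if name.startswith(prefix):
                name = name[len(prefix):]
                break
        result.append(name.lower())
    return result

print(normalize_names(("paddle.nn.Linear", "paddle.add"), ["paddle."]))
# ['nn.linear', 'add']
```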
6 changes: 3 additions & 3 deletions docs/eval/evaluation_of_docs_system.md
@@ -261,13 +261,13 @@ TensorFlow 的文档规划,比较直接地匹配了本文所介绍的分类标
- Single-Machine Model Parallel Best Practices
- Getting Started with Distributed Data Parallel
- Writing Distributed Applications with PyTorch
- Getting Started with Fully Sharded Data Prallel
- Getting Started with Fully Sharded Data Parallel
- Customize Process Group Backends Using Cpp Extension
- Getting Started with Distributed RPC Framework
- Implementing a Parameter Server Using Distributed RPC Framework
- Distributed Pipeline Parallelsim using RPC
- Implementing Batch RPC Processing Using Asynchronous Executions
- Combining Distributed DataPrallel with Distributed RPC Framework
- Combining Distributed DataParallel with Distributed RPC Framework
- Training Transformer models using Pipeline Parallelism
- Training Transformer models using Distributed Data Parallel and Pipeline Parallelism
- Distributed Training with Uneven Inputs Using the Join Context Manager
@@ -562,7 +562,7 @@ MindSpore 的有自己独立的文档分类标准和风格,所以硬套本文
| 移动端相关 | 独立的栏目 https://www.tensorflow.org/lite | 10+ | Image Segmentation DeepLabV3 on iOS Image Segmentation DeepLabV3 on Android | 2 | | 0 | Paddle Lite 中独立存在 | 未统计 |
| 框架之间的迁移相关 | | | | 0 | 概述 准备工作 网络脚本分析 网络脚本开发 网络调试 精度调试 性能调试 推理执行 网络迁移调试实例 常见问题 | 10 | Paddle 1.8 与 Paddle 2.0 API 映射表 PyTorch-PaddlePaddle API 映射表 版本迁移工具 | 3 |
| 自定义算子 | Tensors and operations Custom layers Custom training: walkthrough Create an op Extension types | 5 | Double Backward with Custom Functions Fusing Convolution and Batch Norm using Custom Function Custom C++ and CUDA Extensions Extending TorchScript with Custom C++ Operators Extending TorchScript with Custom C++ Classes Registering a Dispatched Operator in C++ Extending dispatcher for a new backend in C++ | 7 | 算子分类 运算重载 自定义算子(CPU) 自定义算子(GPU) 自定义算子(Ascend) 自定义算子(基于 Custom 表达) | 6 | 自定义原生算子 原生算子开发注意事项 自定义外部算子 自定义 Python 算子 API 介绍 API 示例 本地开发指南 提交 PR 注意事项 FAQ | 9 |
| 分布式训练 | Distributed training with Kereas Distributed training with DTensors Using DTensors with Keras Custom training loops Multi-worker training with Keras Multi-worker training with CTL Parameter Server Training Distributed input Distributed training | 9 | PyTorch Distributed Overview Single-Machine Model Parallel Best PracticesGetting Started with Distributed Data Parallel Writing Distributed Applications with PyTorch Getting Started with Fully Sharded Data Prallel Customize Process Group Backends Using Cpp Extension Getting Started with Distributed RPC Framework Implementing a Parameter Server Using Distributed RPC Framework Distributed Pipeline Parallelsim using RPC Implementing Batch RPC Processing Using Asynchronous Executions Combining Distributed DataPrallel with Distributed RPC Framework Training Transformer models using Pipeline Parallelism Training Transformer models using Distributed Data Parallel and Pipeline Parallelism Distributed Training with Uneven Inputs Using the Join Context Manager | 16 | 分布式并行总览 分布式集合通信原语 分布式并行训练基础样例(Ascend) 分布式并行训练基础样例(GPU) 分布式推理 保存和加载模型(HyBrid Parallel 模式) 分布式并行训练 Transformer 模型 鹏程·盘古模型网络多维度混合并行解析 分布式故障恢复 | 9 | 单机多卡训练 分布式训练开始 使用 FleetAPI 进行分布式训练 | 3 |
| 分布式训练 | Distributed training with Kereas Distributed training with DTensors Using DTensors with Keras Custom training loops Multi-worker training with Keras Multi-worker training with CTL Parameter Server Training Distributed input Distributed training | 9 | PyTorch Distributed Overview Single-Machine Model Parallel Best PracticesGetting Started with Distributed Data Parallel Writing Distributed Applications with PyTorch Getting Started with Fully Sharded Data Parallel Customize Process Group Backends Using Cpp Extension Getting Started with Distributed RPC Framework Implementing a Parameter Server Using Distributed RPC Framework Distributed Pipeline Parallelsim using RPC Implementing Batch RPC Processing Using Asynchronous Executions Combining Distributed DataParallel with Distributed RPC Framework Training Transformer models using Pipeline Parallelism Training Transformer models using Distributed Data Parallel and Pipeline Parallelism Distributed Training with Uneven Inputs Using the Join Context Manager | 16 | 分布式并行总览 分布式集合通信原语 分布式并行训练基础样例(Ascend) 分布式并行训练基础样例(GPU) 分布式推理 保存和加载模型(HyBrid Parallel 模式) 分布式并行训练 Transformer 模型 鹏程·盘古模型网络多维度混合并行解析 分布式故障恢复 | 9 | 单机多卡训练 分布式训练开始 使用 FleetAPI 进行分布式训练 | 3 |
| 框架设计文档 | Random number generation | 1 | 分散在 API 文档、源码中,其实比较丰富。30+ | 30+ | 设计白皮书 全场景统一 函数式微分编程 动静态图结合 异构并行训练 分布式并行 中间表达 MindIR 高性能数据处理引擎 图算融合加速引擎 二阶优化 可视化调试调优 安全可信 术语 | 13 | | 0 |
| 其它 | Integrated gradients Uncertainty quantification with SNGP Probabilistic regression Keras 一级标题下的 13 篇文章 Thinking in TensorFlow 2 Data input pipelines 一级标题下的 3 篇 GPU TPU | 20 | Learn the Basics Quickstart Deep Learning with PyTorch: A 60 Minute Blitz Building a Convolution/Batch Norm fuser in FX Building a Simple CPU Performance Profiler with FX Channels Last Memory Format in PyTorch Forward-mode Automatic Differentiation Using the PyTorch C++ Frontend Dynamic Parallelism in TorchScript Autograd in C++ Frontend Static Quantization with Eager Model in PyTorch | 11 | 基本介绍 快速入门 进阶案例:线性拟合 混合精度 梯度累积算法 自适应梯度求和算法 降维训练算法 | 7 | 10 分钟快速上手飞桨 使用线性回归预测波士顿房价 模型导出 ONNX 协议 飞桨产品硬件支持表 昆仑芯 XPU 芯片运行飞桨 海光 DCU 芯片运行飞桨 昇腾 NPU 芯片运行飞桨 环境变量 FLAGS 下 9 篇 hello paddle:从普通程序走向机器学习程序 通过 AutoEncoder 实现时序数据异常检测 广播介绍 自动混合精度训练 梯度裁剪 升级指南 | 20+ |

2 changes: 1 addition & 1 deletion docs/guides/jit/debugging_en.md
@@ -58,7 +58,7 @@ The C++ error stack is hidden by default. You can set the C++ environment variab
## 2、Debugging Method
Before debugging, **please ensure that the dynamic graph code before conversion can run successfully**. The following introduces several debugging methods recommended in Dynamic-to-Static.
### 2.1 Pdb Debugging
pdb is a module in Python that defines an interactive Pyhton source code debugger. It supports setting breakpoints and single stepping between source lines, listing source code and variables, running Python code, etc.
pdb is a module in Python that defines an interactive Python source code debugger. It supports setting breakpoints and single stepping between source lines, listing source code and variables, running Python code, etc.
#### 2.1.1 Debugging steps

- step1: Insert `import pdb; pdb.set_trace()` before the code where you want to enable pdb debugging.
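A minimal sketch of step 1 above — the `@paddle.jit.to_static` decorator and the function body here are illustrative assumptions, not code taken from this guide:

```python
import paddle

@paddle.jit.to_static
def simple_func(x):
    y = x * 2
    return y + 1

x = paddle.to_tensor([1.0, 2.0])
# Insert the breakpoint right before the call you want to inspect (step 1 above);
# execution will drop into the interactive pdb prompt here.
import pdb; pdb.set_trace()
out = simple_func(x)
print(out)
```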
@@ -16,9 +16,9 @@ paddlenlp.generation.GenerationConfig(*kwargs)
| transformers | PaddlePaddle | 备注 |
| -------------------------------------| ------------------- | -------- |
| max_length | max_length | 最大生成长度。 |
| max_new_tokens | - | 最大生成长度(忽略 promot),Paddle 无此参数,一般对网络训练结果影响不大,可直接删除。|
| max_new_tokens | - | 最大生成长度(忽略 prompt),Paddle 无此参数,一般对网络训练结果影响不大,可直接删除。|
| min_length | min_length | 最小生成长度。 |
| min_new_tokens | - | 最小生成长度(忽略 promot),Paddle 无此参数,一般对网络训练结果影响不大,可直接删除。 |
| min_new_tokens | - | 最小生成长度(忽略 prompt),Paddle 无此参数,一般对网络训练结果影响不大,可直接删除。 |
| early_stopping | early_stopping | 早停是否开启。 |
| max_time | - | 最大允许计算运行时间,Paddle 无此参数,一般对网络训练结果影响不大,可直接删除。 |
| do_sample | do_sample | 是否进行采样。 |
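To make the mapping in this table actionable, a hedged sketch of converting a transformers-style generation config dict into keyword arguments for `paddlenlp.generation.GenerationConfig`; the dropped keys follow only the rows visible above, and anything beyond that is an assumption:

```python
# Keys from the table above that have no PaddlePaddle counterpart and are
# generally safe to drop (per the 备注 column).
_UNSUPPORTED_KEYS = {"max_new_tokens", "min_new_tokens", "max_time"}

def to_paddle_generation_kwargs(hf_config: dict) -> dict:
    """Translate a transformers-style generation config dict into kwargs
    for paddlenlp.generation.GenerationConfig (simplified sketch)."""
    return {k: v for k, v in hf_config.items() if k not in _UNSUPPORTED_KEYS}

hf_config = {
    "max_length": 128,
    "max_new_tokens": 64,   # dropped: no Paddle equivalent
    "min_length": 1,
    "early_stopping": True,
    "do_sample": False,
}
paddle_kwargs = to_paddle_generation_kwargs(hf_config)
print(paddle_kwargs)
# e.g. paddlenlp.generation.GenerationConfig(**paddle_kwargs)
```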