
Commit cf668ab

Ligoml, Liyulingyue, enkilee, mrcangye, SigureMo authored
[cherry-pick2.4]docs fix (#47669)
* #46165
* #45752
* fix some doc bug test=document_fix (#45488)
* fix some doc bug test=document_fix
* fix some docs issues, test=document_fix
* beta -> \beta in softplus
* threshold -> \varepsilon in softplus
* parameter name
* delta -> \delta in smooth_l1_loss
* fix some docs test=document_fix
* fix docs test=document_fix
* fix docs && add blank lines test=document_fix
* Update python/paddle/nn/functional/activation.py, test=document_fix
* Update python/paddle/nn/layer/activation.py, test=document_fix

Co-authored-by: SigureMo <[email protected]>

* [docs] add ipustrategy Hyperlink (#46422)
* [docs] add ipustrategy Hyperlink
* fix ipu_shard_guard docs; test=document_fix
* [docs] add set_ipu_shard note
* [docs] fix hyperlink
* update framework.py
* fix mlu_places docs; test=document_fix
* fix put_along_axis docs; test=document_fix
* fix flake8 W293 error, test=document_fix
* fix typo in typing, test=document_fix

Co-authored-by: Ligoml <[email protected]>
Co-authored-by: Nyakku Shigure <[email protected]>

* #46659
* Update README_cn.md (#46927): fix typos
* #46738
* fix paddle.get_default_dtype (#47040): Chinese and English return values are inconsistent
* fix bug

Co-authored-by: 张春乔 <[email protected]>
Co-authored-by: Infinity_lee <[email protected]>
Co-authored-by: mrcangye <[email protected]>
Co-authored-by: SigureMo <[email protected]>
Co-authored-by: gouzil <[email protected]>
Co-authored-by: Hamid Zare <[email protected]>
Co-authored-by: Sqhttwl <[email protected]>
Co-authored-by: OccupyMars2025 <[email protected]>
Co-authored-by: 超级码牛 <[email protected]>
Co-authored-by: jzhang533 <[email protected]>
1 parent 3a01478 commit cf668ab
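For context on the softplus notation fixes listed in the commit message (beta -> \beta, threshold -> \varepsilon), the documented formula is softplus(x) = (1/\beta) * log(1 + e^{\beta x}), which reverts to the linear form x once \beta * x exceeds the threshold. Below is a minimal sketch of that behaviour; it is illustrative only, assumes a local paddle install, and is not part of this commit's diff (the tensor values are arbitrary):

```python
import paddle
import paddle.nn.functional as F

x = paddle.to_tensor([-1.0, 0.0, 5.0])
beta, threshold = 2.0, 20.0

y = F.softplus(x, beta=beta, threshold=threshold)

# Documented formula: softplus(x) = (1/beta) * log(1 + exp(beta * x)),
# falling back to x itself where beta * x > threshold for numerical stability.
manual = paddle.where(
    beta * x > threshold, x, (1.0 / beta) * paddle.log(1.0 + paddle.exp(beta * x))
)
print(y, manual)
```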


52 files changed: 1193 additions, 1301 deletions

README.md

Lines changed: 2 additions & 2 deletions
@@ -89,8 +89,8 @@ We provide [English](https://www.paddlepaddle.org.cn/documentation/docs/en/guide
 
 ## Courses
 
-- [Server Deployments](https://aistudio.baidu.com/aistudio/course/introduce/19084): Courses intorducing high performance server deployments via local and remote services.
-- [Edge Deployments](https://aistudio.baidu.com/aistudio/course/introduce/22690): Courses intorducing edge deployments from mobile, IoT to web and applets.
+- [Server Deployments](https://aistudio.baidu.com/aistudio/course/introduce/19084): Courses introducing high performance server deployments via local and remote services.
+- [Edge Deployments](https://aistudio.baidu.com/aistudio/course/introduce/22690): Courses introducing edge deployments from mobile, IoT to web and applets.
 
 ## Copyright and License
 PaddlePaddle is provided under the [Apache-2.0 license](LICENSE).

README_cn.md

Lines changed: 1 addition & 1 deletion
@@ -88,7 +88,7 @@ PaddlePaddle用户可领取**免费Tesla V100在线算力资源**,训练模型
 ## 课程
 
 - [服务器部署](https://aistudio.baidu.com/aistudio/course/introduce/19084): 详细介绍高性能服务器端部署实操,包含本地端及服务化Serving部署等
-- [端侧部署](https://aistudio.baidu.com/aistudio/course/introduce/22690): 详细介绍端侧多场景部署实操,从移端端设备、IoT、网页到小程序部署
+- [端侧部署](https://aistudio.baidu.com/aistudio/course/introduce/22690): 详细介绍端侧多场景部署实操,从移动端设备、IoT、网页到小程序部署
 
 ## 版权和许可证
 PaddlePaddle由[Apache-2.0 license](LICENSE)提供

paddle/fluid/operators/activation_op.cc

Lines changed: 2 additions & 2 deletions
@@ -172,9 +172,9 @@ class ActivationOpGrad : public framework::OperatorWithKernel {
 };
 
 UNUSED constexpr char SigmoidDoc[] = R"DOC(
-Sigmoid Activation Operator
+Sigmoid Activation
 
-$$out = \\frac{1}{1 + e^{-x}}$$
+$$out = \frac{1}{1 + e^{-x}}$$
 
 )DOC";
 

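The corrected LaTeX renders as out = 1 / (1 + e^{-x}). A quick sanity check of that formula against paddle's built-in sigmoid, assuming paddle is installed (the input values are arbitrary):

```python
import paddle
import paddle.nn.functional as F

x = paddle.to_tensor([-2.0, 0.0, 3.0])

# out = 1 / (1 + e^{-x}), the formula from the docstring above
manual = 1.0 / (1.0 + paddle.exp(-x))
print(F.sigmoid(x))
print(manual)
```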
python/paddle/autograd/py_layer.py

Lines changed: 2 additions & 2 deletions
@@ -55,7 +55,7 @@ def save_for_backward(self, *tensors):
 """
 Saves given tensors that backward need. Use ``saved_tensor`` in the `backward` to get the saved tensors.
 
-.. note::
+Note:
 This API should be called at most once, and only inside `forward`.
 
 Args:
@@ -341,7 +341,7 @@ def save_for_backward(self, *tensors):
 """
 Saves given tensors that backward need. Use ``saved_tensor`` in the `backward` to get the saved tensors.
 
-.. note::
+Note:
 This API should be called at most once, and only inside `forward`.
 
 Args:

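For context, `save_for_backward` is the hook a custom `PyLayer` uses to stash tensors during `forward` and fetch them again in `backward` via `saved_tensor`. A minimal sketch of that pattern, assuming paddle is installed (the custom tanh is just an example, not part of this commit):

```python
import paddle
from paddle.autograd import PyLayer

class CusTanh(PyLayer):
    @staticmethod
    def forward(ctx, x):
        y = paddle.tanh(x)
        # Per the note above: call save_for_backward at most once, only inside forward.
        ctx.save_for_backward(y)
        return y

    @staticmethod
    def backward(ctx, dy):
        y, = ctx.saved_tensor()              # retrieve what forward saved
        return dy * (1 - paddle.square(y))   # d/dx tanh(x) = 1 - tanh(x)^2

x = paddle.ones([2, 3], dtype="float32")
x.stop_gradient = False
y = CusTanh.apply(x)
y.sum().backward()
print(x.grad)
```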
python/paddle/device/cuda/__init__.py

Lines changed: 2 additions & 2 deletions
@@ -203,7 +203,7 @@ def max_memory_allocated(device=None):
 '''
 Return the peak size of gpu memory that is allocated to tensor of the given device.
 
-.. note::
+Note:
 The size of GPU memory allocated to tensor is 256-byte aligned in Paddle, which may larger than the memory size that tensor actually need.
 For instance, a float32 tensor with shape [1] in GPU will take up 256 bytes memory, even though storing a float32 data requires only 4 bytes.
 
@@ -269,7 +269,7 @@ def memory_allocated(device=None):
 '''
 Return the current size of gpu memory that is allocated to tensor of the given device.
 
-.. note::
+Note:
 The size of GPU memory allocated to tensor is 256-byte aligned in Paddle, which may be larger than the memory size that tensor actually need.
 For instance, a float32 tensor with shape [1] in GPU will take up 256 bytes memory, even though storing a float32 data requires only 4 bytes.

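As a usage illustration of the two docstrings touched above, `memory_allocated` reports the current tensor allocation on a device and `max_memory_allocated` its peak; both are 256-byte aligned, so they can exceed the raw tensor size. A small sketch, assuming a CUDA build of paddle and at least one visible GPU:

```python
import paddle

if paddle.device.is_compiled_with_cuda():
    paddle.device.set_device("gpu:0")
    x = paddle.randn([1024, 1024], dtype="float32")  # roughly 4 MB of GPU memory
    current = paddle.device.cuda.memory_allocated("gpu:0")
    peak = paddle.device.cuda.max_memory_allocated("gpu:0")
    # Both values are 256-byte aligned, hence possibly larger than the tensor itself.
    print(f"current: {current} bytes, peak: {peak} bytes")
```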
python/paddle/distributed/collective.py

Lines changed: 1 addition & 1 deletion
@@ -1349,7 +1349,7 @@ def alltoall_single(
 """
 Scatter a single input tensor to all participators and gather the received tensors in out_tensor.
 
-.. note::
+Note:
 ``alltoall_single`` is only supported in eager mode.
 
 Args:

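`alltoall_single` splits one input tensor across all ranks and gathers the pieces it receives into a preallocated output tensor; as the note says, it only works in eager (dynamic graph) mode. A two-rank sketch, assuming it is launched with something like `python -m paddle.distributed.launch --gpus 0,1 demo.py`, where demo.py is whatever file holds this code:

```python
import paddle
import paddle.distributed as dist

dist.init_parallel_env()
rank = dist.get_rank()
nranks = dist.get_world_size()

# Each rank contributes nranks elements; element i is delivered to rank i.
data = paddle.arange(nranks, dtype="int64") + rank * nranks
out = paddle.empty([nranks], dtype="int64")
dist.alltoall_single(data, out)
print(rank, out.numpy())
```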
python/paddle/distributed/fleet/base/private_helper_function.py

Lines changed: 2 additions & 2 deletions
@@ -30,9 +30,9 @@ def wait_server_ready(endpoints):
 ["127.0.0.1:8080", "127.0.0.1:8081"]
 
 Examples:
-.. code-block:: python
+.. code-block:: python
 
-wait_server_ready(["127.0.0.1:8080", "127.0.0.1:8081"])
+wait_server_ready(["127.0.0.1:8080", "127.0.0.1:8081"])
 """
 assert not isinstance(endpoints, str)
 while True:

python/paddle/distributed/parallel.py

Lines changed: 1 addition & 1 deletion
@@ -105,7 +105,7 @@ def init_parallel_env():
 """
 Initialize parallel training environment in dynamic graph mode.
 
-.. note::
+Note:
 Now initialize both `NCCL` and `GLOO` contexts for communication.
 
 Args:

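`init_parallel_env` is the entry point the note refers to: it brings up the NCCL/GLOO contexts before any collective call in dynamic graph mode. A minimal data-parallel sketch, assuming two available devices and that the script is started via `paddle.distributed.spawn`:

```python
import paddle
import paddle.nn as nn
import paddle.distributed as dist

def train():
    dist.init_parallel_env()               # sets up NCCL/GLOO contexts
    layer = nn.Linear(10, 1)
    dp_layer = paddle.DataParallel(layer)  # gradients are synchronized across ranks

    x = paddle.randn([4, 10], dtype="float32")
    loss = dp_layer(x).mean()
    loss.backward()

if __name__ == "__main__":
    dist.spawn(train, nprocs=2)
```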
python/paddle/distributed/sharding/group_sharded.py

Lines changed: 1 addition & 1 deletion
@@ -209,7 +209,7 @@ def save_group_sharded_model(model, output, optimizer=None):
 """
 Group sharded encapsulated model and optimizer state saving module.
 
-.. note::
+Note:
 If using save_group_sharded_model saves the model. When loading again, you need to set the model or optimizer state before using group_sharded_parallel.
 
 Args:

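The note above concerns the save/load order when sharding: wrap the raw model and optimizer with `group_sharded_parallel`, pass the wrapped objects to `save_group_sharded_model`, and when restoring, set the state back on the plain model/optimizer before wrapping again. A hedged sketch of the save side, assuming a multi-GPU launch (the `./ckpt` path and the tiny model are placeholders):

```python
import paddle
import paddle.nn as nn
import paddle.distributed as dist
from paddle.distributed.sharding import group_sharded_parallel, save_group_sharded_model

dist.init_parallel_env()

model = nn.Linear(10, 1)
optimizer = paddle.optimizer.AdamW(learning_rate=1e-3, parameters=model.parameters())

# Shard optimizer state and gradients across the data-parallel group ("os_g" level).
model, optimizer, scaler = group_sharded_parallel(model, optimizer, level="os_g")

# ... training loop ...

# Save the sharded model/optimizer; on reload, restore state onto the plain
# objects first, then call group_sharded_parallel again.
save_group_sharded_model(model, output="./ckpt", optimizer=optimizer)
```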
python/paddle/distribution/distribution.py

Lines changed: 1 addition & 1 deletion
@@ -140,7 +140,7 @@ def log_prob(self, value):
 def probs(self, value):
 """Probability density/mass function.
 
-.. note::
+Note:
 
 This method will be deprecated in the future, please use `prob`
 instead.

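The hunk above documents `probs`, which the note flags for future deprecation in favour of `prob`. A small sketch with one concrete distribution, assuming paddle is installed (`Normal` is just an example; `probs` is the name available in this release and equals `exp(log_prob)`):

```python
import paddle
from paddle.distribution import Normal

normal = Normal(loc=0.0, scale=1.0)
value = paddle.to_tensor([0.0, 1.0])

density = normal.probs(value)                 # current name, slated for deprecation
via_log = paddle.exp(normal.log_prob(value))  # equivalent density through log_prob
print(density, via_log)
```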