Commit ada82a3

Author: Yancey
Add is_local parameter description (#8893)
* add is_local parameter description
* update
* update by comment
1 parent: 2cc2fb4

2 files changed: +15 -0 lines changed


doc/v2/howto/cluster/cmd_argument_cn.md

Lines changed: 7 additions & 0 deletions
@@ -71,6 +71,13 @@ paddle.init(
  - trainer_id: **required, default 0**, unique ID of each trainer, an integer starting from 0
  - pservers: **required, default 127.0.0.1**, list of IPs of the pservers started for this training job, multiple IPs separated by ","

+ ```python
+ trainer = paddle.trainer.SGD(..., is_local=False)
+ ```
+
+ Parameter Description
+
+ - is_local: **required, default True**, whether to update parameters by PServer

  ## Prepare Training Dataset
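For context, the snippet below is a minimal sketch (not part of the commit) of how the arguments documented above might fit together in a v2-API cluster trainer. The `paddle.init` keyword arguments follow the surrounding `cmd_argument` docs; the toy network, optimizer, and port value are illustrative placeholders.

```python
# Minimal sketch, not from the commit: wiring trainer_id/pservers (paddle.init)
# together with is_local=False (paddle.trainer.SGD) for cluster training.
# The toy network and the port value are placeholders for illustration only.
import paddle.v2 as paddle

paddle.init(
    use_gpu=False,
    trainer_count=1,
    port=7164,              # pserver port (assumed default from these docs)
    num_gradient_servers=1, # number of trainers in the job
    trainer_id=0,           # unique ID of this trainer, integer starting from 0
    pservers="127.0.0.1")   # comma-separated pserver IP list

# A tiny linear-regression network so the SGD call has something to optimize.
x = paddle.layer.data(name="x", type=paddle.data_type.dense_vector(13))
y = paddle.layer.data(name="y", type=paddle.data_type.dense_vector(1))
y_predict = paddle.layer.fc(input=x, size=1, act=paddle.activation.Linear())
cost = paddle.layer.square_error_cost(input=y_predict, label=y)

parameters = paddle.parameters.create(cost)
optimizer = paddle.optimizer.Momentum(momentum=0)

# is_local=False makes the trainer exchange parameter updates with the
# pservers listed above instead of updating parameters locally.
trainer = paddle.trainer.SGD(
    cost=cost,
    parameters=parameters,
    update_equation=optimizer,
    is_local=False)
```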

doc/v2/howto/cluster/cmd_argument_en.md

Lines changed: 8 additions & 0 deletions
@@ -73,6 +73,14 @@ Parameter Description
  - trainer_id: **required, default 0**, ID for every trainer, starting from 0.
  - pservers: **required, default 127.0.0.1**, list of IPs of parameter servers, separated by ",".

+ ```python
+ trainer = paddle.trainer.SGD(..., is_local=False)
+ ```
+
+ Parameter Description
+
+ - is_local: **required, default True**, whether to update parameters by PServer.
+
  ## Prepare Training Dataset

  Here is some example code, [prepare.py](https://github.com/PaddlePaddle/Paddle/tree/develop/doc/howto/usage/cluster/src/word2vec/prepare.py): it downloads the public `imikolov` dataset and splits it into multiple files according to the job parallelism (trainer count). Modify `SPLIT_COUNT` at the beginning of `prepare.py` to change the number of output files.
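The real `prepare.py` is at the link above; purely as an illustration of that splitting step, here is a hypothetical sketch (the file names and round-robin scheme are assumptions, not the actual script) that shards a downloaded text corpus into `SPLIT_COUNT` files, one subset per trainer.

```python
# Hypothetical sketch of the splitting step described above; it is NOT the
# real prepare.py. It shards the lines of a downloaded corpus round-robin
# into SPLIT_COUNT files so that each trainer can read its own subset.
import os

SPLIT_COUNT = 3  # set to the number of trainers (job parallelism)


def split_corpus(src_path, out_dir, split_count=SPLIT_COUNT):
    """Distribute the lines of src_path round-robin into split_count files."""
    if not os.path.isdir(out_dir):
        os.makedirs(out_dir)
    outs = [open(os.path.join(out_dir, "train-%05d" % i), "w")
            for i in range(split_count)]
    try:
        with open(src_path) as src:
            for line_no, line in enumerate(src):
                outs[line_no % split_count].write(line)
    finally:
        for f in outs:
            f.close()


if __name__ == "__main__":
    # e.g. a locally downloaded copy of the imikolov training text
    split_corpus("train.txt", "./train_data_dir")
```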
