
Commit a505af4

Merge branch 'develop' of https://github.com/PaddlePaddle/docs into feature/my-cool-stuff
2 parents: 6f61cb8 + 025be1d

29 files changed: +325 −252 lines
Lines changed: 58 additions & 0 deletions
@@ -0,0 +1,58 @@
name: Comment Preview URLs

on:
  workflow_run:
    workflows: ["Generate Preview URLs"]
    types:
      - completed

jobs:
  comment:
    name: Post Preview URLs Comment
    runs-on: ubuntu-latest
    if: >
      github.event.workflow_run.event == 'pull_request' &&
      github.event.workflow_run.conclusion == 'success'
    permissions:
      pull-requests: write

    steps:
      - name: Download artifacts
        uses: actions/download-artifact@v4
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          run-id: ${{ github.event.workflow_run.id }}

      - name: Read PR metadata
        id: pr-metadata
        run: |
          PR_NUMBER=$(find . -name "pr_number.txt" -exec cat {} \;)
          PR_SHA=$(find . -name "pr_sha.txt" -exec cat {} \;)
          echo "pr_number=$PR_NUMBER" >> $GITHUB_OUTPUT
          echo "pr_sha=$PR_SHA" >> $GITHUB_OUTPUT

      - name: Read preview URLs
        id: preview-urls
        run: |
          PREVIEW_CONTENT=$(find . -name "preview_urls.txt" -exec cat {} \;)
          {
            echo 'content<<EOF'
            echo "$PREVIEW_CONTENT"
            echo EOF
          } >> $GITHUB_OUTPUT

      - name: Find existing comment
        uses: peter-evans/find-comment@v4
        id: fc
        with:
          issue-number: ${{ steps.pr-metadata.outputs.pr_number }}
          comment-author: 'github-actions[bot]'
          body-includes: '本次 PR 文档预览链接'

      - name: Create or update comment
        uses: peter-evans/create-or-update-comment@v4
        with:
          comment-id: ${{ steps.fc.outputs.comment-id }}
          issue-number: ${{ steps.pr-metadata.outputs.pr_number }}
          body: ${{ steps.preview-urls.outputs.content }}
          edit-mode: replace
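The "Read PR metadata" step uses `find . -name "pr_number.txt" -exec cat {} \;` rather than a fixed path because `actions/download-artifact@v4` unpacks each artifact into its own subdirectory, so the exact location is not known in advance. A minimal Python sketch of the same recursive lookup, using a hypothetical artifact directory layout:

```python
import tempfile
from pathlib import Path

# Hypothetical layout: download-artifact@v4 places each artifact in its
# own subdirectory, so we search recursively instead of hardcoding a path.
root = Path(tempfile.mkdtemp())
(root / "pr-metadata-101").mkdir()
(root / "pr-metadata-101" / "pr_number.txt").write_text("101\n")

def read_artifact_file(root: Path, name: str) -> str:
    # Equivalent of: find . -name "<name>" -exec cat {} \;
    return "".join(p.read_text() for p in root.rglob(name))

pr_number = read_artifact_file(root, "pr_number.txt").strip()
print(pr_number)
```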
Lines changed: 52 additions & 0 deletions
@@ -0,0 +1,52 @@
name: Generate Preview URLs

on:
  pull_request:
    branches: ["develop"]
    paths:
      - 'docs/**.rst'
      - 'docs/**.md'

jobs:
  generate-urls:
    name: Generate Preview URLs
    runs-on: ubuntu-latest
    permissions:
      contents: read

    steps:
      - name: Checkout PR branch
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Fetch base branch
        run: |
          git fetch origin develop:develop

      - name: Generate preview URLs
        id: generate
        run: |
          chmod +x ci_scripts/report_preview_url.sh
          ./ci_scripts/report_preview_url.sh ${{ github.event.pull_request.number }} > preview_urls.txt

      - name: Upload preview URLs as artifact
        uses: actions/upload-artifact@v4
        with:
          name: preview-urls-${{ github.event.pull_request.number }}
          path: preview_urls.txt
          retention-days: 1

      - name: Save PR metadata
        run: |
          echo "${{ github.event.pull_request.number }}" > pr_number.txt
          echo "${{ github.event.pull_request.head.sha }}" > pr_sha.txt

      - name: Upload PR metadata
        uses: actions/upload-artifact@v4
        with:
          name: pr-metadata-${{ github.event.pull_request.number }}
          path: |
            pr_number.txt
            pr_sha.txt
          retention-days: 1
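This workflow only fires when a pull request touches files matching the `paths:` patterns `'docs/**.rst'` and `'docs/**.md'`. A rough Python approximation of that filter (prefix/suffix checks, not GitHub's exact glob semantics; the file list is hypothetical):

```python
# Rough approximation of the workflow's `paths:` trigger filter:
# GitHub's ** glob matches across directory separators, which a
# prefix + suffix check captures well enough for illustration.
def triggers_preview(changed_file: str) -> bool:
    return changed_file.startswith("docs/") and changed_file.endswith((".rst", ".md"))

changed = ["docs/guides/intro.rst", "docs/images/logo.png", "README.md"]
print([f for f in changed if triggers_preview(f)])
```

Note that `README.md` does not trigger the workflow despite its extension, because it lives outside `docs/`.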

ci_scripts/check_api_label_cn.py

Lines changed: 6 additions & 6 deletions
@@ -23,7 +23,7 @@
 # check file's api_label
 def check_api_label(rootdir, file):
     real_file = Path(rootdir) / file
-    with open(real_file, "r", encoding="utf-8") as f:
+    with real_file.open("r", encoding="utf-8") as f:
         first_line = f.readline().strip()
     return first_line == generate_en_label_by_path(file)

@@ -53,13 +53,13 @@ def find_all_api_labels_in_dir(rootdir):
 # api_labels in a file
 def find_api_labels_in_one_file(file_path):
     api_labels_in_one_file = []
-    with open(file_path, "r", encoding="utf-8") as f:
+    with file_path.open("r", encoding="utf-8") as f:
         lines = f.readlines()
     for line in lines:
-        line = re.search(".. _([a-zA-Z0-9_]+)", line)
-        if not line:
+        match = re.search(".. _([a-zA-Z0-9_]+)", line)
+        if not match:
             continue
-        api_labels_in_one_file.append(line.group(1))
+        api_labels_in_one_file.append(match.group(1))
     return api_labels_in_one_file


@@ -84,7 +84,7 @@ def run_cn_api_label_checking(rootdir, files):
     for file in files:
         if not file.endswith(".rst"):
             continue
-        with open(Path(rootdir) / file, "r", encoding="utf-8") as f:
+        with (Path(rootdir) / file).open("r", encoding="utf-8") as f:
             pattern = f.read()
         matches = re.findall(r":ref:`([^`]+)`", pattern)
         for match in matches:
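Besides switching to `Path.open`, the diff renames the loop variable reassignment `line = re.search(...)` to `match`, which stops the match object from shadowing the line being scanned. The core label-extraction logic can be sketched as follows (sample input is hypothetical; the dots are escaped here, whereas the script's pattern leaves them as wildcards):

```python
import re

# Sketch of find_api_labels_in_one_file's core: collect reST anchor
# labels such as ".. _cn_api_paddle_abs:" from a document's lines.
def extract_api_labels(lines):
    labels = []
    for line in lines:
        match = re.search(r"\.\. _([a-zA-Z0-9_]+)", line)
        if not match:
            continue
        labels.append(match.group(1))
    return labels

sample = [".. _cn_api_paddle_abs:", "", "abs", "-------------------"]
print(extract_api_labels(sample))  # ['cn_api_paddle_abs']
```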

ci_scripts/check_api_label_cn.sh

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ if [ -z ${BRANCH} ]; then
     BRANCH="develop"
 fi

-all_git_files=`git diff --name-only --diff-filter=ACMR upstream/${BRANCH} | sed 's#docs/##g'`
+all_git_files=`git diff --name-only --diff-filter=ACMR upstream/${BRANCH} | sed 's#^docs/##'`
 echo $all_git_files
 echo "Run API_LABEL Checking"
 python check_api_label_cn.py ${DOCROOT} ${APIROOT} $all_git_files
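The sed change replaces the global substitution `s#docs/##g`, which deletes every `docs/` segment anywhere in a path, with the anchored `s#^docs/##`, which strips only the leading prefix. The difference, sketched with Python's `re.sub` on a hypothetical path:

```python
import re

path = "docs/guides/docs_style/docs/layout.rst"

# Old global pattern: removes *every* "docs/" occurrence, corrupting
# any path that contains a nested "docs/" directory.
old = re.sub(r"docs/", "", path)

# New anchored pattern: strips only the leading "docs/" prefix.
new = re.sub(r"^docs/", "", path)

print(old)  # guides/docs_style/layout.rst   <- inner segment wrongly removed
print(new)  # guides/docs_style/docs/layout.rst
```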

ci_scripts/report_preview_url.sh

Lines changed: 42 additions & 0 deletions
@@ -0,0 +1,42 @@
#!/bin/bash

pr_id="$1"

if [ -z "$pr_id" ]; then
    echo "Error: Pull Request ID is not provided."
    exit 1
fi

generate_preview_url() {
    local file_path="$1"
    local pr_id="$2"
    local path_no_ext="${file_path%.*}"
    local base_url="http://preview-pr-${pr_id}.paddle-docs-preview.paddlepaddle.org.cn/documentation/docs/zh/"
    local final_url="${base_url}${path_no_ext}.html"
    echo "$final_url"
}

mapfile -t all_git_files < <(git diff --name-only --diff-filter=ACMR develop | sed 's#^docs/##')

output_lines=()

for file in "${all_git_files[@]}"; do
    if [[ "$file" == *.rst || "$file" == *.md ]]; then
        url=$(generate_preview_url "$file" "$pr_id")
        output_lines+=("- \`docs/${file}\`: [点击预览](${url})")
    fi
done


if [ ${#output_lines[@]} -gt 0 ]; then
    cat <<-EOF
<details>
<summary>📚 本次 PR 文档预览链接 (点击展开)</summary>

以下是本次 PR 中变更文档的预览链接:

$(printf '%s\n' "${output_lines[@]}")

</details>
EOF
fi
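`generate_preview_url` drops the file extension with the shell's `${file_path%.*}` and appends `.html` to the per-PR preview host. The same transformation in Python, using the base URL from the script (the sample doc path is hypothetical):

```python
# Mirror of generate_preview_url in report_preview_url.sh:
# strip the last extension (shell's ${file_path%.*}) and append
# ".html" to the per-PR preview host.
def generate_preview_url(file_path: str, pr_id: int) -> str:
    path_no_ext = file_path.rsplit(".", 1)[0]
    base_url = (
        f"http://preview-pr-{pr_id}"
        ".paddle-docs-preview.paddlepaddle.org.cn/documentation/docs/zh/"
    )
    return f"{base_url}{path_no_ext}.html"

print(generate_preview_url("guides/beginner/tensor_cn.md", 123))
```

`rsplit(".", 1)` matches the shell's shortest-suffix removal whenever the path contains a dot, which the workflow's `.rst`/`.md` filter guarantees.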

docs/api/paddle/distributed/fleet/DistributedStrategy_cn.rst

Lines changed: 2 additions & 2 deletions
@@ -232,7 +232,7 @@ COPY-FROM: paddle.distributed.fleet.DistributedStrategy.amp_configs
 dgc
 '''''''''

-是否启用深度梯度压缩训练。更多信息请参考[Deep Gradient Compression](https://arxiv.org/abs/1712.01887)。默认值:False
+是否启用深度梯度压缩训练。更多信息请参考 `Deep Gradient Compression <https://arxiv.org/abs/1712.01887>`_ 。默认值:False

 **代码示例**

@@ -267,7 +267,7 @@ COPY-FROM: paddle.distributed.fleet.DistributedStrategy.fp16_allreduce
 sharding
 '''''''''

-是否开启 sharding 策略。sharding 实现了[ZeRO: Memory Optimizations Toward Training Trillion Parameter Models](https://arxiv.org/abs/1910.02054)
+是否开启 sharding 策略。sharding 实现了 `ZeRO: Memory Optimizations Toward Training Trillion Parameter Models <https://arxiv.org/abs/1910.02054>`_
 中 ZeRO-DP 类似的功能,其通过将模型的参数和优化器状态在 ranks 间分片来支持更大模型的训练。

 目前在混合并行(Hybrid parallelism) 模式下,sharding config 作为混合并行设置的统一入口来设置混合并行相关参数。

docs/api/paddle/distributed/fleet/UtilBase_cn.rst

Lines changed: 9 additions & 10 deletions
@@ -9,43 +9,43 @@ UtilBase
 方法
 ::::::::::::
 all_reduce(input, mode="sum", comm_world="worker")
-'''''''''
+''''''''''''''''''''''''''''''''''''''''''''''''''
 在指定的通信集合间进行归约操作,并将归约结果返回给集合中每个实例。

 **参数**

 - **input** (list|tuple|numpy.array) – 归约操作的输入。
 - **mode** (str) - 归约操作的模式,包含求和,取最大值和取最小值,默认为求和归约。
-- **comm_world** (str) - 归约操作的通信集合,包含:server 集合(``server``),worker 集合(``worker``)及所有节点集合(``all``),默认为 worker 集合。
+- **comm_world** (str) - 归约操作的通信集合,包含:server 集合 (``server``),worker 集合 (``worker``) 及所有节点集合 (``all``),默认为 worker 集合。

 **返回**

-Numpy.array|None:一个和``input``形状一致的 numpy 数组或 None。
+Numpy.array|None:一个和 ``input`` 形状一致的 numpy 数组或 None。

 **代码示例**

 COPY-FROM: paddle.distributed.fleet.UtilBase.all_reduce

 barrier(comm_world="worker")
-'''''''''
+''''''''''''''''''''''''''''
 在指定的通信集合间进行阻塞操作,以实现集合间进度同步。

 **参数**

-- **comm_world** (str) - 阻塞操作的通信集合,包含:server 集合(``server``),worker 集合(``worker``)及所有节点集合(``all``),默认为 worker 集合。
+- **comm_world** (str) - 阻塞操作的通信集合,包含:server 集合 (``server``),worker 集合 (``worker``) 及所有节点集合 (``all``),默认为 worker 集合。

 **代码示例**

 COPY-FROM: paddle.distributed.fleet.UtilBase.barrier

 all_gather(input, comm_world="worker")
-'''''''''
+''''''''''''''''''''''''''''''''''''''''
 在指定的通信集合间进行聚合操作,并将聚合的结果返回给集合中每个实例。

 **参数**

 - **input** (int|float) - 聚合操作的输入。
-- **comm_world** (str) - 聚合操作的通信集合,包含:server 集合(``server``),worker 集合(``worker``)及所有节点集合(``all``),默认为 worker 集合。
+- **comm_world** (str) - 聚合操作的通信集合,包含:server 集合 (``server``),worker 集合 (``worker``) 及所有节点集合 (``all``),默认为 worker 集合。

 **返回**

@@ -56,7 +56,7 @@ all_gather(input, comm_world="worker")
 COPY-FROM: paddle.distributed.fleet.UtilBase.all_gather

 get_file_shard(files)
-'''''''''
+'''''''''''''''''''''
 在数据并行的分布式训练中,获取属于当前训练节点的文件列表。

 .. code-block:: text
@@ -77,8 +77,7 @@ get_file_shard(files)
 COPY-FROM: paddle.distributed.fleet.UtilBase.get_file_shard

 print_on_rank(message, rank_id)
-'''''''''
-
+'''''''''''''''''''''''''''''''''
 在编号为 `rank_id` 的节点上打印指定信息。

 **参数**

docs/api/paddle/distributed/utils/global_gather_cn.rst

Lines changed: 0 additions & 50 deletions
This file was deleted.

docs/api/paddle/distributed/utils/global_scatter_cn.rst

Lines changed: 0 additions & 55 deletions
This file was deleted.
