Commit b33ec56

Borda authored and lantiga committed

docs: include external pages (#17826)

* pull docs
* local
* [pre-commit.ci] auto fixes from pre-commit.com hooks

  for more information, see https://pre-commit.ci
* ...
* [pre-commit.ci] auto fixes from pre-commit.com hooks

  for more information, see https://pre-commit.ci
* replace
* strategies
* 1.0.0
* skip
* links
* more

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
(cherry picked from commit 6b0ec10)

1 parent 7a21166 · commit b33ec56
File tree

16 files changed: +101 −255 lines changed

.actions/assistant.py

Lines changed: 37 additions & 0 deletions
@@ -12,6 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 import glob
+import logging
 import os
 import pathlib
 import re

@@ -421,6 +422,42 @@ def copy_replace_imports(
             source_dir, source_imports, target_imports, target_dir=target_dir, lightning_by=lightning_by
         )

+    @staticmethod
+    def pull_docs_files(
+        gh_user_repo: str,
+        target_dir: str = "docs/source-pytorch/XXX",
+        checkout: str = "tags/1.0.0",
+        source_dir: str = "docs/source",
+    ) -> None:
+        """Pull docs pages from external source and append to local docs."""
+        import zipfile
+
+        zip_url = f"https://github.com/{gh_user_repo}/archive/refs/{checkout}.zip"
+        with tempfile.TemporaryDirectory() as tmp:
+            zip_file = os.path.join(tmp, "repo.zip")
+            urllib.request.urlretrieve(zip_url, zip_file)
+            with zipfile.ZipFile(zip_file, "r") as zip_ref:
+                zip_ref.extractall(tmp)
+            zip_dirs = [d for d in glob.glob(os.path.join(tmp, "*")) if os.path.isdir(d)]
+            # check that the extracted archive has only the repo folder
+            assert len(zip_dirs) == 1
+            repo_dir = zip_dirs[0]
+
+            ls_pages = glob.glob(os.path.join(repo_dir, source_dir, "*.rst"))
+            ls_pages += glob.glob(os.path.join(repo_dir, source_dir, "**", "*.rst"))
+            for rst in ls_pages:
+                rel_rst = rst.replace(os.path.join(repo_dir, source_dir) + os.path.sep, "")
+                rel_dir = os.path.dirname(rel_rst)
+                os.makedirs(os.path.join(_PROJECT_ROOT, target_dir, rel_dir), exist_ok=True)
+                new_rst = os.path.join(_PROJECT_ROOT, target_dir, rel_rst)
+                if os.path.isfile(new_rst):
+                    logging.warning(f"Page {new_rst} already exists in the local tree so it will be skipped.")
+                    continue
+                shutil.copy(rst, new_rst)

 if __name__ == "__main__":
     import jsonargparse
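The core of `pull_docs_files` is its collect-and-copy loop: find every `.rst` page under the pulled repo's `source_dir`, mirror its relative path under `target_dir`, and skip any page that already exists locally. A minimal standalone sketch of that logic follows; the function name `copy_new_pages` is illustrative, not part of the commit, and it uses `glob` with `recursive=True` so `**` descends arbitrarily deep, whereas the helper above issues two patterns and so only reaches one level of subdirectories:

```python
import glob
import os
import shutil


def copy_new_pages(source_root: str, target_root: str) -> list:
    """Copy .rst pages from source_root into target_root, preserving the
    relative directory layout and skipping pages that already exist locally."""
    # recursive=True lets "**" match any number of directory levels
    # (including zero); without it, "**" behaves like a single "*".
    pages = glob.glob(os.path.join(source_root, "**", "*.rst"), recursive=True)
    copied = []
    for src in sorted(pages):
        rel = os.path.relpath(src, source_root)
        dst = os.path.join(target_root, rel)
        if os.path.isfile(dst):
            continue  # the local page wins; the pulled copy is skipped
        os.makedirs(os.path.dirname(dst) or ".", exist_ok=True)
        shutil.copy(src, dst)
        copied.append(rel)
    return copied
```

Skipping existing files means a locally maintained page always shadows the pulled one, which is why the commit can mix pulled and hand-written pages in the same target tree.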

docs/source-pytorch/accelerators/hpu_basic.rst

Lines changed: 0 additions & 109 deletions
This file was deleted.

docs/source-pytorch/accelerators/hpu_intermediate.rst

Lines changed: 0 additions & 101 deletions
This file was deleted.

docs/source-pytorch/advanced/model_parallel.rst

Lines changed: 3 additions & 3 deletions
@@ -58,11 +58,11 @@ Cutting-edge and third-party Strategies

 Cutting-edge Lightning strategies are being developed by third-parties outside of Lightning.

-If you want to try some of the latest and greatest features for model-parallel training, check out the :doc:`Colossal-AI Strategy <./third_party/colossalai>` integration.
+If you want to try some of the latest and greatest features for model-parallel training, check out the :doc:`Colossal-AI Strategy <../integrations/strategies/colossalai>` integration.

-Another integration is :doc:`Bagua Strategy <./third_party/bagua>`, deep learning training acceleration framework for PyTorch, with advanced distributed training algorithms and system optimizations.
+Another integration is :doc:`Bagua Strategy <../integrations/strategies/bagua>`, deep learning training acceleration framework for PyTorch, with advanced distributed training algorithms and system optimizations.

-For training on unreliable mixed GPUs across the internet check out the :doc:`Hivemind Strategy <./third_party/hivemind>` integration.
+For training on unreliable mixed GPUs across the internet check out the :doc:`Hivemind Strategy <../integrations/strategies/hivemind>` integration.

 ----
docs/source-pytorch/common/index.rst

Lines changed: 2 additions & 2 deletions
@@ -18,7 +18,7 @@
    Save memory with half-precision <precision>
    ../advanced/model_parallel
    Train on single or multiple GPUs <../accelerators/gpu>
-   Train on single or multiple HPUs <../accelerators/hpu>
+   Train on single or multiple HPUs <../integrations/hpu/index>
    Train on single or multiple IPUs <../accelerators/ipu>
    Train on single or multiple TPUs <../accelerators/tpu>
    Train on MPS <../accelerators/mps>

@@ -150,7 +150,7 @@ How-to Guides
 .. displayitem::
    :header: Train on single or multiple HPUs
    :description: Train models faster with HPU accelerators
-   :button_link: ../accelerators/hpu.html
+   :button_link: ../integrations/hpu/index.html
    :col_css: col-md-4
    :height: 180

docs/source-pytorch/common_usecases.rst

Lines changed: 1 addition & 1 deletion
@@ -123,7 +123,7 @@ Customize and extend Lightning for things like custom hardware or distributed st
    :header: Train on single or multiple HPUs
    :description: Train models faster with HPUs.
    :col_css: col-md-12
-   :button_link: accelerators/hpu.html
+   :button_link: integrations/hpu/index.html
    :height: 100

 .. displayitem::
