Commit 739cf13

Remove Support For Deprecated Habana (#21327)
* remove habana
* changelog
* remove from install
* update docs
* update
* update
* pytest match
1 parent: a883890

File tree

26 files changed (+17, -471 lines)


.github/workflows/ci-tests-pytorch.yml

Lines changed: 0 additions & 1 deletion

@@ -133,7 +133,6 @@ jobs:
       run: |
         uv pip install ".[${EXTRA_PREFIX}extra,${EXTRA_PREFIX}test,${EXTRA_PREFIX}strategies]" \
           --upgrade \
-          -r requirements/_integrations/accelerators.txt \
           --find-links="${TORCH_URL}" \
           --find-links="https://download.pytorch.org/whl/torch-tensorrt"
         uv pip list

docs/source-pytorch/common/index.rst

Lines changed: 0 additions & 8 deletions

@@ -16,7 +16,6 @@
     Save memory with half-precision <precision>
     ../advanced/model_parallel
     Train on single or multiple GPUs <../accelerators/gpu>
-    Train on single or multiple HPUs <../integrations/hpu/index>
     Train on single or multiple TPUs <../accelerators/tpu>
     Train on MPS <../accelerators/mps>
     Use a pretrained model <../advanced/pretrained>
@@ -161,13 +160,6 @@ How-to Guides
    :col_css: col-md-4
    :height: 180

-.. displayitem::
-   :header: Train on single or multiple HPUs
-   :description: Train models faster with HPU accelerators
-   :button_link: ../integrations/hpu/index.html
-   :col_css: col-md-4
-   :height: 180
-
 .. displayitem::
    :header: Train on single or multiple TPUs
    :description: TTrain models faster with TPU accelerators

docs/source-pytorch/common_usecases.rst

Lines changed: 0 additions & 7 deletions

@@ -126,13 +126,6 @@ Customize and extend Lightning for things like custom hardware or distributed st
    :button_link: accelerators/gpu.html
    :height: 100

-.. displayitem::
-   :header: Train on single or multiple HPUs
-   :description: Train models faster with HPUs.
-   :col_css: col-md-12
-   :button_link: integrations/hpu/index.html
-   :height: 100
-
 .. displayitem::
    :header: Train on single or multiple TPUs
    :description: Train models faster with TPUs.

docs/source-pytorch/conf.py

Lines changed: 0 additions & 20 deletions

@@ -86,20 +86,6 @@ def _load_py_module(name: str, location: str) -> ModuleType:
     os.path.join(_PATH_HERE, _FOLDER_GENERATED, "CHANGELOG.md"),
 )

-# Copy Accelerator docs
-assist_local.AssistantCLI.pull_docs_files(
-    gh_user_repo="Lightning-AI/lightning-Habana",
-    target_dir="docs/source-pytorch/integrations/hpu",
-    # checkout="refs/tags/1.6.0",
-    checkout="5549fa927d5501d31aac0c9b2ed479be62a02cbc",
-)
-# the HPU also need some images
-URL_RAW_DOCS_HABANA = "https://raw.githubusercontent.com/Lightning-AI/lightning-Habana/1.5.0/docs/source"
-for img in ["_images/HPUProfiler.png", "_images/IGP.png"]:
-    img_ = os.path.join(_PATH_HERE, "integrations", "hpu", img)
-    os.makedirs(os.path.dirname(img_), exist_ok=True)
-    urllib.request.urlretrieve(f"{URL_RAW_DOCS_HABANA}/{img}", img_)
-
 # Copy strategies docs as single pages
 assist_local.AssistantCLI.pull_docs_files(
     gh_user_repo="Lightning-Universe/lightning-Hivemind",
@@ -360,7 +346,6 @@ def _load_py_module(name: str, location: str) -> ModuleType:
     "numpy": ("https://numpy.org/doc/stable/", None),
     "PIL": ("https://pillow.readthedocs.io/en/stable/", None),
     "torchmetrics": ("https://lightning.ai/docs/torchmetrics/stable/", None),
-    "lightning_habana": ("https://lightning-ai.github.io/lightning-Habana/", None),
     "tensorboardX": ("https://tensorboardx.readthedocs.io/en/stable/", None),
     # needed for referencing Fabric from lightning scope
     "lightning.fabric": ("https://lightning.ai/docs/fabric/stable/", None),
@@ -468,10 +453,6 @@ def _load_py_module(name: str, location: str) -> ModuleType:
     ("py:class", "lightning.pytorch.utilities.types.LRSchedulerConfigType"),
     ("py:class", "lightning.pytorch.utilities.types.OptimizerConfig"),
     ("py:class", "lightning.pytorch.utilities.types.OptimizerLRSchedulerConfig"),
-    ("py:class", "lightning_habana.pytorch.plugins.precision.HPUPrecisionPlugin"),
-    ("py:class", "lightning_habana.pytorch.strategies.HPUDDPStrategy"),
-    ("py:class", "lightning_habana.pytorch.strategies.HPUParallelStrategy"),
-    ("py:class", "lightning_habana.pytorch.strategies.SingleHPUStrategy"),
     ("py:obj", "logger.experiment"),
     ("py:class", "mlflow.tracking.MlflowClient"),
     ("py:attr", "model"),
@@ -648,7 +629,6 @@ def package_list_from_file(file):
     r"^../common/trainer.html#trainer-flags$",
     "https://medium.com/pytorch-lightning/quick-contribution-guide-86d977171b3a",
     "https://deepgenerativemodels.github.io/assets/slides/cs236_lecture11.pdf",
-    "https://developer.habana.ai",  # returns 403 error but redirects to intel.com documentation
     "https://www.supermicro.com",  # returns 403 error
     "https://www.intel.com/content/www/us/en/products/docs/processors/what-is-a-gpu.html",
     "https://www.microsoft.com/en-us/research/blog/zero-infinity-and-deepspeed-unlocking-unprecedented-model-scale-for-deep-learning-training/",  # noqa: E501
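The deleted conf.py block follows a common docs-build pattern: pull an integration's docs from an external repo at build time, then fetch a few supporting images over HTTP. A minimal sketch of the image-fetch half is below; the base URL and image names come from the diff above, while the helper name `hpu_image_targets` is hypothetical. The actual `urllib.request.urlretrieve` download step is omitted so the sketch runs offline.

```python
import os

# Base URL and image list mirror the code removed from conf.py.
URL_RAW_DOCS_HABANA = "https://raw.githubusercontent.com/Lightning-AI/lightning-Habana/1.5.0/docs/source"
HPU_IMAGES = ["_images/HPUProfiler.png", "_images/IGP.png"]


def hpu_image_targets(path_here, images):
    """Build (remote_url, local_path) pairs for doc images.

    The real build loop then creates the parent directory and downloads
    each remote_url to local_path with urllib.request.urlretrieve.
    """
    pairs = []
    for img in images:
        local = os.path.join(path_here, "integrations", "hpu", img)
        pairs.append((f"{URL_RAW_DOCS_HABANA}/{img}", local))
    return pairs


for url, local in hpu_image_targets("docs/source-pytorch", HPU_IMAGES):
    print(url, "->", local)
```

Keeping URL construction separate from the download side effect, as sketched here, also makes this kind of build step easy to test without network access.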

docs/source-pytorch/expertise_levels.rst

Lines changed: 2 additions & 10 deletions

@@ -190,23 +190,15 @@ Configure all aspects of Lightning for advanced usecases.
    :tag: advanced

 .. displayitem::
-   :header: Level 18: Explore HPUs
-   :description: Explore Havana Gaudi Processing Unit (HPU) for model scaling.
-   :col_css: col-md-6
-   :button_link: levels/advanced_level_19.html
-   :height: 150
-   :tag: advanced
-
-.. displayitem::
-   :header: Level 19: Master TPUs
+   :header: Level 18: Master TPUs
    :description: Master TPUs and run on cloud TPUs.
    :col_css: col-md-6
    :button_link: levels/advanced_level_20.html
    :height: 150
    :tag: advanced

 .. displayitem::
-   :header: Level 20: Train models with billions of parameters
+   :header: Level 19: Train models with billions of parameters
    :description: Scale GPU training to models with billions of parameters
    :col_css: col-md-6
    :button_link: levels/advanced_level_21.html

docs/source-pytorch/extensions/accelerator.rst

Lines changed: 0 additions & 1 deletion

@@ -10,7 +10,6 @@ Currently there are accelerators for:
 - CPU
 - :doc:`GPU <../accelerators/gpu>`
 - :doc:`TPU <../accelerators/tpu>`
-- :doc:`HPU <../integrations/hpu/index>`
 - :doc:`MPS <../accelerators/mps>`

 The Accelerator is part of the Strategy which manages communication across multiple devices (distributed communication).
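The context line above notes that the Accelerator is owned by the Strategy, which handles device communication. A toy sketch of that composition (these are stand-in classes invented for illustration, not Lightning's actual API):

```python
# Toy sketch of the Accelerator-inside-Strategy composition described above.
# Class names are illustrative placeholders, not Lightning's real classes.
class ToyCPUAccelerator:
    def setup_device(self) -> str:
        # A real accelerator would initialize and return the device handle.
        return "cpu"


class ToySingleDeviceStrategy:
    def __init__(self, accelerator):
        # The strategy owns the accelerator; in a multi-device setting it
        # would also own the distributed-communication logic.
        self.accelerator = accelerator

    def root_device(self) -> str:
        return self.accelerator.setup_device()


strategy = ToySingleDeviceStrategy(ToyCPUAccelerator())
print(strategy.root_device())  # prints: cpu
```

This separation is why removing an accelerator (HPU here) also means removing its matching strategies: the strategy cannot exist without the device layer it wraps.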

docs/source-pytorch/extensions/strategy.rst

Lines changed: 0 additions & 6 deletions

@@ -78,12 +78,6 @@ The below table lists all relevant strategies available in Lightning with their
    * - deepspeed
      - :class:`~lightning.pytorch.strategies.DeepSpeedStrategy`
      - Provides capabilities to run training using the DeepSpeed library, with training optimizations for large billion parameter models. :doc:`Learn more. <../advanced/model_parallel/deepspeed>`
-   * - hpu_parallel
-     - ``HPUParallelStrategy``
-     - Strategy for distributed training on multiple HPU devices. :doc:`Learn more. <../integrations/hpu/index>`
-   * - hpu_single
-     - ``SingleHPUStrategy``
-     - Strategy for training on a single HPU device. :doc:`Learn more. <../integrations/hpu/index>`
    * - xla
      - :class:`~lightning.pytorch.strategies.XLAStrategy`
      - Strategy for training on multiple TPU devices using the :func:`torch_xla.distributed.xla_multiprocessing.spawn` method. :doc:`Learn more. <../accelerators/tpu>`
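The alias column in the table above maps a short string (such as the removed ``hpu_parallel`` and ``hpu_single``) to a strategy class, which is why deleting the table rows goes together with deleting the classes. A dict-based sketch of such an alias registry, for illustration only (the classes are empty stand-ins and this is not Lightning's actual registry implementation):

```python
# Minimal alias -> class registry sketch; stand-in classes for illustration.
class DeepSpeedStrategy: ...
class XLAStrategy: ...


STRATEGY_REGISTRY = {
    "deepspeed": DeepSpeedStrategy,
    "xla": XLAStrategy,
    # "hpu_parallel" / "hpu_single" would no longer be registered
    # after a removal like the one in this commit.
}


def resolve_strategy(alias: str):
    """Look up an alias and instantiate the matching strategy class."""
    try:
        return STRATEGY_REGISTRY[alias]()
    except KeyError:
        raise ValueError(f"Unknown strategy alias: {alias!r}") from None


print(type(resolve_strategy("deepspeed")).__name__)  # prints: DeepSpeedStrategy
```

With this shape, requesting a removed alias fails fast with a clear error instead of silently falling back to another strategy.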

docs/source-pytorch/glossary/index.rst

Lines changed: 0 additions & 8 deletions

@@ -21,7 +21,6 @@
    GPU <../accelerators/gpu>
    Half precision <../common/precision>
    Hooks <../common/hooks>
-   HPU <../integrations/hpu/index>
    Inference <../deploy/production_intermediate>
    Lightning CLI <../cli/lightning_cli>
    LightningDataModule <../data/datamodule>
@@ -187,13 +186,6 @@ Glossary
    :button_link: ../common/hooks.html
    :height: 100

-.. displayitem::
-   :header: HPU
-   :description: Habana Gaudi AI Processor Unit for faster training
-   :col_css: col-md-12
-   :button_link: ../integrations/hpu/index.html
-   :height: 100
-
 .. displayitem::
    :header: Inference
    :description: Making predictions by applying a trained model to unlabeled examples

docs/source-pytorch/integrations/hpu/index.rst

Lines changed: 0 additions & 40 deletions
This file was deleted.

docs/source-pytorch/levels/advanced.rst

Lines changed: 2 additions & 10 deletions

@@ -46,23 +46,15 @@ Configure all aspects of Lightning for advanced usecases.
    :tag: advanced

 .. displayitem::
-   :header: Level 18: Explore HPUs
-   :description: Explore Habana Gaudi Processing Unit (HPU) for model scaling.
-   :col_css: col-md-6
-   :button_link: advanced_level_19.html
-   :height: 150
-   :tag: advanced
-
-.. displayitem::
-   :header: Level 19: Master TPUs
+   :header: Level 18: Master TPUs
    :description: Master TPUs and run on cloud TPUs.
    :col_css: col-md-6
    :button_link: advanced_level_20.html
    :height: 150
    :tag: advanced

 .. displayitem::
-   :header: Level 20: Train models with billions of parameters
+   :header: Level 19: Train models with billions of parameters
    :description: Scale GPU training to models with billions of parameters
    :col_css: col-md-6
    :button_link: advanced_level_21.html
