Commit ccfd120

Merge branch 'master' into save_on_train_epoch_end_default_behavior
2 parents 93be63a + 460c60c

File tree

3 files changed: +39 −3 lines


.github/CONTRIBUTING.md

Lines changed: 30 additions & 0 deletions

@@ -109,6 +109,36 @@

## Guidelines

### Development environment

To set up a local development environment, we recommend using `uv`, which can be installed by following its [installation instructions](https://docs.astral.sh/uv/getting-started/installation/).

Once `uv` is installed, begin by cloning the repository:

```bash
git clone https://github.com/Lightning-AI/lightning.git
cd lightning
```

From the root of the repository, create a new virtual environment and install the project dependencies:

```bash
uv venv
# uv venv --python 3.11  # use this instead if you need a specific Python version
source .venv/bin/activate  # the exact command may differ based on your shell
uv pip install ".[dev, examples]"
```

Once the dependencies are installed, install pre-commit and set up its git hook scripts:

```bash
uv pip install pre-commit
pre-commit install
```

For more details on the `uv` commands used here, refer to uv's documentation on its [pip interface](https://docs.astral.sh/uv/pip/).

### Developments scripts

To build the documentation locally, simply execute the following commands from project root (only for Unix):

src/lightning/pytorch/trainer/connectors/accelerator_connector.py

Lines changed: 4 additions & 3 deletions

```diff
@@ -453,10 +453,11 @@ def _check_strategy_and_fallback(self) -> None:
         if (
             strategy_flag in FSDPStrategy.get_registered_strategies() or type(self._strategy_flag) is FSDPStrategy
-        ) and self._accelerator_flag not in ("cuda", "gpu"):
+        ) and not (self._accelerator_flag in ("cuda", "gpu") or isinstance(self._accelerator_flag, CUDAAccelerator)):
             raise ValueError(
-                f"The strategy `{FSDPStrategy.strategy_name}` requires a GPU accelerator, but got:"
-                f" {self._accelerator_flag}"
+                f"The strategy `{FSDPStrategy.strategy_name}` requires a GPU accelerator, but received "
+                f"`accelerator={self._accelerator_flag!r}`. Please set `accelerator='cuda'`, `accelerator='gpu'`,"
+                " or pass a `CUDAAccelerator()` instance to use FSDP."
             )
         if strategy_flag in _DDP_FORK_ALIASES and "fork" not in torch.multiprocessing.get_all_start_methods():
             raise ValueError(
```
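The effect of the changed condition can be sketched as a standalone predicate: previously only the string flags `"cuda"`/`"gpu"` were accepted, so passing an accelerator instance triggered the error even though a GPU accelerator was supplied; the added `isinstance` branch accepts instances too. This is a minimal sketch with a dummy stand-in class, not Lightning's actual implementation.

```python
class CUDAAccelerator:
    """Dummy stand-in for lightning.pytorch.accelerators.CUDAAccelerator."""


def fsdp_accelerator_ok(accelerator_flag) -> bool:
    """Mirror the new check: accept the 'cuda'/'gpu' string flags
    or any CUDAAccelerator instance (including subclasses)."""
    return accelerator_flag in ("cuda", "gpu") or isinstance(accelerator_flag, CUDAAccelerator)
```

Under the old condition (`accelerator_flag not in ("cuda", "gpu")`), `fsdp_accelerator_ok(CUDAAccelerator())` would have been rejected; with the `isinstance` branch it is accepted.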

tests/tests_pytorch/trainer/connectors/test_accelerator_connector.py

Lines changed: 5 additions & 0 deletions

```diff
@@ -582,6 +582,11 @@ class AcceleratorSubclass(CPUAccelerator):
     Trainer(accelerator=AcceleratorSubclass(), strategy=FSDPStrategySubclass())


+@RunIf(min_cuda_gpus=1)
+def test_check_fsdp_strategy_and_fallback_with_cudaaccelerator():
+    Trainer(strategy="fsdp", accelerator=CUDAAccelerator())
+
+
 @mock.patch.dict(os.environ, {}, clear=True)
 def test_unsupported_tpu_choice(xla_available, tpu_available):
     # if user didn't set strategy, _Connector will choose the SingleDeviceXLAStrategy or XLAStrategy
```
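The rewritten error message relies on `!r` formatting of the accelerator flag so that strings render with quotes. A minimal sketch of how the message renders for a rejected flag, assuming `FSDPStrategy.strategy_name` is the string `"fsdp"` (its value in Lightning):

```python
# Reproduce the new error message formatting outside of Lightning.
strategy_name = "fsdp"        # assumed value of FSDPStrategy.strategy_name
accelerator_flag = "cpu"      # example of a flag the check rejects

message = (
    f"The strategy `{strategy_name}` requires a GPU accelerator, but received "
    f"`accelerator={accelerator_flag!r}`. Please set `accelerator='cuda'`, `accelerator='gpu'`,"
    " or pass a `CUDAAccelerator()` instance to use FSDP."
)
# The !r conversion yields `accelerator='cpu'`, making it unambiguous
# that the offending value was the string "cpu".
```

Compared with the old message ("... but got: cpu"), the repr plus the explicit list of accepted values tells the user both what was wrong and how to fix it.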
