
Commit 4a6ab85

Borda authored and lantiga committed
cleaning chlog & fix imports
1 parent 6d99729 commit 4a6ab85

File tree

5 files changed: +37 −14 lines changed


examples/fabric/build_your_own_trainer/trainer.py

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@
 from typing import Any, cast, Iterable, List, Literal, Optional, Tuple, Union

 import torch
-from lightning_utilities import apply_to_collection
+from lightning_utilities.core.apply_func import apply_to_collection
 from tqdm import tqdm

 import lightning as L
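
For context, `apply_to_collection` is re-exported from the top level of `lightning_utilities`, but importing it from its defining module `lightning_utilities.core.apply_func` makes the dependency explicit. A minimal sketch of what the helper does (the tensor collection below is illustrative, not taken from the diff):

```python
import torch
from lightning_utilities.core.apply_func import apply_to_collection

# apply_to_collection recursively walks a (possibly nested) collection and
# applies the given function to every element matching the given dtype,
# preserving the container structure.
batch = {"inputs": torch.randn(2, 3), "targets": [torch.zeros(2), torch.ones(2)]}
half_batch = apply_to_collection(batch, torch.Tensor, lambda t: t.half())
# half_batch has the same dict/list layout, with every tensor cast to float16.
```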

src/lightning/app/CHANGELOG.md

Lines changed: 14 additions & 0 deletions
@@ -4,6 +4,20 @@ All notable changes to this project will be documented in this file.

 The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

+## [UnReleased] - 2023-08-DD
+
+### Changed
+
+- Change top folder ([#18212](https://github.com/Lightning-AI/lightning/pull/18212))
+
+
+- Remove `_handle_is_headless` calls in app run loop ([#18362](https://github.com/Lightning-AI/lightning/pull/18362))
+
+
+### Fixed
+
+- Refactored path to root to prevent a circular import ([#18357](https://github.com/Lightning-AI/lightning/pull/18357))
+

 ## [2.0.7] - 2023-08-14

src/lightning/fabric/CHANGELOG.md

Lines changed: 3 additions & 3 deletions
@@ -22,6 +22,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Fixed issue where Fabric would not initialize the global rank, world size, and rank-zero-only rank after initialization and before launch ([#16966](https://github.com/Lightning-AI/lightning/pull/16966))


+- Fixed FSDP full-precision `param_dtype` training (`16-mixed`, `bf16-mixed` and `32-true` configurations) to avoid FSDP assertion errors with PyTorch < 2.0 ([#18278](https://github.com/Lightning-AI/lightning/pull/18278))
+
+
 ## [2.0.7] - 2023-08-14

 ### Changed
@@ -30,9 +33,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

 ### Fixed

-- Fixed FSDP full-precision `param_dtype` training (`16-mixed`, `bf16-mixed` and `32-true` configurations) to avoid FSDP assertion errors with PyTorch < 2.0 ([#18278](https://github.com/Lightning-AI/lightning/pull/18278))
-
-
 - Fixed issue where DDP subprocesses that used Hydra would set hydra's working directory to current directory ([#18145](https://github.com/Lightning-AI/lightning/pull/18145))
 - Fixed an issue that would prevent the user from setting the multiprocessing start method after importing lightning ([#18177](https://github.com/Lightning-AI/lightning/pull/18177))
 - Fixed an issue with `Fabric.all_reduce()` not performing an inplace operation for all backends consistently ([#18235](https://github.com/Lightning-AI/lightning/pull/18235))
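
The FSDP entry above refers to Fabric's precision flags. As a hedged sketch of where those configurations appear (the accelerator and device count are illustrative, not from the commit):

```python
import lightning as L

# "16-mixed", "bf16-mixed", and "32-true" are the precision settings named in
# the changelog entry; under strategy="fsdp" they determine the param_dtype
# that FSDP's mixed-precision configuration receives.
fabric = L.Fabric(accelerator="cuda", devices=2, strategy="fsdp", precision="16-mixed")
fabric.launch()
```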

src/lightning/fabric/strategies/launchers/multiprocessing.py

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@
 import torch
 import torch.backends.cudnn
 import torch.multiprocessing as mp
-from lightning_utilities import apply_to_collection
+from lightning_utilities.core.apply_func import apply_to_collection
 from torch.nn import Module

 from lightning.fabric.accelerators.cpu import CPUAccelerator

src/lightning/pytorch/CHANGELOG.md

Lines changed: 18 additions & 9 deletions
@@ -12,12 +12,29 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - On XLA, avoid setting the global rank before processes have been launched as this will initialize the PJRT computation client in the main process ([#16966](https://github.com/Lightning-AI/lightning/pull/16966))


-- Fixed FSDP full-precision `param_dtype` training (`16-mixed`, `bf16-mixed` and `32-true` configurations) to avoid FSDP assertion errors with PyTorch < 2.0 ([#18278](https://github.com/Lightning-AI/lightning/pull/18278))
+- Fix inefficiency in rich progress bar ([#18369](https://github.com/Lightning-AI/lightning/pull/18369))
+
+
+### Fixed
+
+- Fixed FSDP full-precision `param_dtype` training (`16-mixed` and `bf16-mixed` configurations) to avoid FSDP assertion errors with PyTorch < 2.0 ([#18278](https://github.com/Lightning-AI/lightning/pull/18278))


 - Fixed an issue that prevented the use of custom logger classes without an `experiment` property defined ([#18093](https://github.com/Lightning-AI/lightning/pull/18093))


+- Fixed setting the tracking URI in `MLFlowLogger` for logging artifacts to the MLflow server ([#18395](https://github.com/Lightning-AI/lightning/pull/18395))
+
+
+- Fixed redundant `iter()` call to dataloader when checking dataloading configuration ([#18415](https://github.com/Lightning-AI/lightning/pull/18415))
+
+
+- Fixed model parameters getting shared between processes when running with `strategy="ddp_spawn"` and `accelerator="cpu"`; this has a necessary memory impact, as parameters are replicated for each process now ([#18238](https://github.com/Lightning-AI/lightning/pull/18238))
+
+
+- Properly manage `fetcher.done` with `dataloader_iter` ([#18376](https://github.com/Lightning-AI/lightning/pull/18376))
+
+
 ## [2.0.7] - 2023-08-14

 ### Added
@@ -46,14 +63,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Fixed an attribute error for `_FaultTolerantMode` when loading an old checkpoint that pickled the enum ([#18094](https://github.com/Lightning-AI/lightning/pull/18094))


-- Fixed setting the tracking URI in `MLFlowLogger` for logging artifacts to the MLflow server ([#18395](https://github.com/Lightning-AI/lightning/pull/18395))
-
-
-- Fixed redundant `iter()` call to dataloader when checking dataloading configuration ([#18415](https://github.com/Lightning-AI/lightning/pull/18415))
-
-- Fixed model parameters getting shared between processes when running with `strategy="ddp_spawn"` and `accelerator="cpu"`; this has a necessary memory impact, as parameters are replicated for each process now ([#18238](https://github.com/Lightning-AI/lightning/pull/18238))
-
-
 ## [2.0.5] - 2023-07-07

 ### Fixed
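
Two of the entries above are user-facing configuration points. A minimal sketch of both together (the experiment name, tracking URI, and device count are placeholder values, not from the commit):

```python
from lightning.pytorch import Trainer
from lightning.pytorch.loggers import MLFlowLogger

# The MLFlowLogger fix concerns the tracking_uri argument being honored when
# artifacts are uploaded to the MLflow server; the URI here is a placeholder.
logger = MLFlowLogger(experiment_name="demo", tracking_uri="http://localhost:5000")

# The ddp_spawn fix means each spawned CPU worker gets its own copy of the
# model parameters instead of sharing memory with the parent process.
trainer = Trainer(accelerator="cpu", devices=2, strategy="ddp_spawn", logger=logger)
```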
