
Commit e719d60

Borda authored and lexierule committed
v1.2.3 & test past ckpts
update version
1 parent b6bd04a commit e719d60

3 files changed: 4 additions, 26 deletions


CHANGELOG.md

Lines changed: 0 additions & 25 deletions
@@ -7,44 +7,19 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ## [1.2.3] - 2021-03-09
 
-### Added
-
-
-### Changed
-
 
 ### Fixed
 
 - Fixed `ModelPruning(make_pruning_permanent=True)` pruning buffers getting removed when saved during training ([#6073](https://github.com/PyTorchLightning/pytorch-lightning/pull/6073))
-
-
 - Fixed `_stable_1d_sort` to work when `n >= N` ([#6177](https://github.com/PyTorchLightning/pytorch-lightning/pull/6177))
-
-
 - Fixed `AttributeError` when `logger=None` on TPU ([#6221](https://github.com/PyTorchLightning/pytorch-lightning/pull/6221))
-
-
 - Fixed PyTorch Profiler with `emit_nvtx` ([#6260](https://github.com/PyTorchLightning/pytorch-lightning/pull/6260))
-
-
 - Fixed `trainer.test` from `best_path` hanging after calling `trainer.fit` ([#6272](https://github.com/PyTorchLightning/pytorch-lightning/pull/6272))
-
-
 - Fixed `SingleTPU` calling `all_gather` ([#6296](https://github.com/PyTorchLightning/pytorch-lightning/pull/6296))
-
-
 - Ensure we check deepspeed/sharded in multinode DDP ([#6297](https://github.com/PyTorchLightning/pytorch-lightning/pull/6297))
-
-
 - Check `LightningOptimizer` doesn't delete optimizer hooks ([#6305](https://github.com/PyTorchLightning/pytorch-lightning/pull/6305))
-
-
 - Resolve memory leak for evaluation ([#6326](https://github.com/PyTorchLightning/pytorch-lightning/pull/6326))
-
-
 - Ensure that clip gradients is only called if the value is greater than 0 ([#6330](https://github.com/PyTorchLightning/pytorch-lightning/pull/6330))
-
-
 - Fixed `Trainer` not resetting `lightning_optimizers` when calling `Trainer.fit()` multiple times ([#6372](https://github.com/PyTorchLightning/pytorch-lightning/pull/6372))
 
 
pytorch_lightning/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@
 import time
 
 _this_year = time.strftime("%Y")
-__version__ = '1.2.2'
+__version__ = '1.2.3'
 __author__ = 'William Falcon et al.'
 __author_email__ = '[email protected]'
 __license__ = 'Apache-2.0'
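
The only functional change here is the version bump. As a quick sanity check, a minimal sketch (assuming a build of this commit is installed in the current environment):

import pytorch_lightning as pl

# The module-level string edited above is what the package reports at runtime.
print(pl.__version__)  # expected to print '1.2.3' for a build of this commit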

tests/checkpointing/test_legacy_checkpoints.py

Lines changed: 3 additions & 0 deletions
@@ -52,6 +52,9 @@
         "1.1.6",
         "1.1.7",
         "1.1.8",
+        "1.2.0",
+        "1.2.1",
+        "1.2.2",
     ]
 )
 def test_resume_legacy_checkpoints(tmpdir, pl_version):
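
The three added strings extend the parametrize list that feeds `pl_version`, so the legacy-checkpoint test now also covers checkpoints produced by the 1.2.0, 1.2.1 and 1.2.2 releases, with each version running as its own test case. A minimal sketch of how such a parametrized test can be structured (the checkpoint directory layout and `LEGACY_CHECKPOINTS_PATH` below are illustrative assumptions, not the repository's actual code):

# Illustrative sketch only; the path and .ckpt layout are hypothetical.
import os

import pytest

LEGACY_CHECKPOINTS_PATH = "legacy/checkpoints"  # assumed fixture location


@pytest.mark.parametrize("pl_version", ["1.1.8", "1.2.0", "1.2.1", "1.2.2"])
def test_resume_legacy_checkpoints(tmpdir, pl_version):
    # One stored checkpoint directory per past release; each version in the
    # parametrize list becomes a separate test case, so a regression against
    # any single past release fails in isolation.
    ckpt_dir = os.path.join(LEGACY_CHECKPOINTS_PATH, pl_version)
    if not os.path.isdir(ckpt_dir):
        pytest.skip(f"no stored checkpoints for {pl_version}")
    ckpts = [name for name in os.listdir(ckpt_dir) if name.endswith(".ckpt")]
    assert ckpts, f"expected at least one .ckpt file for {pl_version}"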

0 commit comments

Comments
 (0)