Releases: Lightning-AI/pytorch-lightning
Lightning v2.6.1
Changes in 2.6.1
PyTorch Lightning
Added
- Added method chaining support to `LightningModule.freeze()` and `LightningModule.unfreeze()` by returning `self` (#21469)
- Added `litlogger` integration (#21430)
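Returning `self` from `freeze()` and `unfreeze()` enables fluent call chains. A minimal pure-Python sketch of the pattern — `DemoModule` is illustrative, not a Lightning class:

```python
# Sketch of the method-chaining pattern: freeze()/unfreeze() return self,
# so calls can be chained fluently. DemoModule stands in for a LightningModule.

class DemoModule:
    def __init__(self):
        self.frozen = False

    def freeze(self):
        """Disable gradient updates; return self to allow chaining."""
        self.frozen = True
        return self

    def unfreeze(self):
        """Re-enable gradient updates; return self to allow chaining."""
        self.frozen = False
        return self

# Chaining works because each call returns the module itself.
module = DemoModule().freeze().unfreeze().freeze()
print(module.frozen)  # True
```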
Deprecated
- Deprecated the `to_torchscript` method due to the deprecation of TorchScript in PyTorch (#21397)
Removed
- Removed support for Python 3.9 due to end-of-life status (#21398)
Fixed
- Fixed `save_hyperparameters(ignore=...)` behavior so subclass ignore rules override base class rules (#21490)
- Fixed `LightningDataModule.load_from_checkpoint` to restore the datamodule subclass and hyperparameters (#21478)
- Fixed `ModelParallelStrategy` single-file checkpointing when `torch.compile` wraps the model, so optimizer states no longer raise `KeyError` during save (#21357)
- Sanitized profiler filenames when saving to avoid crashes due to invalid characters (#21395)
- Fixed `StochasticWeightAveraging` with infinite epochs (#21396)
- Fixed the `_generate_seed_sequence_sampling` function not producing unique seeds (#21399)
- Fixed the `ThroughputMonitor` callback emitting warnings too frequently (#21453)
Lightning Fabric
Added
- Exposed the `weights_only` argument for loading checkpoints in `Fabric.load()` and `Fabric.load_raw()` (#21470)
Full commit list: 2.6.0 -> 2.6.1
Contributors
New Contributors
- @arrdel made their first contribution in #21402
- @CodeVishal-17 made their first contribution in #21470
- @aditya0by0 made their first contribution in #21478
We thank all folks who submitted issues, features, fixes and doc changes. It's the only way we can collectively make Lightning ⚡ better for everyone, nice job!
In particular, we would like to thank the authors of the pull-requests above
Lightning v2.6.0
Changes in 2.6.0
PyTorch Lightning
Added
- Added `WeightAveraging` callback that wraps the PyTorch `AveragedModel` class (#20545)
- Added Torch-TensorRT integration with `LightningModule` (#20808)
- Added time-based validation support through `val_check_interval` (#21071)
- Added attributes to access the stopping reason in the `EarlyStopping` callback (#21188)
- Added support for variable batch size in `ThroughputMonitor` (#20236)
- Added `EMAWeightAveraging` callback that wraps Lightning's `WeightAveraging` class (#21260)
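The idea behind EMA weight averaging is to keep a smoothed copy of the weights that trails the optimizer's raw updates. A minimal sketch of the underlying update rule, with plain floats standing in for model parameters (not Lightning's implementation):

```python
# Sketch of exponential moving average (EMA) weight averaging, the
# technique behind the EMAWeightAveraging callback. `decay` is a
# typical EMA coefficient close to 1.

def ema_update(averaged, current, decay=0.99):
    """Blend the running average toward the current weights."""
    return [decay * a + (1.0 - decay) * c for a, c in zip(averaged, current)]

weights = [1.0, 2.0]
averaged = list(weights)
for step in range(3):
    weights = [w + 1.0 for w in weights]      # pretend an optimizer step moved the weights
    averaged = ema_update(averaged, weights)  # EMA trails behind the raw weights
print(averaged)
```

With a decay near 1, the averaged weights change slowly, which smooths out noisy updates late in training.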
Changed
- Expose the `weights_only` argument for `Trainer.{fit,validate,test,predict}` and let `torch` handle the default value (#21072)
- Default to `RichProgressBar` and `RichModelSummary` if the `rich` package is available; fall back to `TQDMProgressBar` and `ModelSummary` otherwise (#20896)
- Add MPS accelerator support for mixed precision (#21209)
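The rich-or-tqdm default boils down to an availability check at import time. A minimal sketch of that selection logic, using an illustrative helper name rather than Lightning's internal code:

```python
# Sketch of the default-selection behavior: prefer the rich progress bar
# when the `rich` package is importable, otherwise fall back to the
# tqdm-based one. `pick_progress_bar` is an illustrative helper.

import importlib.util

def pick_progress_bar():
    """Return the name of the progress bar implementation to use."""
    if importlib.util.find_spec("rich") is not None:
        return "RichProgressBar"
    return "TQDMProgressBar"

print(pick_progress_bar())
```

Using `find_spec` avoids actually importing `rich` just to test for its presence.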
Fixed
- Fixed an edge case when `max_trials` is reached in `Tuner.scale_batch_size` (#21187)
- Fixed a case where `LightningCLI` could not be initialized with `trainer_default` containing callbacks (#21192)
- Fixed a missing reset when `ModelPruning` is applied with the lottery ticket hypothesis (#21191)
- Fixed recursive symlink creation when `save_last='link'` and `save_top_k=-1` (#21186)
- Fixed `last.ckpt` being created and not linked to another checkpoint (#21244)
- Fixed a bug that prevented `BackboneFinetuning` from being used together with `LearningRateFinder` (#21224)
- Fixed a `ModelPruning` sparsity logging bug that caused incorrect sparsity percentages (#21223)
- Fixed `LightningCLI` loading of hyperparameters from `ckpt_path` failing in subclass model mode (#21246)
- Fixed checking of init args to only occur when the given frames are in the `__init__` method (#21227)
- Fixed how `ThroughputMonitor` calculated training time (#21291)
- Fixed synchronization of gradients in manual optimization with `DDPStrategy(static_graph=True)` (#21251)
- Fixed FSDP mixed precision semantics and added a user warning (#21361)
Lightning Fabric
Full commit list: 2.5.0 -> 2.6.0
Contributors
We thank all folks who submitted issues, features, fixes and doc changes. It's the only way we can collectively make Lightning ⚡ better for everyone, nice job!
In particular, we would like to thank the authors of the pull-requests above
Lightning v2.5.6
Lightning v2.5.5
Changes in 2.5.5
PyTorch Lightning
Fixed
- Fixed `LightningCLI` not using `ckpt_path` hyperparameters to instantiate classes (#21116)
- Fixed callbacks by deferring step/time-triggered `ModelCheckpoint` saves until validation metrics are available (#21106)
- Fixed by adding a missing device id for PyTorch 2.8 (#21105)
- Fixed `TQDMProgressBar` not resetting correctly when using both a finite and an iterable dataloader (#21147)
- Fixed cleanup of temporary files from `Tuner` on crashes (#21162)
Lightning Fabric
Full commit list: 2.5.4 -> 2.5.5
Contributors
We thank all folks who submitted issues, features, fixes and doc changes. It's the only way we can collectively make Lightning ⚡ better for everyone, nice job!
In particular, we would like to thank the authors of the pull-requests above, in no particular order:
@Borda, @KAVYANSHTYAGI, @littlebullGit, @mauvilsa, @SkafteNicki, @taozhiwei
Thank you ❤️ and we hope you'll keep them coming!
Lightning v2.5.4
Changes in 2.5.4
PyTorch Lightning
Fixed
- Fixed `AsyncCheckpointIO` to snapshot tensors, avoiding a race with parameter mutation (#21079)
- Fixed an `AsyncCheckpointIO` thread-pool exception when calling fit or validate more than once (#20952)
- Fixed the learning rate not being correctly set after using the `LearningRateFinder` callback (#21068)
- Fixed column misalignment when using the rich model summary with the `DeepSpeed` strategy (#21100)
- Fixed `RichProgressBar` crashing when sanity checking with a zero-length val dataloader (#21108)
Lightning Fabric
Changed
- Added support for NVIDIA H200 GPUs in `get_available_flops` (#20913)
Full commit list: 2.5.3 -> 2.5.4
Contributors
We thank all folks who submitted issues, features, fixes and doc changes. It's the only way we can collectively make Lightning ⚡ better for everyone, nice job!
In particular, we would like to thank the authors of the pull-requests above, in no particular order:
@fnhirwa, @GdoongMathew, @jjh42, @littlebullGit, @SkafteNicki
Thank you ❤️ and we hope you'll keep them coming!
Lightning v2.5.3
Notable changes in this release
PyTorch Lightning
Changed
- Added `save_on_exception` option to the `ModelCheckpoint` callback (#20916)
- Allow `dataloader_idx_` in log names when `add_dataloader_idx=False` (#20987)
- Allow returning `ONNXProgram` when calling `to_onnx(dynamo=True)` (#20811)
- Extended support for general mappings being returned from `training_step` when using manual optimization (#21011)
Fixed
- Fixed the trainer not accepting a `CUDAAccelerator` instance as the accelerator with the FSDP strategy (#20964)
- Fixed progress bar console clearing for Rich 14.1+ (#21016)
- Fixed `AdvancedProfiler` to handle nested profiling actions for Python 3.12+ (#20809)
- Fixed a `rich` progress bar error when resuming training (#21000)
- Fixed a double iteration bug when resuming from a checkpoint (#20775)
- Fixed support for more dtypes in `ModelSummary` (#21034)
- Fixed metrics in `RichProgressBar` being updated according to the user-provided `refresh_rate` (#21032)
- Fixed `save_last` behavior in the absence of validation (#20960)
- Fixed the integration between `LearningRateFinder` and `EarlyStopping` (#21056)
- Fixed gradient calculation in `lr_finder` for `mode="exponential"` (#21055)
- Fixed `save_hyperparameters` crashing with `dataclasses` using `init=False` fields (#21051)
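The `init=False` dataclass issue comes from fields that are not constructor arguments, so hyperparameter capture must skip them. A minimal sketch of the distinction — `capture_init_args` is an illustrative helper, not Lightning's actual implementation:

```python
# Sketch of why init=False dataclass fields need special handling:
# they cannot be passed back to __init__, so capturing hyperparameters
# must filter them out via the field metadata.

import dataclasses

@dataclasses.dataclass
class Config:
    lr: float = 1e-3
    steps: int = 0
    # Not an __init__ argument; naively capturing it would break round-tripping.
    cache: dict = dataclasses.field(default_factory=dict, init=False)

def capture_init_args(obj):
    """Collect only the fields that can be passed back to __init__."""
    return {f.name: getattr(obj, f.name) for f in dataclasses.fields(obj) if f.init}

cfg = Config(lr=0.01, steps=100)
print(capture_init_args(cfg))  # cache is excluded
```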
Lightning Fabric
Full commit list: 2.5.2 -> 2.5.3
Contributors
We thank all folks who submitted issues, features, fixes and doc changes. It's the only way we can collectively make Lightning ⚡ better for everyone, nice job!
In particular, we would like to thank the authors of the pull-requests above, in no particular order:
@baskrahmer, @bhimrazy, @deependujha, @fnhirwa, @GdoongMathew, @jonathanking, @relativityhd, @rittik9, @SkafteNicki, @sudiptob2, @vsey, @YgLK
Thank you ❤️ and we hope you'll keep them coming!
Lightning v2.5.2
Notable changes in this release
PyTorch Lightning
Changed
- Add `toggled_optimizer(optimizer)` method to the LightningModule, which is a context manager version of `toggle_optimizer` and `untoggle_optimizer` (#20771)
- For cross-device local checkpoints, instruct users to install `fsspec>=2025.5.0` if unavailable (#20780)
- Check that the param is of `nn.Parameter` type for pruning sanitization (#20783)
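Wrapping a toggle/untoggle pair in a context manager guarantees the untoggle runs even if the block raises. A minimal sketch of that idea; the classes here are illustrative stand-ins, not Lightning's implementation:

```python
# Sketch of wrapping a toggle/untoggle pair in a context manager,
# the idea behind the toggled_optimizer API. DemoModule mimics the
# toggle_optimizer/untoggle_optimizer pair on a LightningModule.

from contextlib import contextmanager

class DemoModule:
    def __init__(self):
        self.toggled = None

    def toggle_optimizer(self, optimizer):
        self.toggled = optimizer

    def untoggle_optimizer(self, optimizer):
        self.toggled = None

    @contextmanager
    def toggled_optimizer(self, optimizer):
        """Toggle on entry, untoggle on exit, even if the body raises."""
        self.toggle_optimizer(optimizer)
        try:
            yield
        finally:
            self.untoggle_optimizer(optimizer)

module = DemoModule()
with module.toggled_optimizer("opt_a"):
    assert module.toggled == "opt_a"  # optimizer active inside the block
print(module.toggled)  # None — untoggled on exit
```

The `try/finally` is what makes the context-manager form safer than calling the two methods by hand.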
Fixed
- Fixed `save_hyperparameters` not working correctly with `LightningCLI` when parsing links are applied on instantiation (#20777)
- Fixed an edge case in `logger_connector` where the step can be a float (#20692)
- Fixed SIGTERM handling synchronization in DDP to prevent deadlocks (#20825)
- Fixed case-sensitive model name (#20661)
- CLI: resolved a jsonargparse deprecation warning (#20802)
- Fix: move `check_inputs` to the target device if available during `to_torchscript` (#20873)
- Fixed the progress bar display to correctly handle iterable datasets and `max_steps` during training (#20869)
- Fixed a problem with silently supporting `jsonnet` (#20899)
Lightning Fabric
Changed
- Ensure the correct device is used for autocast when MPS is selected as the Fabric accelerator (#20876)
Fixed
- Fix: `TransformerEnginePrecision` conversion for layers with `bias=False` (#20805)
Full commit list: 2.5.1 -> 2.5.2
Contributors
We thank all folks who submitted issues, features, fixes, and doc changes. It's the only way we can collectively make Lightning ⚡ better for everyone, nice job!
In particular, we would like to thank the authors of the pull-requests above, in no particular order:
@adamjstewart, @Armannas, @bandpooja, @Borda, @chanokin, @duydl, @GdoongMathew, @KAVYANSHTYAGI, @mauvilsa, @muthissar, @rustamzh, @siemdejong
Thank you ❤️ and we hope you'll keep them coming!
Lightning v2.5.1.post
Full Changelog: 2.5.1...2.5.1.post0
Lightning v2.5.1
Changes
PyTorch Lightning
Changed
- Allow LightningCLI to use a customized argument parser class (#20596)
- Change `wandb` default x-axis to tensorboard's `global_step` when `sync_tensorboard=True` (#20611)
- Added a new `checkpoint_path_prefix` parameter to the MLflow logger, which controls the path where the MLflow artifacts for model checkpoints are stored (#20538)
- Updated the CometML logger to support the recent Comet SDK (#20275)
- Bumped testing to the latest `torch` 2.6 (#20509)
Fixed
- Fixed CSVLogger logging hyperparameter at every write which increases latency (#20594)
- Fixed OverflowError when resuming from checkpoint with an iterable dataset (#20565)
- Fixed swapped `_R_co` and `_P` to prevent a type error (#20508)
- Always call `WandbLogger.experiment` first in `_call_setup_hook` to ensure `tensorboard` logs can sync to `wandb` (#20610)
- Fixed the TBPTT example (#20528)
- Fixed test compatibility as `AdamW` became a subclass of `Adam` (#20574)
- Fixed the file extension of model checkpoints uploaded by `NeptuneLogger` (#20581)
- Reset the trainer variable `should_stop` when `fit` is called (#19177)
- Fixed `WandbLogger` to upload models from all `ModelCheckpoint` callbacks, not just one (#20191)
- Fixed an error when logging to a deleted MLflow experiment (#20556)
Lightning Fabric
Removed
- Removed legacy support for `lightning run model`; use `fabric run` instead (#20588)
Full commit list: 2.5.0 -> 2.5.1
Contributors
We thank all folks who submitted issues, features, fixes and doc changes. It's the only way we can collectively make Lightning ⚡ better for everyone, nice job!
In particular, we would like to thank the authors of the pull-requests above, in no particular order:
@benglewis, @Borda, @cgebbe, @duydl, @haifeng-jin, @japdubengsub, @justusschock, @lantiga, @mauvilsa, @millskyle, @ringohoffman, @ryan597, @senarvi, @TresYap
Thank you ❤️ and we hope you'll keep them coming!
Lightning v2.5.0.post0
Full Changelog: 2.5.0...2.5.0.post0