Update dependency pytorch-lightning to v1.6.0 [SECURITY] #340
This PR contains the following updates:

| Package | Change |
| --- | --- |
| pytorch-lightning | `==1.4.9` -> `==1.6.0` |
**Warning**: Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
GitHub Vulnerability Alerts
CVE-2021-4118
pytorch-lightning is vulnerable to Deserialization of Untrusted Data.
CVE-2022-0845
PyTorch Lightning versions 1.5.10 and prior are vulnerable to code injection. An attacker could execute commands on the host operating system by setting the `PL_TRAINER_GPUS` environment variable when using the `Trainer` module. A patch is included in the `1.6.0` release.

Release Notes
Lightning-AI/lightning (pytorch-lightning)

v1.6.0: PyTorch Lightning 1.6: Support Intel's Habana Accelerator, New efficient DDP strategy (Bagua), Manual Fault-tolerance, Stability and Reliability.
The core team is excited to announce the PyTorch Lightning 1.6 release ⚡
Highlights
PyTorch Lightning 1.6 is the work of 99 contributors who have worked on features, bug-fixes, and documentation for a total of over 750 commits since 1.5. This is our most active release yet. Here are some highlights:
Introducing Intel's Habana Accelerator
Lightning 1.6 now supports the Habana® framework, which includes Gaudi® AI training processors. Their heterogeneous architecture includes a cluster of fully programmable Tensor Processing Cores (TPC) along with its associated development tools and libraries and a configurable Matrix Math engine.
You can leverage the Habana hardware to accelerate your Deep Learning training workloads simply by passing:
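A minimal sketch of what this looks like, assuming the `accelerator="hpu"` flag documented for Lightning 1.6:

```python
import pytorch_lightning as pl

# Train on a single Gaudi device
trainer = pl.Trainer(accelerator="hpu", devices=1)

# Distributed training across 8 Gaudi devices
trainer = pl.Trainer(accelerator="hpu", devices=8)
```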
The Bagua Strategy
The Bagua Strategy is a deep learning acceleration framework that supports multiple, advanced distributed training algorithms with state-of-the-art system relaxation techniques. Enabling Bagua, which can be considerably faster than vanilla PyTorch DDP, is as simple as:
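A hedged sketch, assuming the `strategy="bagua"` shorthand and the `BaguaStrategy` class added in 1.6 (#11146):

```python
import pytorch_lightning as pl
from pytorch_lightning.strategies import BaguaStrategy

# Shorthand: use Bagua with its default algorithm
trainer = pl.Trainer(strategy="bagua", accelerator="gpu", devices=4)

# Or configure the algorithm explicitly
trainer = pl.Trainer(
    strategy=BaguaStrategy(algorithm="gradient_allreduce"),
    accelerator="gpu",
    devices=4,
)
```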
Towards stable Accelerator, Strategy, and Plugin APIs
The `Accelerator`, `Strategy`, and `Plugin` APIs are a core part of PyTorch Lightning. They're where all the distributed boilerplate lives, and we're constantly working to improve both them and the overall PyTorch Lightning platform experience.

In this release, we've made some large changes to achieve that goal. Not to worry, though! The only users affected by these changes are those who use custom implementations of Accelerator and Strategy (`TrainingTypePlugin`) as well as certain Plugins. In particular, we want to highlight the following changes:

All `TrainingTypePlugin`s have been renamed to `Strategy` (#11120). Strategy is a more appropriate name because it encompasses more than simply training communication. This change is now aligned with the changes we implemented in 1.5, which introduced the new `strategy` and `devices` flags to the Trainer.
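As a hedged before/after sketch of the rename, using `DDPPlugin`/`DDPStrategy` as the example (the `find_unused_parameters` argument is just illustrative):

```python
import pytorch_lightning as pl

# Before (1.5): training-type plugins lived under pytorch_lightning.plugins
from pytorch_lightning.plugins import DDPPlugin

trainer = pl.Trainer(strategy=DDPPlugin(find_unused_parameters=False))

# New (1.6): the same classes are Strategies under pytorch_lightning.strategies
from pytorch_lightning.strategies import DDPStrategy

trainer = pl.Trainer(strategy=DDPStrategy(find_unused_parameters=False))
```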
The `Accelerator` and `PrecisionPlugin` have moved into `Strategy`. All strategies now take an optional parameter `accelerator` and `precision_plugin` (#11022, #10570).

Custom Accelerator implementations must now implement two new abstract methods: `is_available()` (#11797) and `auto_device_count()` (#10222). The latter determines how many devices get used by default when specifying `Trainer(accelerator=..., devices="auto")`.
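A hedged sketch of a custom accelerator under the new contract; `MyAccelerator` is hypothetical and the other abstract methods of `Accelerator` are omitted for brevity:

```python
from pytorch_lightning.accelerators import Accelerator


class MyAccelerator(Accelerator):
    # Only the two new abstract methods are shown; a real subclass must
    # implement the remaining abstract methods as well.

    @staticmethod
    def is_available() -> bool:
        # Report whether the backing hardware can be used on this machine.
        return True

    @staticmethod
    def auto_device_count() -> int:
        # How many devices to use when Trainer(accelerator=..., devices="auto") is passed.
        return 1
```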
We redesigned the process creation for spawn-based strategies such as `DDPSpawnStrategy` and `TPUSpawnStrategy` (#10896). All spawn-based strategies now spawn processes immediately upon calling `Trainer.{fit,validate,test,predict}`, which means the hooks/callbacks `prepare_data`, `setup`, `configure_sharded_model` and `teardown` all run under an initialized process group. These changes align the spawn-based strategies with their non-spawn counterparts (such as `DDPStrategy`).
We've also exposed the process group backend for use. For example, you can now easily enable `fairring` like this:
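A hedged sketch, assuming the `process_group_backend` argument exposed on `DDPStrategy` in 1.6 (the CHANGELOG below deprecates the old `PL_TORCH_DISTRIBUTED_BACKEND` route in its favor, #11745):

```python
import pytorch_lightning as pl
from pytorch_lightning.strategies import DDPStrategy

# Use Fairring as the distributed process group backend
trainer = pl.Trainer(
    strategy=DDPStrategy(process_group_backend="fairring"),
    accelerator="gpu",
    devices=8,
)
```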
In a similar fashion, if installing `torch>=1.11`, you can enable DDP static graph to apply special runtime optimizations:
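A hedged sketch, assuming the `static_graph` argument on `DDPStrategy`:

```python
import pytorch_lightning as pl
from pytorch_lightning.strategies import DDPStrategy

# Requires torch>=1.11
trainer = pl.Trainer(
    strategy=DDPStrategy(static_graph=True),
    accelerator="gpu",
    devices=4,
)
```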
LightningCLI improvements

In the previous release, we added shorthand notation support for registered components. In this release, we added a flag to automatically register all available components:
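A minimal sketch of the new flag; `MyModel` and `MyDataModule` are placeholders for your own classes:

```python
from pytorch_lightning.utilities.cli import LightningCLI

from my_project import MyDataModule, MyModel  # placeholders for your own classes

# auto_registry=True registers every available subclass of the registerable
# components (callbacks, loggers, optimizers, LR schedulers, ...) so they can be
# referenced with shorthand notation on the command line.
cli = LightningCLI(MyModel, MyDataModule, auto_registry=True)
```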
We have also added support for the `ReduceLROnPlateau` scheduler with shorthand notation. If you need to customize the learning rate scheduler configuration, you can do so by overriding:
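A hedged sketch covering both points; the command-line flags and the `configure_optimizers` signature follow the 1.6 `LightningCLI` docs as best recalled, so treat the exact names as assumptions:

```python
# Shorthand from the command line, e.g.:
#   python trainer.py fit --lr_scheduler=ReduceLROnPlateau --lr_scheduler.monitor=val_loss
from pytorch_lightning.utilities.cli import LightningCLI


class MyLightningCLI(LightningCLI):
    @staticmethod
    def configure_optimizers(lightning_module, optimizer, lr_scheduler=None):
        # Return whatever LightningModule.configure_optimizers would return.
        if lr_scheduler is None:
            return optimizer
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": lr_scheduler, "monitor": "val_loss"},
        }
```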
Finally, loggers are also now configurable with shorthand notation.
Control SLURM's re-queueing
We've added the ability to turn the automatic resubmission on or off when a job gets interrupted by the SLURM controller (via signal handling). Users who prefer to let their code handle the resubmission (for example, when submitit is used) can now pass:
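A minimal sketch using the `SLURMEnvironment(auto_requeue=...)` flag listed in the CHANGELOG below (#10601):

```python
import pytorch_lightning as pl
from pytorch_lightning.plugins.environments import SLURMEnvironment

# Let your own tooling (e.g. submitit) handle resubmission instead of Lightning
trainer = pl.Trainer(plugins=[SLURMEnvironment(auto_requeue=False)])
```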
Fault-tolerance improvements
Fault-tolerant training under manual optimization now tracks optimization progress. We also changed the graceful exit signal from `SIGUSR1` to `SIGTERM` for better support inside cloud instances.

An additional feature we're excited to announce is support for consecutive `trainer.fit()` calls.

Loop customization improvements
The `Loop`'s state is now included as part of the checkpoints saved by the library. This enables finer restoration of custom loops.

We've also made it easier to replace Lightning's loops with your own. For example:
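A hedged sketch of `Loop.replace` (#10324); `MyEpochLoop` is a hypothetical custom loop:

```python
import pytorch_lightning as pl
from pytorch_lightning.loops import TrainingEpochLoop


class MyEpochLoop(TrainingEpochLoop):
    """A custom epoch loop; override the hooks you need."""


trainer = pl.Trainer()
# Swap Lightning's default epoch loop for the custom one, connecting it in place.
trainer.fit_loop.replace(epoch_loop=MyEpochLoop)
trainer.fit(model)  # `model` is your LightningModule
```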
Data-Loading improvements
In previous versions, Lightning required that the `DataLoader` instance set its input arguments as instance attributes. This meant that custom `DataLoader`s also had this hidden requirement. In this release, we do this automatically for the user, easing the passing of custom loaders (see the sketch below).

As of this release, Lightning no longer pre-fetches 1 extra batch if it doesn't need to. Previously, doing so would conflict with the internal pre-fetching done by optimized data loaders such as FFCV's. You can now define your own pre-fetching value.
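A hedged sketch of the relaxed `DataLoader` requirement; the class and argument names are illustrative:

```python
from torch.utils.data import DataLoader


class CustomDataLoader(DataLoader):
    def __init__(self, dataset, custom_arg=None, **kwargs):
        # Pre-1.6, Lightning required storing init arguments as attributes
        # (e.g. self.custom_arg = custom_arg) so it could re-instantiate the
        # loader, for example to inject a DistributedSampler.
        # In 1.6 the init arguments are captured automatically.
        super().__init__(dataset, **kwargs)
```

The loader can then be passed to `trainer.fit(...)` like any other `DataLoader`.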
New Hooks
LightningModule.lr_scheduler_step
Lightning now allows the use of custom learning rate schedulers that aren't natively available in PyTorch. A great example of this is Timm Schedulers.
When using custom learning rate schedulers relying on an API other than PyTorch's, you can now define the `LightningModule.lr_scheduler_step` with your desired logic.
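A hedged sketch using a `timm` scheduler (assumes the `timm` package is installed); the hook signature shown follows the 1.6 documentation as best recalled:

```python
import torch
import pytorch_lightning as pl
from timm.scheduler import CosineLRScheduler


class LitModel(pl.LightningModule):
    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        scheduler = CosineLRScheduler(optimizer, t_initial=100)
        return [optimizer], [scheduler]

    def lr_scheduler_step(self, scheduler, optimizer_idx, metric):
        # timm schedulers are stepped with an explicit epoch index instead of
        # the torch.optim.lr_scheduler `.step()` convention.
        scheduler.step(epoch=self.current_epoch)
```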
A new stateful API

This release introduces new hooks to standardize all stateful components to use `state_dict` and `load_state_dict`, mimicking the PyTorch API. The new hooks receive their own component's state and replace most usages of the previous `on_save_checkpoint` and `on_load_checkpoint` hooks.
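A hedged sketch of the new stateful hooks on a callback; the callback itself is illustrative:

```python
from pytorch_lightning.callbacks import Callback


class BatchCounter(Callback):
    def __init__(self):
        self.batches_seen = 0

    # `unused` kept to match the 1.6 hook signature
    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, unused=0):
        self.batches_seen += 1

    def state_dict(self):
        # Saved into the checkpoint together with the rest of the training state.
        return {"batches_seen": self.batches_seen}

    def load_state_dict(self, state_dict):
        self.batches_seen = state_dict["batches_seen"]
```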
New properties

Trainer.estimated_stepping_batches
You can use the built-in `Trainer.estimated_stepping_batches` to compute the total number of stepping batches needed for the complete training. The property takes the gradient accumulation factor and distributed setting into consideration when performing this computation, so that you don't have to derive it manually:
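For example, it can be used to set the total number of steps for a scheduler such as `OneCycleLR` (a sketch along the lines of the 1.6 docs):

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.parameters(), lr=0.1)
        scheduler = torch.optim.lr_scheduler.OneCycleLR(
            optimizer,
            max_lr=0.1,
            # Accounts for accumulation and the distributed setting automatically
            total_steps=self.trainer.estimated_stepping_batches,
        )
        return [optimizer], [scheduler]
```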
Trainer.num_devices and Trainer.device_ids

In the past, retrieving the number of devices used, or their IDs, posed a considerable challenge. Additionally, doing so required knowing which property to access based on the current `Trainer` configuration. To simplify this, we've deprecated the per-accelerator properties in favor of accelerator-agnostic ones. For example:
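A sketch of the before/after access patterns; the deprecated names come from the CHANGELOG entries below:

```python
# given an existing `trainer`

# Before: the property depended on the accelerator in use
num_devices = trainer.num_gpus  # or trainer.num_processes, trainer.ipus, ...
device_ids = trainer.data_parallel_device_ids

# New: accelerator-agnostic
num_devices = trainer.num_devices
device_ids = trainer.device_ids
```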
Experimental Features
Manual Fault-tolerance
Fault Tolerance has limitations that require specific information about your data-loading structure.
It is now possible to resolve those limitations by enabling manual fault tolerance where you can write your own logic and specify how exactly to checkpoint your own datasets and samplers. You can do so using this environment flag:
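A hedged sketch; the accepted value is assumed from the `_FaultTolerantMode` enum mentioned in the CHANGELOG below:

```python
import os

# Enable the manual fault-tolerance mode before the Trainer is constructed,
# i.e. the equivalent of running `PL_FAULT_TOLERANT_TRAINING=manual python train.py`.
os.environ["PL_FAULT_TOLERANT_TRAINING"] = "manual"
```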
Check out this video for a dive into the internals of this flag.
Customizing the layer synchronization
We introduced a new plugin class for wrapping layers of a model with synchronization logic for multiprocessing.
Registering Custom Accelerators
There has been much progress in the field of ML Accelerators, and the list of accelerators is constantly expanding.
We've made it easier for users to try out new accelerators by enabling support for registering custom `Accelerator` classes in Lightning.

Backward Incompatible Changes
Here is a selection of notable changes that are not backward compatible with previous versions. The full list of changes and removals can be found in the CHANGELOG below.
Drop PyTorch 1.7 support
Following our policy of supporting the four most recent PyTorch releases, this release supports PyTorch 1.8 to 1.11. Support for PyTorch 1.7 has been removed.
Drop Python 3.6 support
Following Python's end-of-life, support for Python 3.6 has been removed.
AcceleratorConnector rewrite

To support the new accelerator and strategy features, we completely rewrote our internal `AcceleratorConnector` class. No backwards compatibility was maintained, so it is likely to have broken your code if it was using this class.

Re-define the current_epoch boundary

To resolve fault-tolerance issues, we changed where the current epoch value gets increased.
`trainer.current_epoch` is now increased by 1 at `on_train_end`. This means that if a model is run for 3 epochs (0, 1, 2), `trainer.current_epoch` will now return 3 instead of 2 after `trainer.fit()`. This can also impact custom callbacks that access this property inside this hook.

This also impacts checkpoints saved during an epoch (e.g. `on_train_epoch_end`). For example, a `Trainer(max_epochs=1, limit_train_batches=1)` instance that saves a checkpoint will have the `current_epoch=0` value saved instead of `current_epoch=1`.

Re-define the global_step boundary

To resolve fault-tolerance issues, we changed where the global step value gets increased.
Access to `trainer.global_step` during an intra-training validation hook will now correctly return the number of optimizer steps taken already; see the pseudocode sketch below.

Saved checkpoints that use the global step value as part of the filename are now increased by 1 for the same reason. A checkpoint saved after 1 step will now be named `step=1.ckpt` instead of `step=0.ckpt`.

The `trainer.global_step` value will now account for TBPTT or multiple optimizers. Users setting `Trainer({min,max}_steps=...)` under these circumstances will need to adjust their values.
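A hedged pseudocode sketch of the new accounting, consistent with the CHANGELOG entry stating that the value is now increased right after `optimizer.step()`:

```python
# pseudocode
global_step = 0
for batch in train_dataloader:
    loss = training_step(batch)
    optimizer.step()
    global_step += 1          # 1.6: incremented right after optimizer.step()
    if should_run_validation():
        run_validation()      # trainer.global_step now equals the optimizer steps taken
```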
Removed automatic reduction of outputs in training_step when using DataParallel

When using `Trainer(strategy="dp")`, all the tensors returned by `training_step` were previously reduced to a scalar (https://github.com/PyTorchLightning/pytorch-lightning/pull/11594). This behavior was especially confusing when outputs needed to be collected into the `training_epoch_end` hook.

From now on, outputs are no longer reduced except for the `loss` tensor, unless you implement `training_step_end`, in which case the loss won't get reduced either.

No longer fallback to CPU with no devices
Previous versions were lenient in that the lack of GPU devices defaulted to running on CPU. This meant that users' code could be running much slower without them ever noticing that it was running on CPU.
We suggest passing `Trainer(accelerator="auto")` when this leniency is desired.

CHANGELOG
Added
- `MLFlowLogger` (#12290)
- `backward_passes_per_step` (#11911)
- `DETAIL` log level to provide useful logs for improving monitoring and debugging of batch jobs (#11008)
- `SLURMEnvironment(auto_requeue=True|False)` to control whether Lightning handles the requeuing (#10601)
- `_Stateful` protocol to detect if classes are stateful (#10646)
- `_FaultTolerantMode` enum used to track different supported fault tolerant modes (#10645)
- `_rotate_worker_indices` utility to reload the state according the latest worker (#10647)
- `_terminate_gracefully` to all processes and add support for DDP (#10638)
- `DataLoaders` returned in the `*_dataloader()` methods, i.e., automatic replacement of samplers now works with custom types of `DataLoader` (#10680)
- `DataLoader` implementation is not well implemented and we need to reconstruct it (#10719)
- `Loop`'s state by default in the checkpoint (#10784)
- `Loop.replace` to easily switch one loop for another (#10324)
- `--lr_scheduler=ReduceLROnPlateau` to the `LightningCLI` (#10860)
- `LightningCLI.configure_optimizers` to override the `configure_optimizers` return value (#10860)
- `LightningCLI(auto_registry)` flag to register all subclasses of the registerable components automatically (#12108)
- `max_epochs` in the `Trainer` is not set (#10700)
- `LightningModule.configure_callbacks` without wrapping it into a list (#11060)
- `console_kwargs` for `RichProgressBar` to initialize inner Console (#10875)
- `LightningCLI` (#11533)
- `LOGGER_REGISTRY` instance to register custom loggers to the `LightningCLI` (#11533)
- `Trainer` arguments `limit_*_batches`, `overfit_batches`, or `val_check_interval` are set to `1` or `1.0` (#11950)
- `PrecisionPlugin.teardown` method (#10990)
- `LightningModule.lr_scheduler_step` (#10249)
- `DataFetcher` (#11606)
- `optimizer.step`. This can be useful for `LightningLite` users, manual optimization users, or users overriding `LightningModule.optimizer_step` (#11711)
- `MisconfigurationException` if user provided `opt_idx` in scheduler config doesn't match with actual optimizer index of its respective optimizer (#11247)
- `loggers` property to `Trainer` which returns a list of loggers provided by the user (#11683)
- `loggers` property to `LightningModule` which retrieves the `loggers` property from `Trainer` (#11683)
- `CombinedLoader` for the training data (#11648)
- `DistributedSampler` during validation/testing (#11479)
- `Bagua` training strategy (#11146)
- `poptorch.DataLoader` in a `*_dataloader` hook (#12116)
- `rank_zero` module to centralize utilities (#11747)
- `_Stateful` support for `LightningDataModule` (#11637)
- `_Stateful` support for `PrecisionPlugin` (#11638)
- `Accelerator.is_available` to check device availability (#11797)
- `Trainer` (#11888)
- `nn.Module` with `save_hyperparameters()` (#12068)
- `estimated_stepping_batches` property to `Trainer` (#11599)
- `on_load_checkpoint` / `on_save_checkpoint` callback and LightningModule hooks (#12149)
- `LayerSync` and `NativeSyncBatchNorm` plugins (#11754)
- `storage_options` argument to `Trainer.save_checkpoint()` to pass to custom `CheckpointIO` implementations (#11891)
- `device_ids` and `num_devices` property to `Trainer` (#12151)
- `Callback.state_dict()` and `Callback.load_state_dict()` methods (#12232)
- `AcceleratorRegistry` (#12180)
- `apply_to_collections` (#11889)

Changed
- `benchmark` flag optional and set its value based on the deterministic flag (#11944)
- `_print_results` method of the `EvaluationLoop` (#11332)
- `EvaluationLoop` (#12427)
- `prog_bar` flag to False in `LightningModule.log_grad_norm` (#11472)
- `init_dist_connection()` when torch distributed is not available (#10418)
- `monitor` argument in the `EarlyStopping` callback is no longer optional (#10328)
- `MisconfigurationException` when `enable_progress_bar=False` and a progress bar instance has been passed in the callback list (#10520)
- `trainer.connectors.env_vars_connector._defaults_from_env_vars` to `utilities.argsparse._defaults_from_env_vars` (#10501)
- `LightningCLI` required for the new major release of jsonargparse v4.0.0 (#10426)
- `refresh_rate_per_second` parameter to `refresh_rate` for `RichProgressBar` signature (#10497)
- `PrecisionPlugin` into `TrainingTypePlugin` and updated all references (#10570)
- `signal.SIGTERM` to gracefully exit instead of `signal.SIGUSR1` (#10605)
- `Loop.restarting=...` now sets the value recursively for all subloops (#11442)
- `batch_size` cannot be inferred from the current batch if it contained a string or was a custom batch object (#10541)
- `overfit_batches > 0` is set in the Trainer (#9709)
- `Accelerator` to `TrainingTypePlugin` (#10596)
- `Trainer` to the `Strategy` (#11444)
- `batch_to_device` method from `Accelerator` to `TrainingTypePlugin` (#10649)
- `DDPSpawnPlugin` no longer overrides the `post_dispatch` plugin hook (#10034)
- `LightningModule.{add_to_queue,get_from_queue}` hooks no longer get a `torch.multiprocessing.SimpleQueue` and instead receive a list based queue (#10034)
- `training_step`, `validation_step`, `test_step` and `predict_step` method signatures in `Accelerator` and updated input from caller side (#10908)
- `DDPSpawnPlugin` and related plugins save (#10934)
- `LoggerCollection` returns only unique logger names and versions (#10976)
- `DDPSpawnPlugin`, `TPUSpawnPlugin`, etc.) (#10896)
  - `Trainer.{fit,validate,test,predict}`
  - `prepare_data`, `setup`, `configure_sharded_model` and `teardown` now run under initialized process group for spawn-based plugins just like their non-spawn counterparts
  - `MisconfigurationException`s will now be raised as `ProcessRaisedException` (torch>=1.8) or as `Exception` (torch<1.8)
- `TrainingTypePlugin.pre_dispatch()` method and merged it with `TrainingTypePlugin.setup()` (#11137)
- `batch_to_device` entry in profiling from stage-specific to generic, to match profiling of other hooks (#11031)
- `NeptuneLogger` (#11015)
- `__getstate__` and `__setstate__` of `RichProgressBar` (#11100)
- `DDPPlugin` and `DDPSpawnPlugin` and their subclasses now remove the `SyncBatchNorm` wrappers in `teardown()` to enable proper support at inference after fitting (#11078)
- `Accelerator` instance to the `TrainingTypePlugin`; all training-type plugins now take an optional parameter `accelerator` (#11022)
- `TrainingTypePlugin` to `Strategy` (#11120)
- `ParallelPlugin` to `ParallelStrategy` (#11123)
- `DataParallelPlugin` to `DataParallelStrategy` (#11183)
- `DDPPlugin` to `DDPStrategy` (#11142)
- `DDP2Plugin` to `DDP2Strategy` (#11185)
- `DDPShardedPlugin` to `DDPShardedStrategy` (#11186)
- `DDPFullyShardedPlugin` to `DDPFullyShardedStrategy` (#11143)
- `DDPSpawnPlugin` to `DDPSpawnStrategy` (#11145)
- `DDPSpawnShardedPlugin` to `DDPSpawnShardedStrategy` (#11210)
- `DeepSpeedPlugin` to `DeepSpeedStrategy` (#11194)
- `HorovodPlugin` to `HorovodStrategy` (#11195)
- `TPUSpawnPlugin` to `TPUSpawnStrategy` (#11190)
- `IPUPlugin` to `IPUStrategy` (#11193)
- `SingleDevicePlugin` to `SingleDeviceStrategy` (#11182)
- `SingleTPUPlugin` to `SingleTPUStrategy` (#11182)
- `TrainingTypePluginsRegistry` to `StrategyRegistry` (#11233)
- `ResultCollection`, `ResultMetric`, and `ResultMetricCollection` classes as protected (#11130)
- `trainer.checkpoint_connector` as protected (#11550)
- `FitLoop` instead of the `TrainingEpochLoop` (#11201)
- `Strategy` classes to the `strategies` directory (#11226)
- `training_type_plugin` file to `strategy` (#11239)
- `DeviceStatsMonitor` to group metrics based on the logger's `group_separator` (#11254)
- `UserWarning` if evaluation is triggered with `best` ckpt and trainer is configured with multiple checkpoint callbacks (#11274)
- `Trainer.logged_metrics` now always contains scalar tensors, even when a Python scalar was logged (#11270)
- `MisconfigurationException` to `ModuleNotFoundError` when `rich` isn't available (#11360)
- `trainer.current_epoch` value is now increased by 1 during and after `on_train_end` (#8578)
- `trainer.global_step` value now accounts for multiple optimizers and TBPTT splits (#11805)
- `trainer.global_step` value is now increased right after the `optimizer.step()` call which will impact users who access it during an intra-training validation hook (#11805)
- `ModelCheckpoint(filename='{step}')` is different compared to previous versions. A checkpoint saved after 1 step will be named `step=1.ckpt` instead of `step=0.ckpt` (#11805)
- `ABC` for `Accelerator`: Users need to implement `auto_device_count` (#11521)
- `parallel_devices` property in `ParallelStrategy` to be lazy initialized (#11572)
- `TQDMProgressBar` to run a separate progress bar for each eval dataloader (#11657)
- `SimpleProfiler(extended=False)` summary based on mean duration for each hook (#11671)
- `shuffle=False` for eval dataloaders (#11575)
- `training_step_end` is overridden (#11594)
- `training_epoch_end` hook will no longer receive reduced outputs from `training_step` and instead get the full tensor of results from all GPUs (#11594)
- `lightning_logs` for consistency (#11762)
- `accelerator_connector` (#11448)
- `find_unused_parameters=True` (#12425)
- `limit_batches=0` (#11576)
- `is_global_zero` check in `training_epoch_loop` before `logger.save`. If you have a custom logger that implements `save` the Trainer will now call `save` on all ranks by default. To change this behavior add `@rank_zero_only` to your `save` implementation (#12134)
- `trainer.logger_connector` as protected (#12195)
- `Strategy.process_dataloader` function call from `fit/evaluation/predict_loop.py` to `data_connector.py` (#12251)
- `ModelCheckpoint(save_last=True, every_n_epochs=N)` now saves a "last" checkpoint every epoch (disregarding `every_n_epochs`) instead of only once at the end of training (#12418)
- `sync_batchnorm` now only apply it when fitting (#11919)
- `supporters.py` so that in the accumulator element (for loss) is created directly on the device (#12430)
- `EarlyStopping.on_save_checkpoint` and `EarlyStopping.on_load_checkpoint` in favor of `EarlyStopping.state_dict` and `EarlyStopping.load_state_dict` (#11887)
- `BaseFinetuning.on_save_checkpoint` and `BaseFinetuning.on_load_checkpoint` in favor of `BaseFinetuning.state_dict` and `BaseFinetuning.load_state_dict` (#11887)
- `BackboneFinetuning.on_save_checkpoint` and `BackboneFinetuning.on_load_checkpoint` in favor of `BackboneFinetuning.state_dict` and `BackboneFinetuning.load_state_dict` (#11887)
- `ModelCheckpoint.on_save_checkpoint` and `ModelCheckpoint.on_load_checkpoint` in favor of `ModelCheckpoint.state_dict` and `ModelCheckpoint.load_state_dict` (#11887)
- `Timer.on_save_checkpoint` and `Timer.on_load_checkpoint` in favor of `Timer.state_dict` and `Timer.load_state_dict` (#11887)

Deprecated
- `training_type_plugin` property in favor of `strategy` in `Trainer` and updated the references (#11141)
- `Trainer.{validated,tested,predicted}_ckpt_path` and replaced with read-only property `Trainer.ckpt_path` set when checkpoints loaded via `Trainer.{fit,validate,test,predict}` (#11696)
- `ClusterEnvironment.master_{address,port}` in favor of `ClusterEnvironment.main_{address,port}` (#10103)
- `DistributedType` in favor of `_StrategyType` (#10505)
- `precision_plugin` constructor argument from `Accelerator` (#10570)
- `DeviceType` in favor of `_AcceleratorType` (#10503)
- `Trainer.slurm_job_id` in favor of the new `SLURMEnvironment.job_id()` method (#10622)
- `IndexBatchSamplerWrapper.batch_indices` in favor of `IndexBatchSamplerWrapper.seen_batch_indices` (#10870)
- `on_init_start` and `on_init_end` callback hooks (#10940)
- `Trainer.call_hook` in favor of `Trainer._call_callback_hooks`, `Trainer._call_lightning_module_hook`, `Trainer._call_ttp_hook`, and `Trainer._call_accelerator_hook` (#10979)
- `TrainingTypePlugin.post_dispatch` in favor of `TrainingTypePlugin.teardown` (#10939)
- `ModelIO.on_hpc_{save/load}` in favor of `CheckpointHooks.on_{save/load}_checkpoint` (#10911)
- `Trainer.run_stage` in favor of `Trainer.{fit,validate,test,predict}` (#11000)
- `Trainer.lr_schedulers` in favor of `Trainer.lr_scheduler_configs` which returns a list of dataclasses instead of dictionaries (#11443)
- `Trainer.verbose_evaluate` in favor of `EvaluationLoop(verbose=...)` (#10931)
- `Trainer.should_rank_save_checkpoint` Trainer property (#11068)
- `Trainer.lightning_optimizers` (#11444)
- `TrainerOptimizersMixin` and moved functionality to `core/optimizer.py` (#11155)
- `on_train_batch_end(outputs)` format when multiple optimizers are used and TBPTT is enabled (#12182)
- `training_epoch_end(outputs)` format when multiple optimizers are used and TBPTT is enabled (#12182)
- `TrainerCallbackHookMixin` (#11148)
- `TrainerDataLoadingMixin` and moved functionality to `Trainer` and `DataConnector` (#11282)
- `pytorch_lightning.callbacks.device_stats_monitor.prefix_metric_keys` (#11254)
- `Callback.on_epoch_start` hook in favour of `Callback.on_{train/val/test}_epoch_start` (#11578)
- `Callback.on_epoch_end` hook in favour of `Callback.on_{train/val/test}_epoch_end` (#11578)
- `LightningModule.on_epoch_start` hook in favor of `LightningModule.on_{train/val/test}_epoch_start` (#11578)
- `LightningModule.on_epoch_end` hook in favor of `LightningModule.on_{train/val/test}_epoch_end` (#11578)
- `on_before_accelerator_backend_setup` callback hook in favour of `setup` (#11568)
- `on_batch_start` and `on_batch_end` callback hooks in favor of `on_train_batch_start` and `on_train_batch_end` (#11577)
- `on_configure_sharded_model` callback hook in favor of `setup` (#11627)
- `pytorch_lightning.utilities.distributed.rank_zero_only` in favor of `pytorch_lightning.utilities.rank_zero.rank_zero_only` (#11747)
- `pytorch_lightning.utilities.distributed.rank_zero_debug` in favor of `pytorch_lightning.utilities.rank_zero.rank_zero_debug` (#11747)
- `pytorch_lightning.utilities.distributed.rank_zero_info` in favor of `pytorch_lightning.utilities.rank_zero.rank_zero_info` (#11747)
- `pytorch_lightning.utilities.warnings.rank_zero_warn` in favor of `pytorch_lightning.utilities.rank_zero.rank_zero_warn` (#11747)
- `pytorch_lightning.utilities.warnings.rank_zero_deprecation` in favor of `pytorch_lightning.utilities.rank_zero.rank_zero_deprecation` (#11747)
- `pytorch_lightning.utilities.warnings.LightningDeprecationWarning` in favor of `pytorch_lightning.utilities.rank_zero.LightningDeprecationWarning`
- `on_pretrain_routine_start` and `on_pretrain_routine_end` callback hooks in favor of `on_fit_start` (#11794)
- `LightningModule.on_pretrain_routine_start` and `LightningModule.on_pretrain_routine_end` hooks in favor of `on_fit_start` (#12122)
- `agg_key_funcs` and `agg_default_func` parameters from `LightningLoggerBase` (#11871)
- `LightningLoggerBase.update_agg_funcs` (#11871)
- `LightningLoggerBase.agg_and_log_metrics` in favor of `LightningLoggerBase.log_metrics` (#11832)
- `weights_save_path` to the `Trainer` constructor in favor of adding the `ModelCheckpoint` callback with `dirpath` directly to the list of callbacks (#12084)
- `pytorch_lightning.profiler.AbstractProfiler` in favor of `pytorch_lightning.profiler.Profiler` (#12106)
- `pytorch_lightning.profiler.BaseProfiler` in favor of `pytorch_lightning.profiler.Profiler` (#12150)
- `BaseProfiler.profile_iterable` (#12102)
- `LoggerCollection` in favor of `trainer.loggers` (#12147)
- `PrecisionPlugin.on_{save,load}_checkpoint` in favor of `PrecisionPlugin.{state_dict,load_state_dict}` (#11978)
- `LightningDataModule.on_save/load_checkpoint` in favor of `state_dict/load_state_dict` (#11893)
- `Trainer.use_amp` in favor of `Trainer.amp_backend` (#12312)
- `LightningModule.use_amp` in favor of `Trainer.amp_backend` (#12315)
- `PL_TORCH_DISTRIBUTED_BACKEND` (#11745)
- `ParallelPlugin.torch_distributed_backend` in favor of `DDPStrategy.process_group_backend` property (#11745)
- `ModelCheckpoint.save_checkpoint` in favor of `Trainer.save_checkpoint` (#12456)
- `Trainer.devices` in favor of `Trainer.num_devices` and `Trainer.device_ids` (#12151)
- `Trainer.root_gpu` in favor of `Trainer.strategy.root_device.index` when GPU is used (#12262)
- `Trainer.num_gpus` in favor of `Trainer.num_devices` when GPU is used (#12384)
- `Trainer.ipus` in favor of `Trainer.num_devices` when IPU is used (#12386)
- `Trainer.num_processes` in favor of `Trainer.num_devices` (#12388)
- `Trainer.data_parallel_device_ids` in favor of `Trainer.device_ids` (#12072)

Configuration
📅 Schedule: Branch creation - "" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.