Using BaseFinetuning callback with DeepSpeed
#9774
Unanswered
aakaashjois
asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Replies: 0 comments
I am trying to use the `MilestonesFinetuning` callback as given in this example. I am using DeepSpeed to distribute the job across 8 GPUs. When `.fit()` starts, an error is raised by `BaseFinetuning.__apply_mapping_to_param_groups()`.

My understanding is that DeepSpeed changes how the optimizer is partitioned across the GPUs, and this might be what causes the problem here. Has anyone experienced a similar issue and found a solution? I had to revert to `DDP` for now to get past this error.